<1. Background> <1.1. Immigration Enforcement Priorities> Priority Enforcement Program. Under PEP, which was in effect from January 5, 2015 until February 20, 2017, DHS personnel were directed to, among other things, prioritize the apprehension, detention, and removal from the United States of aliens who pose a threat to national security, border security, and public safety, including convicted felons. It further directed DHS personnel to prioritize for removal new immigration violators and those who had been issued a final order of removal on or after January 1, 2014, and to exercise prosecutorial discretion, as appropriate, in accordance with these priorities and existing guidance. A 2011 ICE memorandum identified factors to consider when exercising prosecutorial discretion, such as the length of the individual's presence in the United States, whether the person or the person's immediate relative has served in the U.S. military, and humanitarian considerations such as personal or family illness, among other factors. Executive Order 13768. Executive Order 13768, issued on January 25, 2017, focuses on immigration enforcement within the United States. Among other things, the executive order lays out the administration's immigration enforcement priorities for removable aliens. Specifically, the executive order prioritizes for removal from the United States aliens who are removable based on certain criminal and security grounds in the Immigration and Nationality Act, as well as removable aliens who have been convicted of, charged with, or committed acts that constitute a criminal offense; have engaged in fraud or otherwise abused any government program; or who are determined to otherwise pose a risk to public safety or national security. In addition, it calls for the termination of PEP and the reinstitution of Secure Communities. See table 1 for a description of enforcement priorities for the removal of aliens from the United States under PEP and Executive Order 13768. The Secretary of Homeland Security issued the 2017 DHS memo to implement Executive Order 13768. According to the 2017 DHS memo, in addition to the priorities outlined in the executive order, the Director of ICE, Commissioner of CBP, and Director of U.S. Citizenship and Immigration Services may allocate resources to prioritize enforcement activities as they deem appropriate, such as by prioritizing enforcement actions against convicted felons or gang members. ICE issued a memo further directing efforts to implement the executive order and apply the guidance from the 2017 DHS memo. The ICE memo stated that ICE was to review all existing policies and guidance documents and revise or rescind relevant policies in order to ensure consistency with the executive order. In addition, ICE's Office of the Principal Legal Advisor (OPLA) issued additional guidance to OPLA attorneys to implement the 2017 DHS memo. OPLA is responsible for providing legal advice, training, and services to support the ICE mission, and for defending the interests of the United States in the administrative and federal courts, including immigration court proceedings. See figure 1 for a timeline of the DHS memoranda and Executive Order establishing immigration enforcement priorities from 2015 to 2018. Prosecutorial Discretion. 
Prosecutorial discretion is the longstanding authority of an agency charged with enforcing a law to decide where to focus its resources and whether or how to enforce, or not to enforce, the law against an individual. Due to limited resources, ICE cannot respond to all immigration violations or remove all persons who are determined to be in the United States without legal status, and therefore, must exercise prosecutorial discretion in the enforcement of the law. In accordance with the DHS, ICE, and OPLA memos, agents and officers are to exercise prosecutorial discretion on a case-by-case basis based on the individual facts presented in consultation with the head of the field office, and prosecutorial discretion is not to be exercised in a manner that exempts or excludes a specified class or category of foreign nationals from enforcement of the immigration laws. <1.2. Agency Roles and Responsibilities> ICE s ERO conducts civil immigration enforcement actions, which includes administrative arrests, detentions, and removals. Arrests. ERO arrests aliens for civil violations of U.S. immigration laws. Through the Criminal Alien Program, ICE identifies and arrests potentially removable aliens who are incarcerated within federal, state, and local prisons and jails. The National Fugitive Operations Program identifies and arrests removable aliens who are at-large. ICE does not detain all aliens it arrests, due to lack of bed space, among other factors. To inform custody decisions for aliens who are arrested and not subject to mandatory detention, ICE guidance requires officers to consider certain factors, including risk of flight, risk of harm to public safety, and special vulnerabilities. For example, individuals with a physical or mental illness or disability, or individuals who fear being harmed in detention based on their sexual orientation or gender identity may be considered for release or alternatives to detention (ATD) based on these special vulnerabilities. The ATD program requires that, among other things, aliens released into the community agree to appear at all hearings and report to ICE periodically. Non-detained Unit. ERO is also responsible for supervising and ensuring that aliens who are not held in detention facilities comply with requirements to appear in immigration court for their administrative removal proceedings. ICE uses one or more release options when it determines that an alien can be released from ICE custody including bond, order of recognizance, order of supervision, parole, and on condition of participation in the ATD program. Total ATD enrollment numbers ranged from about 29,000 in calendar year 2015 to over 78,000 in calendar year 2018. ICE does not track specific characteristics of individuals enrolled in ATD programs, including aliens who are pregnant, nursing, disabled, elderly, primary caregivers of minor children, among others. ICE may also release aliens on bond or an order of recognizance who do not pose a threat to public safety, present a low risk of flight, and who are not required to be detained. In addition, in rare instances, ICE may release an alien on an order of supervision when there is no significant likelihood of removal in the reasonably foreseeable future. For example, ICE may not be able to coordinate travel arrangements for certain aliens with final orders of removal who are from countries with which the United States does not have repatriation agreements. 
An alien subject to a final order of deportation or removal may also request a stay of deportation or removal. ICE may also release certain aliens on parole for urgent humanitarian reasons or significant public benefit, or for a medical emergency or legitimate law enforcement objective, on a case-by-case basis. Detentions. ICE is responsible for providing safe, secure, and humane confinement for detained aliens in the United States who may be subject to removal while they await the resolution of their immigration cases or who have been ordered removed from the United States. This includes aliens transferred to ICE from CBP who were apprehended at or between ports of entry. In fiscal year 2019, ERO oversaw the detention of aliens in 147 facilities authorized to house detainees for over 72 hours. ICE manages these facilities in conjunction with private contractors, state and local governments, and through contract with another federal agency. Within ERO, ICE Health Service Corps (IHSC) is responsible for providing direct medical, dental, mental health care, and public health services to detainees in 20 facilities authorized to house detainees for over 72 hours. Facilities serviced by IHSC include service processing centers, contract detention facilities, dedicated intergovernmental service agreement facilities, and family residential centers. IHSC medical staff are to monitor and implement policy provisions related to pregnant and mentally ill detainees. At detention facilities that are not staffed with IHSC personnel, similar services are provided by local government staff or private contractors and overseen by ICE. Removals. ICE removes aliens who have been determined to be removable and not eligible for any requested relief or protection pursuant to an administrative final order of removal. A removal is defined as the compulsory and confirmed movement of an inadmissible or deportable alien out of the United States. ICE removals include both aliens arrested by ICE and aliens who were apprehended by CBP and transferred to ICE. ERO operates across 24 areas of responsibility nationwide and each area of responsibility is led by a field office director. Each ERO field office director is required by ICE policy to designate supervisory level employees to serve, as a collateral duty, as field liaisons for their area of responsibility tasked with monitoring and implementing the provisions of policies for certain selected populations. These field liaison roles include the LGBTI Field Liaison, Child Welfare Field Point of Contact, Supporting Disability Access Coordinator, and Juvenile Coordinator. In addition to ERO and OPLA, ICE Homeland Security Investigations (HSI) conducts worksite enforcement operations among other law enforcement operations such as oversight of the Student and Exchange Visitor program. This includes arresting undocumented workers and employers who knowingly hire them. We did not include HSI worksite enforcement arrests in our analysis of ICE arrest data because we were unable to identify the number of unique arrests in these data for the purpose of depicting general arrest trends. <2. ERO Arrests, Detentions, and Removals Varied during Calendar Years 2015 through 2018, Increasing Overall> ERO arrests, detentions, and removals varied during calendar years 2015 through 2018, and increased overall for the period, as shown in figure 2. 
Specifically, males, aliens from four countries (Mexico, Guatemala, El Salvador, and Honduras), and convicted criminals accounted for the majority of ICE arrests and removals. The majority of ICE detentions were made up of males, aliens from the same four countries, and non-criminals. See appendix II for additional information on ERO arrests, detentions, and removals by gender, country of citizenship, arresting agency, and criminality. ERO Arrests. The number of ERO arrests varied from calendar years 2015 through 2018 but increased overall from 112,870 in 2015 to 151,497 in 2018 (see figure 2 above). Male aliens, citizens of four countries (Mexico, Guatemala, El Salvador, and Honduras), and arrests of aliens from state and local jails through the Criminal Alien Program accounted for the majority of these arrests each year from 2015 through 2018. Further, ERO arrests increased in all ERO areas of responsibility from calendar years 2015 and 2016, when PEP was in effect, to calendar years 2017 and 2018, following implementation of the 2017 DHS memo. Arrests of convicted criminals accounted for the majority of arrests in all areas of responsibility during both periods. However, as shown in figure 3, the proportion of arrests of convicted criminals decreased in each area of responsibility due to an increased number of arrests of non-criminals following the implementation of the 2017 DHS memo. See appendix II for additional information on ERO arrests by gender, country of citizenship, arresting agency, and criminality. ERO Detentions. The number of ERO detentions varied from calendar years 2015 through 2018 but increased overall from 324,320 in 2015 to 438,258 in 2018. Male aliens and citizens of four countries (Mexico, Guatemala, El Salvador, and Honduras) collectively accounted for most ERO detentions. The majority of detentions resulted from CBP arrests at or between ports of entry. While the number of ERO detentions of convicted criminals stayed relatively stable from 2015 to 2018, the number of detentions of non-criminals increased from 171,856 in 2015 to 279,469 in 2018 and accounted for the majority of ERO detentions each year, as shown in figure 4. See appendix II for additional information on detentions by gender, country of citizenship, arresting agency, and criminality. For the purposes of this report and our presentation of ICE data, we refer to potentially removable aliens without criminal convictions known to ICE as "non-criminals" and aliens with criminal convictions known to ICE as "convicted criminals." According to ICE officials, administrative arrests of non-criminals include individuals who have been charged but not convicted of a crime as well as those with no prior criminal history. According to ICE, ICE officers electronically request and retrieve criminal history information about an alien from the FBI's National Crime Information Center database, which maintains a repository of federal and state criminal history information, and other sources. We used ICE's determination of criminality for our analysis. ERO Removals. The number of ERO removals varied from calendar years 2015 through 2018 but increased overall from 231,559 in 2015 to 261,523 in 2018. Male aliens and citizens of four countries (Mexico, Guatemala, El Salvador, and Honduras) collectively accounted for most ERO removals. The majority of removals resulted from CBP arrests at or between ports of entry. 
While removals of both convicted criminals and non-criminals increased overall, removals of convicted criminals accounted for the majority of removals each year (see figure 5). See appendix II for additional information on removals by gender, country of citizenship, arresting agency, and criminality. <3. ICE Has Operational Policies for Certain Selected Populations, and Revised Its Policies As Needed to Align with the 2017 DHS Memo> According to ICE officials, in early 2018, ERO conducted a review of all existing policies and related documents to help ensure alignment with the 2017 DHS memo, resulting in operational policies related to six of the eight selected populations discussed in this report. The six policies in effect as of July 2019 for the selected populations provide direction and guidance to ERO officers on the identification, detention, care, and removal of aliens who are: individuals with mental disorders, transgender, individuals with disabilities, parents of minors, pregnant, and juveniles. Of the six policies in effect, three were not impacted by the 2017 DHS memo and ERO did not make changes to these policies; two were impacted by the 2017 DHS memo and were revised to remove language ERO determined to be inconsistent with the memo; and guidance on managing juveniles was first issued after the 2017 DHS memo. For the remaining two populations, ERO does not have a separate policy on the care provided to detainees who are nursing, and, as a result of the policy review, rescinded a prior policy related to exercising prosecutorial discretion for elderly individuals, as shown in figure 6. Individuals with Mental Disorders. In May 2014, ICE issued a memo titled Identification of Detainees with Serious Mental Disorders or Conditions, which sets forth procedures to assist ICE and detention facility personnel in identifying detainees with serious mental disorders or conditions in order to assess appropriate facility placement and treatment. To identify individuals with mental disorders, ICE's national detention standards require facilities to conduct an initial medical screening for all detainees, including a documented mental health screening, a 14-day full medical assessment with mental health components, and timely referral for follow-up mental evaluations, diagnosis, and treatment. ICE's policy also requires detention facilities to notify ICE field office directors of detainees with specified serious mental disorders. In addition, the policy requires that relevant personnel meet regularly to monitor the cases of detainees with serious mental disorders until their removal or release. ERO officials in all six areas of responsibility we visited said that these meetings are conducted weekly or biweekly with attorneys, medical staff, and ERO management staff to discuss and evaluate each detainee's medical care and security needs. According to ICE, this memo did not need to be revised to align with the 2017 DHS memo. Our analysis of ICE data shows that the number of detentions of individuals with mental disorders at IHSC-staffed facilities varied from calendar years 2015 through 2018 but increased overall from 8,513 to 8,796 individuals. Transgender Individuals. In June 2015, ICE issued a memo titled Further Guidance Regarding the Care of Transgender Detainees, which provides guidance regarding the placement and care of transgender adult detainees in ERO custody. 
This memo provides guidance for initial processing of transgender detainees who voluntarily disclose their gender identity to detention officers. Further, when a detainee self-identifies as transgender, the memo directs ERO officers to make individualized placement determinations to ensure the detainee s safety, and to ensure the facility chosen for placement is able to provide appropriate care for the individual, and to the extent practicable to consider the availability of medical personnel who have experience providing care and treatment to transgender detainees, including the delivery of hormone therapy. This memo also directs ERO to designate a National LGBTI Coordinator to serve as the primary point of contact and subject matter expert for ERO regarding the care and treatment of detainees in ERO custody who self- identify as transgender. Specifically, the National LGBTI Coordinator is to evaluate and report information from all relevant ICE data systems regarding the demographics, care, and custody of transgender detainees and ensure field compliance with the provisions of this memo, among other things. Further, each field office is required to have a LGBTI Field Liaison, appointed by the Field Office Director. The memo directs LGBTI Field Liaisons to provide regular updates to the national ERO LGBTI Coordinator and ERO Headquarters on the progress of implementing and maintaining the provisions of this memo, which includes determining the appropriateness of facilities to house transgender detainees. In addition, the memo requires certain detention facilities to convene a meeting no later than 72 hours after a transgender detainee s arrival to the facility to assess medical, psychological, and housing needs. During our site visits, officers in three of the six areas of responsibility we visited said that they conduct these meetings with relevant ERO management staff and medical officials in accordance with the memo. According to ICE officials, the transgender care memo did not need to be revised to align with the 2017 DHS Memo. The transgender care memo states that field office directors may exercise prosecutorial discretion for transgender individuals who are not subject to mandatory detention. Field ERO officers in five of the six areas of responsibility we visited explained that ERO generally does not detain transgender individuals unless their criminal histories warrant detention, in accordance with the memo. Specifically, officers in three of these five areas of responsibility reported that transgender individuals are likely to be released on bond or under an order of supervision. However, in the sixth area of responsibility, one ERO officer observed an increase in the detention of transgender individuals beginning in early 2017, which the official attributed to the revised priorities described in the 2017 DHS memo. In addition, attorneys from three NGOs we met with also observed an increase in the detention of transgender individuals or described ongoing challenges related to a decrease in the availability of dedicated transgender housing facilities. They also provided anecdotes of transgender clients who had been detained or who experienced challenges obtaining access to appropriate medical care while in detention. Our analysis of ICE data shows that the number of detentions of transgender individuals increased from 237 in calendar year 2016 to 284 in calendar year 2018. 
While ICE does not have separate policies for aliens who are lesbian, gay, bisexual, or intersex, the national LGBTI coordinator and LGBTI field liaisons also serve as subject matter experts for the care and treatment of these detainees. In addition, the transgender care memo prohibits discrimination or harassment of any kind based on a detainee s sexual orientation or gender identity. As such, ERO officers may take steps to protect a detainee who expresses safety concerns based on their sexual orientation, according to ERO officials. According to ERO officers in five of the six areas of responsibility we visited, they do not ask detainees about sexual orientation unless the individual voluntarily discloses this information. Additionally, ERO officers in the same five areas of responsibility stated that they do not take sexual orientation into consideration for detention or housing decisions, unless an individual specifically requests protective custody due to safety concerns or harassment. Individuals with Disabilities. In December 2016, ERO issued a directive titled Assessment and Accommodations for Detainees with Disabilities, which establishes policy and procedures for ERO to oversee and communicate with detention facilities on the identification, assessment, and accommodation of detainees with disabilities. According to this directive, ERO field leadership is to notify detention facilities in each area of responsibility of their existing obligations under federal law to accommodate detainees with disabilities. These obligations include maintaining a process to identify these detainees through observation, assessments, screenings, and self-identification; notifying detainees of their right to request accommodations; and establishing a process to inform a detainee of the final decision on the request for accommodations, including whether the facility will provide alternative accommodations that are equally effective as those requested; among other things. In addition, this directive requires ERO to designate an ERO disability access coordinator who is to serve as the primary point of contact and subject matter expert for ERO headquarters and the field regarding the accommodation of, and communication with, detainees with disabilities in ERO custody. Among other duties, the ERO disability access coordinator is responsible for evaluating information from all relevant ICE data systems regarding the identification, care, approved accommodations and custody of detainees with disabilities; as well as maintaining records of detainees with communication and mobility impairments, including records of denials of detainee requests for accommodations by facilities. According to the directive, detainees with communication impairments include detainees with hearing, visual, and speech impairments (e.g., detainees who are deaf or hard of hearing, blind, or nonverbal). Detainees with mobility impairments include detainees with physical impairments who require a wheelchair, crutches, prosthesis, cane, other mobility device, or other assistance. Accommodations for these impairments may include accessible showers, Braille material, or note takers for persons with physical and sensory impairments, among other things. The ERO disability access coordinator is also responsible for helping to ensure compliance with the provisions of this directive. 
Field office directors are required to appoint at least one supervisory-level employee to serve as the supporting disability access coordinator for each area of responsibility. Supporting disability access coordinators are responsible for serving as the main point of contact for their field office regarding compliance with federal law and DHS, ICE, and ERO regulations, detention standards, policies, and procedures related to detainees with disabilities. Supporting disability access coordinators are also responsible for collaborating and communicating with ERO headquarters, field office, detention facility, and health care personnel to monitor the care and treatment of detainees with disabilities, among other things. In all six areas of responsibility we visited, supporting disability access coordinators and medical staff told us that they track detainees who receive accommodations for communication and mobility impairments by recording the accommodation on a form that they submit to ERO headquarters. According to ICE, the Assessment and Accommodations for Detainees with Disabilities directive did not need to be revised to align with the 2017 DHS Memo. This directive states that it is meant to implement and complement the requirements of Section 504 of the Rehabilitation Act of 1973 and states that detainees with disabilities will be provided an equal opportunity to access, participate in, or benefit from in-custody programs, services, and activities, and that detainees with disabilities will be provided with auxiliary aids and services as necessary to allow for effective communication. Further, the directive states that a field office director may consider releasing from ICE custody a detainee with an impairment or disability who is not subject to mandatory detention. ERO officers in five areas of responsibility we visited reported that they consult with the supporting disability access coordinator, medical staff, or a supervisor to determine whether local detention facilities are able to provide appropriate accommodations. Our analysis of ICE data shows that the number of detentions of individuals with communication and mobility impairments increased from 434 to 530 in calendar years 2017 to 2018. Parents or Legal Guardians of Minors. In August 2017, ICE issued a policy titled Detention and Removal of Alien Parents or Legal Guardians, which provides guidance regarding the detention and removal of alien parents and legal guardians, including those with children who are U.S. citizens and legal permanent residents and parents with ongoing cases in family court or child welfare proceedings in the United States. This policy directs ERO to designate a child welfare coordinator to serve as the primary point of contact and subject matter expert for all ICE personnel regarding child welfare issues related to detained alien parents. The child welfare coordinator is also responsible for evaluating information from all relevant ICE data systems regarding detained alien parents or legal guardians of U.S. citizen and legal permanent resident minors and sharing appropriate information with field points of contact, among other things. Specifically, this policy directs field office directors to make appropriate arrangements for detained parents to attend child welfare proceedings. ERO officers in three of the six areas of responsibility we visited stated that they coordinate visits to family courts for the detained parent to appear at these hearings. 
The field office director in each area of responsibility is to designate a field point of contact to communicate with the child welfare coordinator and address public inquiries related to detained parents or legal guardians in ERO custody. The August 2017 policy superseded an August 2013 policy titled Facilitating Parental Interests in the Course of Civil Immigration Enforcement Activities, which ERO revised to align with the 2017 DHS memo. In the revised policy, ERO removed language indicating that field office directors should weigh whether an exercise of prosecutorial discretion may be warranted for an alien who is a parent or legal guardian of a U.S. citizen or legal permanent resident minor or is a primary caretaker of a minor, and to exercise such discretion as early as possible. ERO officers in five of the six areas of responsibility we visited stated that they typically do not detain parents of minors, unless criminal history warrants detention. Attorneys we met with from an NGO that provides services to immigrant families and refugees stated that they have observed an increase in the number and length of detentions of parents or legal guardians of minors since January 2017. We were not able to identify trends in the detention of parents because ERO does not collect or maintain data on this population in a readily available format. Pregnant Women. In December 2017, ICE issued a directive titled Identification and Monitoring of Pregnant Detainees, which sets forth policy and procedures to ensure pregnant detainees in ICE custody for immigration violations are identified, monitored, tracked, and housed in an appropriate facility to manage their care. According to ICE policy on women's health, pregnant women are identified upon arrival to a detention facility because all women of childbearing age undergo a pregnancy test during intake processing. According to the December 2017 directive, IHSC personnel are responsible for notifying the field office director and IHSC headquarters, as soon as practical, when a pregnant detainee is identified; monitoring the condition of pregnant detainees, including the general health of the pregnant detainee and medical condition of the fetus; and communicating with the field office director about any specific risk factors or concerns. In addition, IHSC personnel are to provide oversight and review of facility capabilities to determine if the needs of a pregnant detainee can be accommodated and recommend to the field office director when a transfer to another facility is necessary for appropriate medical care. Further, IHSC personnel are to develop and maintain a system for tracking and monitoring all pregnant detainees. This policy superseded an August 2016 version with the same title, which ERO revised to align with the 2017 DHS memo, according to ICE officials. In the revised version, ERO removed language stating that, absent extraordinary circumstances, pregnant women will generally not be detained by ICE. In five of the six areas of responsibility we visited, ERO officers stated that unless mandatory detention is required, they still generally avoid detaining pregnant women. In addition, ERO officers in all six areas of responsibility we visited stated that they are less likely to detain and may release a woman who is having a high-risk pregnancy or who is in the third trimester of her pregnancy. 
However, an official in the sixth area of responsibility noted that under the revised policy, a pregnant woman may be detained during the third trimester if she is likely to be removed quickly and has medical clearance to fly. Officers in two of the six areas of responsibility we visited noted that pregnant women may also be released on bond, under an order of supervision, or through other non-detention options, after assessing the facts of the case. Attorneys and policy advocates we met with from three NGOs that represent a range of immigrant populations stated that they have observed increases in the detention of pregnant women since January 2017. Attorneys from another NGO we met with provided anecdotes of cases of pregnant detainees who experienced medical challenges, including miscarriages, while in custody. Our analysis of ICE data shows that the number of detentions of pregnant women varied, but increased overall, from 1,380 in calendar year 2016 to 2,098 in calendar year 2018. Juveniles. In April 2018, ICE issued the Field Office Juvenile Coordinator Handbook to guide ERO staff in processing, transporting, managing, and removing juveniles (persons encountered by ERO who have not reached 18 years of age). Field office juvenile coordinators, who serve as local subject-matter experts on juvenile matters for each area of responsibility, provide policy guidance to ERO personnel within their areas of responsibility, and assist with case review and custody redeterminations. Field office juvenile coordinators are also required to coordinate with other federal agencies, including the Office of Refugee Resettlement, where juveniles designated as unaccompanied alien children are typically transferred. According to ERO policy, unaccompanied alien children apprehended by ERO or transferred into ERO custody by CBP are to be placed in the care of the Office of Refugee Resettlement within 72 hours of identification, if they are not repatriated at the border. The Field Office Juvenile Coordinator Handbook was released after, and aligns with, the 2017 DHS memo. According to officers in four of the six areas of responsibility we visited, ERO does not target juveniles for arrest, unless they have criminal records. For example, officers we met with in one area of responsibility stated that ERO typically does not target juveniles in that location, unless they are affiliated with gangs, because they are unlikely to pose a public safety threat. Our analysis of ICE data shows that the number of arrests of juveniles varied, but increased overall, from calendar years 2015 through 2018. We excluded juveniles from our analysis of individual ICE detention data because ICE is generally not responsible for detaining juveniles, as discussed above. Nursing Women. While ICE does not have a separate policy on the care, detention, or removal of women who are nursing, the 2017 Directive on Women's Health Services provides guidance to IHSC staff on the delivery and administration of health services to this population. According to this directive, women who are nursing are identified during initial processing before being placed into custody at a detention facility because ERO officials and medical personnel are required to ask women if they are breastfeeding. Medical personnel make recommendations pertaining to the detention of women who are nursing, and in most cases, these detainees are placed in IHSC-staffed facilities. 
IHSC personnel record and use this information to monitor the care and needs of women who are nursing, according to IHSC officials. In five of the six areas of responsibility we visited, officers stated that they typically do not detain women who are nursing, unless their criminal histories warrant detention. Specifically, health officials in one of the five areas of responsibility explained that if a nursing mother is detained, she is typically released within a few hours or placed on bond or order of supervision. Our analysis of ICE data shows that the number of detentions of nursing women at IHSC-staffed facilities varied from calendar years 2015 through 2018 but increased overall from 157 in 2015 to 381 in 2018. Elderly Individuals. ICE no longer has a policy guiding the detention or care of elderly detainees. According to ICE guidance on assessing individuals with special vulnerabilities during the intake process, ICE generally considers someone to be elderly starting at age 65. However, the guidance instructs agents and officers to assess whether these individuals have physical indicators of infirmity or fragility caused by old age when making decisions regarding detaining or releasing them. In February 2018, as part of its effort to align internal policies with the 2017 DHS memo, ERO rescinded a 2009 policy directing officers to administratively close cases of non-criminal fugitives who are 70 years old or older for humanitarian/health reasons. ERO officers in five of the six areas of responsibility we visited reported that they do not target individuals who are elderly and continue to consider criminal history and medical condition when deciding whether to detain them. For example, officials in one of these five areas of responsibility explained that someone who committed an aggravated felony would be subject to mandatory detention regardless of age, but if the individual has a serious medical condition, such as advanced cancer, ERO may decide to release them from custody because the agency would be responsible for the cost of their medical treatments while they are in custody. Officers in the sixth area of responsibility said they have started to detain individuals who are elderly following the issuance of the 2017 DHS memo, but noted that they coordinate with the courts to expedite these hearings before an immigration judge who may order the release of an elderly detainee. Attorneys we met with from a NGO that provides services to immigrant families and refugees stated that they have observed an increase in detentions of individuals who are elderly, and only those with serious medical issues were considered for release. Our analysis of ICE data shows that the number of detentions of individuals who were elderly varied, increasing overall, from 882 in calendar year 2015 to 1159 in calendar year 2018. <4. Data Indicate Detentions of Selected Populations Varied, Increasing Overall; but ICE Lacks Readily Available Data on All Detained Parents or Legal Guardians of Minors> Available ICE data show that detentions of most of the selected populations in our review varied between calendar years 2015 and 2018. Specifically, detentions of transgender individuals and pregnant women increased from calendar years 2016 to 2018, after ICE began collecting data for these populations. Similarly, detentions of individuals with disabilities increased from 2017 to 2018, after ICE began collecting data for this population. 
Detentions of individuals with mental disorders and nursing women at IHSC-staffed facilities varied from calendar years 2015 to 2018. Finally, detentions of individuals who were elderly varied, increasing overall during the same timeframe. We were unable to obtain data on parents or legal guardians of minors in ICE custody because ICE does not collect or maintain data on this population in a readily available format. <4.1. ICE Data Show Detentions of Most Selected Populations Varied, Increasing Overall> <4.1.1. Detentions of Transgender Individuals Increased from 2016 through 2018; the Majority Resulted from CBP Arrests and Were Detentions of Non-Criminals> ICE began collecting and maintaining data on transgender individuals who voluntarily disclose their gender identity to ICE officers in November 2015, as previously discussed. ERO officials said they use these data to monitor the placement and care of transgender individuals in ICE custody, in accordance with ICE's memo on Further Guidance Regarding the Care of Transgender Detainees. These data show that the number of detentions of transgender individuals increased from calendar years 2016 through 2018, as shown in table 2. Detentions resulting from CBP arrests accounted for about half of the total detentions of transgender individuals in 2016 and 2017, increasing to 69 percent in 2018. Also shown in table 2, detentions of non-criminal transgender individuals increased from calendar years 2016 through 2018, increasing from 46 percent of total detentions of transgender individuals in 2016 to 71 percent in 2018. Detentions of non-criminal transgender individuals include both detentions of individuals with pending criminal charges (ranging from 12 to 24 percent) and individuals with no recorded criminal history (ranging from 76 to 88 percent). Detentions resulting from CBP arrests comprised most of these detentions (ranging from 77 to 91 percent). Detentions of transgender individuals with criminal convictions decreased over the same period, and most resulted from ICE arrests (ranging from 71 to 84 percent). For the purposes of this report and our presentation of ICE data, we refer to potentially removable aliens without criminal convictions known to ICE as "non-criminals" and aliens with criminal convictions known to ICE as "convicted criminals." According to ICE officials, administrative arrests of non-criminals include individuals who have been charged with, but not convicted of, a crime (we refer to these as aliens with "pending criminal charges"), as well as those with no prior criminal history (we refer to these as aliens with "no recorded criminal history"). According to ICE, ICE officers electronically request and retrieve criminal history information about an alien from the FBI's National Crime Information Center database, which maintains a repository of federal and state criminal history information, and other sources. We used ICE's determination of criminality for our analysis. <4.1.2. Detentions of Individuals with Disabilities Increased from 2017 to 2018; the Majority Resulted from ICE Arrests and Were Detentions of Convicted Criminals> ICE began collecting and maintaining data on certain detainees with disabilities (i.e., those with communication and mobility impairments who disclosed their impairment or who were identified by facility staff as having an impairment) in January 2017, in accordance with its directive, titled Assessment and Accommodations for Detainees with Disabilities. 
These data show that detentions of individuals with disabilities increased from calendar years 2017 to 2018, as shown in table 3. Detentions resulting from ICE arrests accounted for the majority of these detentions (70 percent in 2017 and over 50 percent in 2018). Also shown in table 3, detentions of convicted criminals with disabilities decreased from calendar years 2017 to 2018, and accounted for the majority of total detentions of this population (67 percent in 2017 and 53 percent in 2018). Most of these detentions resulted from ICE arrests (89 percent in 2017 and 72 percent in 2018). Detentions of non-criminals in this population increased from calendar years 2017 to 2018. Detentions of individuals with no recorded criminal history accounted for most detentions of non-criminals in this population (71 percent in 2017 and 79 percent in 2018), and the majority resulted from CBP arrests (68 percent in 2017 and 74 percent in 2018). ICE began collecting and maintaining data on pregnant women in ICE's custody in June 2015. IHSC officials said they use these data to monitor the condition of pregnant women in ICE custody, including the term of the pregnancy, general health of the pregnant detainee, and medical conditions of the fetus, in accordance with ICE's directive on Identification and Monitoring of Pregnant Detainees. These data show that the number of detentions of pregnant women varied, but increased overall from calendar years 2016 through 2018, as shown in table 4. Detentions resulting from CBP arrests accounted for most of the total detentions of pregnant women each year (ranging from 90 to 96 percent). Also shown in table 4, detentions of non-criminal pregnant women varied from calendar years 2016 through 2018, but increased overall. Detentions of non-criminal pregnant women accounted for most of the total detentions of pregnant women each year (ranging from 91 to 97 percent), and detentions of women with no recorded criminal history accounted for almost all of these detentions (ranging from 96 to 99 percent). Detentions of convicted criminal pregnant women also increased overall for the period. ICE began collecting and maintaining data needed to identify individuals with mental disorders at IHSC-staffed facilities in August 2013. According to IHSC officials, ICE does not collect these data for non-IHSC-staffed facilities, in part because many of these facilities do not have electronic health records. However, IHSC personnel are notified of detainees with mental disorders at non-IHSC-staffed facilities and these individuals may be transferred to another facility if the current facility is unable to provide appropriate care. While we were not able to present the overall number of detentions of individuals with mental disorders in ICE custody, we reviewed available ICE data to indicate the number and characteristics of detentions of individuals with mental disorders at IHSC-staffed facilities. These data show that the number of detentions of individuals with mental disorders at IHSC-staffed facilities varied from calendar years 2015 through 2018, as shown in table 5. Detentions resulting from CBP arrests accounted for the majority of these detentions (ranging from 53 to 67 percent) in 2015, 2016, and 2018. In 2017, detentions resulting from ICE arrests accounted for the majority (51 percent) of these detentions. Also shown in table 5, detentions of non-criminals with mental disorders varied from calendar years 2015 through 2018. 
These detentions accounted for the majority of total detentions of individuals with mental disorders in 2015, 2016, and 2018 (ranging from about 53 to 58 percent). Detentions of individuals with no recorded criminal history accounted for most detentions of non-criminals for this population (ranging from 79 to 92 percent), and most resulted from CBP arrests (ranging for 77 to 97 percent). Detentions of convicted criminals with mental disorders varied over the period and the majority resulted from ICE arrests (ranging from 71 to 79 percent). IHSC began collecting and maintaining data needed to identify women who are nursing at IHSC-staffed facilities, which is where ICE typically detains women who are nursing, in August 2013. These data are used to monitor the care and needs of women who are nursing, according to IHSC officials. While we were not able to present the overall number of detentions of nursing women in ICE custody, we reviewed available ICE data to indicate the number and characteristics of detentions of nursing women at IHSC-staffed facilities. These data show that the number of detentions of nursing women at IHSC-staffed facilities varied from calendar years 2015 through 2018, as shown in table 6. Detentions resulting from CBP arrests accounted for most of the detentions of women who were nursing each year (ranging from 98 to 99 percent). Also shown in table 6, detentions of both non-criminal and convicted criminal nursing women at IHSC-staffed facilities varied from calendar years 2015 through 2018. Detentions of non-criminal women who were nursing accounted for most of the total detentions of nursing women at IHSC-staffed facilities each year (ranging from 98 to 99 percent), and detentions of women who were nursing with no recorded criminal history accounted for almost all of these detentions (ranging from 99 to 100 percent), and resulted from CBP arrests (ranging from 98 to 100 percent). From calendar year 2015 through 2018, ICE collected and maintained data on a detainee s date of birth and is able to identify whether an individual is elderly, defined as someone who is over 65 years old, by calculating the individual s age at the time they are detained. ICE does not collect or maintain specific data on whether an individual is elderly because it does not have a separate policy for elderly detainees. Rather, ICE considers an individual s health, criminal history, and other factors when making detention determinations, according to officials. ICE data show that the number of detentions of individuals who were elderly varied, but increased overall from calendar years 2015 through 2018, as shown in table 7. Detentions resulting from ICE arrests accounted for the majority of detentions of individuals who were elderly each year (ranging from 64 to 71 percent). Also shown in table 7, detentions of both non-criminal and criminal individuals who were elderly varied from calendar years 2015 through 2018, and increased overall. Detentions of convicted criminals accounted for the majority of detentions of individuals who were elderly each year (ranging from 65 to 74 percent) and most of these detentions resulted from ICE arrests (ranging from 82 to 85 percent). Detentions of individuals who were elderly with no recorded criminal history accounted for most detentions of non-criminal individuals who were elderly (ranging from 80 to 91 percent), and the majority resulted from CBP arrests (ranging from 70 to 74 percent). <4.2. 
ICE Does Not Readily Know How Many Parents or Legal Guardians of U.S. Citizens and Legal Permanent Resident Minors It Has in Custody> While ICE collects information on detained parents or legal guardians, including those of U.S. citizens and legal permanent resident minors, this information is not maintained in a readily available format that would allow ICE to systematically identify such detained parents and ensure officers are collecting information on this population as required by policy. According to ICE officials, before making custody determinations, ICE officers are instructed to inquire whether arrested aliens are parents or legal guardians of minors, including parents of U.S. citizen and legal permanent resident minors. ICE officers are to enter this information in a separate tab in the ENFORCE Alien Detention Module, a subsystem within ICE s data system for recording information about individuals in its custody. This information on detained parents, however, cannot be readily searched to identify all detained parents or legal guardians in custody. Therefore, ICE does not know how many detained parents or legal guardians are in custody, including parents of U.S. citizen and legal permanent resident minors, during any given time. In accordance with a currently recurring Congressional reporting requirement, ICE generates a semi-annual report on removals of parents of U.S.-born citizen children. However, officials explained that they must review this information manually to generate the report and added that ICE is not required to report in an aggregate way on detained parents of U.S. citizen or legal permanent residents. ICE also tracks individual cases requiring specific actions, such as arranging transportation for parents to attend child welfare proceedings or accommodating visitation for parents with mandated child visitation schedules. However, according to ICE officials, these parents represent a small proportion of all parents in ICE custody. ICE s policy on Detention and Removal of Alien Parents or Legal Guardians requires ICE personnel to enter information into ENFORCE once a detained alien has been determined to be a parent or legal guardians of a U.S. citizen or legal permanent resident minor. As previously mentioned, this policy also requires the Child Welfare Coordinator to evaluate information from all relevant ICE data systems regarding detained parents or legal guardians of minors, including parents of U.S. citizen and legal permanent resident minors, and share appropriate information with the ERO field points of contact. ICE s policy further states that in pursuing the enforcement of U.S. immigration laws against parents of minors, ICE personnel should remain cognizant of the impact enforcement actions may have on U.S. citizen or legal permanent resident minors. Standards for Internal Control in the Federal Government call for design of any data collection to collect quality information, and for management to use quality information to make informed decisions and evaluate the entity s performance in achieving key objectives and addressing risks. Because information entered into ICE s data system on detained parents or legal guardians, including those of U.S. citizen or legal permanent resident minors, is not maintained in a readily available format, ICE headquarters officials cannot ensure that ICE officers are collecting and entering this information into the system as required by policy. 
According to ICE officials, the agency had previously considered implementing a system update to readily identify certain detained parents of minors, but as of October 2019 is no longer considering this update. Collecting and maintaining information in a readily available format on detained parents of U.S. citizen or legal permanent resident minors could help ensure that ICE personnel can identify, evaluate, and share information on this population, as required by ICE policy. In addition, collecting and evaluating this information would provide greater transparency regarding the impacts of ICE s enforcement actions on U.S. citizen or legal permanent resident minors. <5. Conclusions> In 2015, DHS reported that about 12 million aliens were residing in the United States without lawful status or presence, which includes parents of U.S. citizen, legal permanent resident, and alien minors. Through its policies, ICE has established the importance of collecting and maintaining information on detained parents and legal guardians of U.S. citizen and legal permanent resident minors. However, because ICE has not implemented a process to collect or maintain this information in a readily available format, it does not have reasonable assurance that it can identify all detained parents and legal guardians of U.S. citizen and legal permanent resident minors. Therefore, ICE cannot evaluate and share this information and ensure its officers are collecting information on this population in accordance with its policy. Implementing a process to collect and maintain this information in a readily available format would allow ICE to better assess the impacts of its enforcement actions on U.S. citizen and legal permanent resident minors and help improve ICE oversight efforts. <6. Recommendation for Executive Action> The Director of ICE should implement a process to collect and maintain data in a readily available format on detained parents or legal guardians of U.S. citizen and legal permanent resident minors to ensure that information on this population is entered into ICE s data system as required by policy. (Recommendation 1) <7. Agency Comments and Our Evaluation> We provided a draft of this report for review and comment to DHS. DHS provided comments, which are reproduced in appendix XI. DHS also provided technical comments, which we incorporated, as appropriate. DHS did not concur with our recommendation. Specifically, in its comments, DHS stated that data on detained parents or legal guardians of U.S. citizens and legal permanent residents are available to approved EARM users and that we did not identify any problems with the quality of the data. However, as we noted in our report, these data are not readily available because ICE s data on family relationships, including parents or legal guardians of U.S. citizens and legal permanent resident minors, can only be accessed by manually reviewing each separate case file in EARM. To that end, we or anyone else wishing to do so are unable to determine whether there are problems with the data as ICE is not able to provide aggregate data that would allow us to assess the quality or to report on these data. In its comments, DHS states that ICE does not have any requirement or need to aggregate data on this particular group and doing so would not better inform ICE s decision making processes. However, as noted in the report, ICE s policy states that in pursuing the enforcement of U.S. 
immigration laws against parents of minors, ICE personnel should remain cognizant of the impact enforcement actions may have on U.S. citizen or legal permanent resident minors. Without making these data readily available, ICE is not able to account for the overall impact of its enforcement actions on U.S. citizen or legal permanent resident minors whose parents or legal guardians have been detained. Additionally, headquarters and field officials we met with during the course of this review agreed that having this information readily available would be useful. They also explained that ICE was developing a method to better track and report on primary caregivers of children. However, in October 2019, ICE officials stated that the agency is no longer considering this improvement. We continue to believe that collecting and maintaining information in a readily available format on detained parents or legal guardians of U.S. citizen or legal permanent resident minors could help ensure that ICE personnel can identify, evaluate, and share information on this population, as required by ICE policy. Without such data, ICE headquarters officials cannot ensure that ICE officers are collecting and entering this information into the system as required. In addition, collecting and evaluating this information would provide greater transparency regarding the impacts of ICE s enforcement actions on U.S. citizen or legal permanent resident minors. We are sending copies of this report to the appropriate congressional committees, and the Acting Secretary of the Department of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or goodwing@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix XII. Appendix I: Objectives, Scope, and Methodology This appendix provides additional information on our objectives, scope, and methodology. Specifically, our objectives were to examine the following questions: 1. What does ICE data show about ICE arrests, detentions, and removals from calendar years 2015 through 2018? 2. What policies are in effect for selected populations and what changes did ICE make to align these policies with the 2017 DHS memo? 3. To what extent does ICE collect data on selected populations in detention and what do these data show? To address our first question, we analyzed individual-level data from the U.S. Immigration and Customs Enforcement (ICE) Integrated Decision Support (IIDS) database, to determine the total number of ICE Enforcement and Removal Operations (ERO) administrative arrests (arrests), detentions, and removals from January 2015 (the start of the Priority Enforcement Program) through December 2018 (to include the first two years for the 2017 DHS Memo). ERO conducts civil immigration enforcement actions, which includes arrests for civil violations of U.S. immigration laws, detentions, and removals. Arrests. We analyzed individual-level arrest data from IIDS to determine the total number of ERO arrests for each calendar year 2015 through 2018. We examined multiple data fields from the individual-level arrest data, including alien file number, family name, given name, gender, country of citizenship, arrest date, area of responsibility, and criminality, among other variables. 
Because aliens may have multiple arrests, we used alien number and arrest date to identify the unique number of arrests rather than the number of unique aliens who were arrested. We excluded from our analysis arrest records that had a missing alien number, an invalid alien number i.e., that included all zeroes or had duplicate alien number and arrest date combinations or records that indicated test in the name fields. We analyzed these data to determine total numbers of arrests by gender, country of citizenship, criminality, arresting program, and area of responsibility. To determine the number of arrests by gender, we analyzed IIDS individual-level arrest data. We also analyzed these data to determine the number of arrests by criminality for each gender, using ICE s determination of criminality for our analysis, as discussed below. To determine the number of arrests by country of citizenship, we analyzed IIDS individual-level arrest data. ICE obtains country of citizenship data from arrest reports, which may be based on documentation or self-reported. To determine the number of arrests by criminality, we analyzed IIDS individual-level arrest data. For the purposes of this report and our presentation of ICE data, we refer to potentially removable aliens without criminal convictions known to ICE as non-criminals and aliens with criminal convictions known to ICE as convicted criminals. According to ERO officials, arrests of non-criminals include individuals who have been charged but not convicted of a crime as well as those with no prior criminal history. According to ICE, ICE officers electronically request and retrieve criminal history information about an alien from the FBI s National Crime Information Center (NCIC) database, which maintains a repository of federal and state criminal history information. ICE officers are also able to manually enter criminal history information in ICE s data system if they discover additional criminal history information that was not available in NCIC. ICE officers may also check for criminal convictions committed outside the United States, on a case-by-case basis. Most of the ICE data we reviewed indicated criminal or non-criminal history, where criminal included convictions, and non-criminal included both pending criminal charges and other immigration violations. Therefore, wherever we referred to criminality, we used ICE s determination of criminality criminal or non-criminal for our analysis. To determine the number of arrests by arresting program, we analyzed IIDS individual-level arrests data to determine the number of arrests at-large in the communities by ICE s fugitive operations teams and those resulting from an incarceration in federal, state, and local prisons and jails through the Criminal Alien Program. To determine the number of arrests by ERO area of responsibility, we analyzed IIDS individual-level arrests data for calendar years 2015 through 2018. We also used these data to calculate the proportion of arrests of convicted criminals by ERO area of responsibility. We compared the number of arrests across the 24 ERO areas of responsibility to examine the differences in enforcement actions between the years the Priority Enforcement Program were in effect (2015-2016) and the years immediately following implementation of the DHS memo (2017-2018). We excluded from our analysis arrest records that had a missing or unknown area of responsibility. 
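The exclusion and deduplication steps described above can be expressed as a short data-cleaning routine. The sketch below is illustrative only, not the code used for this analysis; it assumes the arrest-level records are in a pandas DataFrame, and the column names (alien_number, arrest_date, family_name, given_name) are hypothetical stand-ins for the actual IIDS fields.

```python
import pandas as pd

def clean_arrest_records(raw: pd.DataFrame) -> pd.DataFrame:
    """Apply the exclusion and deduplication rules described above."""
    df = raw.dropna(subset=["alien_number"]).copy()
    df["arrest_date"] = pd.to_datetime(df["arrest_date"], errors="coerce")

    # Exclude invalid alien numbers (i.e., values that are all zeroes).
    alien = df["alien_number"].astype(str)
    df = df[alien.str.strip("0").str.len() > 0]

    # Exclude records that indicate "test" in the name fields.
    names = df["family_name"].fillna("") + " " + df["given_name"].fillna("")
    df = df[~names.str.contains("test", case=False)]

    # Count unique arrests (not unique aliens): keep one record per
    # alien number and arrest date combination.
    return df.drop_duplicates(subset=["alien_number", "arrest_date"])

# Example: total arrests per calendar year after cleaning.
# arrests = clean_arrest_records(raw_arrests)
# print(arrests.groupby(arrests["arrest_date"].dt.year).size())
```

Counts of arrests by gender, country of citizenship, criminality, arresting program, or area of responsibility would then reduce to simple group-by aggregations over the cleaned records.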
We also analyzed IIDS individual-level arrest data to determine the total number of arrests of juveniles during calendar years 2015 through 2018. Because aliens may have multiple arrests, we used alien number and arrest date to identify the unique number of arrests rather than the number of unique aliens who were arrested. We excluded from our analysis arrest records that had a missing alien number, an invalid alien number i.e., that included all zeroes or had duplicate alien number and arrest date combinations. We used these data to determine the total number of arrests of juveniles by age and gender. Detentions. We analyzed individual-level detention data from IIDS to determine the total number of ERO detentions during calendar years 2015 through 2018. We examined multiple data fields from the individual-level detention data, including alien file number, person id, family name, given name, gender, country of citizenship, arresting agency, criminality, detention facility, book-in date, book-out date, release reason, and length of stay, among other variables. Because aliens may have multiple detentions, we used alien number and initial book-in date fields i.e., the first date the individual is taken into ICE custody to identify the unique number of detentions rather than the number of unique aliens who were detained. We excluded from our analysis arrest records that had a missing alien number or had an invalid alien number i.e., that included all zeroes. We analyzed these data to determine total numbers of detentions by gender, country of citizenship, arresting agency, and criminality. To determine the number of detentions by gender, we analyzed IIDS individual-level detention data. We also analyzed these data to determine the number of detentions by arresting agency ICE or U.S. Customs and Border Protection (CBP) and criminality for each gender. We included all detentions resulting from both ICE and CBP arrests because ICE is responsible for detaining certain aliens apprehended by CBP at or between ports of entry. To conduct our analysis, we used ICE s determination of criminality criminal or non-criminal which ICE determines by conducting electronic criminal history checks, as previously discussed. To determine the number of detentions by country of citizenship, we analyzed IIDS individual-level detention data. ICE obtains country of citizenship data from arrest reports, which may be based on documentation or self-reported. To determine the number of detentions by arresting agency, we analyzed IIDS individual-level detention data for detentions resulting from ICE arrests and those resulting from CBP arrests at or between ports of entry. To determine the number of detentions by criminality, we analyzed IIDS individual-level detention data. We also examined the extent to which detentions varied by criminality and arresting agency. To conduct our analysis, we used ICE s determination of criminality criminal or non-criminal which ICE determines by conducting electronic criminal history checks, as previously discussed. Removals. We analyzed individual-level removal data from IIDS to determine the total number of ERO removals during calendar years 2015 through 2018. We examined multiple data fields from the individual-level removal data, including alien file number, family name, given name, gender, country of citizenship, criminality, arresting agency, and removal date, among other variables. 
Because aliens may have multiple removals, we used alien number and removal date to identify the unique number of removals rather than the number of unique aliens. We excluded from our analysis removal records that had a missing alien number, an invalid alien number i.e., that included all zeroes, or had duplicate alien number and removal date combinations, or records that indicated test in the name fields. We analyzed these data to determine total numbers of removals by gender, country of citizenship, arresting agency, and criminality. To determine the number of removals by gender, we analyzed IIDS individual-level removal data. We also analyzed these data to determine the number of removals by arresting agency and criminality for each gender. To conduct our analysis, we used ICE s determination of criminality criminal or non-criminal which ICE determines by conducting electronic criminal history checks, as previously discussed. To determine the number of removals by country of citizenship, we analyzed IIDS individual-level data. ERO obtains country of citizenship data from arrest reports, which may be based on documentation or self-reported. To determine the number of removals by arresting agency, we analyzed IIDS individual-level removal data for removals resulting from ERO arrests and those resulting from CBP arrests at or between ports of entry. To determine the number of removals by criminality, we analyzed IIDS individual-level removal data. To conduct our analysis, we used ICE s determination of criminality criminal or non-criminal which ICE determines by conducting electronic criminal history checks, as previously discussed. We determined that the data used in each of our analyses were sufficiently reliable for the purposes of this report by analyzing available documentation, such as related data dictionaries; interviewing ICE officials knowledgeable about the data; conducting electronic tests to identify missing data, anomalies, or erroneous values; and following up with officials, as appropriate. We also analyzed arrest data from Homeland Security Investigations (HSI) worksite enforcement to determine the total number of criminal and administrative arrests conducted by HSI worksite enforcement between January 2015 and December 2018. We were unable to use these data for the purposes of reporting the total number of arrests by HSI worksite enforcement for each calendar year. Specifically, we identified combined arrest, charge, and conviction dates in the same field, among other issues, which limited our ability to identify the number of aliens arrested by HSI as a result of worksite enforcement operations each year. To address our second question, we reviewed a master list of ICE policies and interviewed policy officials to identify policies related to individuals with special vulnerabilities. Based on this review as well as input from nongovernmental organizations (NGOs) that serve or represent various populations, we selected eight populations including aliens who are: lesbian, gay, bisexual, transgender, and intersex (LGBTI), individuals with disabilities, juveniles, parents or legal guardians of minors, pregnant, individuals with mental disorders, women who are nursing, or individuals who are elderly. To identify the changes ICE made to align these policies with the 2017 DHS memo, we reviewed specific provisions in the executive order and implementing memoranda. 
We then analyzed existing policies as well as policies that ICE revised or rescinded to align with the 2017 DHS memo, including policies related to prosecutorial discretion and selected populations. We conducted interviews with officials from ICE headquarters offices, including the Office of the Principal Legal Advisor, Office of Policy, Homeland Security Investigations, as well as program officials within ERO, including Domestic Operations, Fugitive Operations, and Custody Management Divisions. We met with six national organizations that serve or represent immigrants as well as six state or regional organizations that serve or represent immigrants in the locations we visited to obtain their perspectives on how the policies affected the individuals they represent. The perspectives of NGOs are not generalizable and may not be indicative of care provided at all detention facilities. We selected these NGOs to reflect a range of types of populations served or represented as well as based on their proximity to ICE areas of responsibility we visited; see table 8 for more information on the organizations we interviewed. We conducted site visits to six selected ICE ERO areas of responsibility (Atlanta, Dallas, Los Angeles, San Diego, St. Paul, and Washington, D.C.) and interviewed ICE officials to obtain their perspectives on the policy revisions. We selected these locations based on the prevalence of arrests in fiscal year 2017, percent changes in arrests from fiscal year 2016 to 2017, and geographical dispersion. Specifically, we identified locations that had the highest arrest numbers in fiscal year 2017 or the largest percentage increases in arrests from fiscal years 2016 to 2017, and then selected locations that provided wide geographical representation. In each location we met with ERO liaisons and officers responsible for monitoring and implementing the provisions of policies for certain selected populations, as well as detention and deportation officers and supervisors who oversee the detention and removal of aliens, including those with special vulnerabilities. We also met with ICE medical staff in areas of responsibility that have this position. In one area of responsibility, we limited our visit to a detention facility and met with the staff at that facility due to its proximity to another area of responsibility we visited. The information obtained from these site visits is not generalizable and may not be indicative of care provided to all populations at all detention facilities, but provided insights into how selected ICE areas of responsibility conduct enforcement activities and implement immigration enforcement policies. To address our third question, we reviewed multiple data sources that ICE uses to track information on certain aliens with special vulnerabilities in detention and matched these data with IIDS individual-level detention data to determine what ICE data show about detentions of selected populations between January 2015 and December 2018. To conduct our analysis, we first excluded records that contained missing alien numbers or alien numbers that were all zeroes. Then, we matched each data source to the IIDS detention data using alien number and excluded additional records we were unable to match. Because aliens may have multiple detentions, we compared the admission or book-in date from each data source with the book-in dates from the IIDS detention data, and excluded additional records with dates beyond 30 days apart.
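The matching procedure described above can be illustrated with a short sketch. This is not the actual analysis code; it assumes each special-population data source and the IIDS detention data are pandas DataFrames that share a hypothetical alien_number column and a book-in (or admission) date column named book_in_date.

```python
import pandas as pd

def match_to_detentions(source: pd.DataFrame,
                        detentions: pd.DataFrame) -> pd.DataFrame:
    """Match special-population records to IIDS detention records on
    alien number, keeping matches whose dates are within 30 days."""
    src = source.dropna(subset=["alien_number"]).copy()
    det = detentions.dropna(subset=["alien_number"]).copy()

    merged = src.merge(det, on="alien_number", how="inner",
                       suffixes=("_src", "_det"))

    merged["gap_days"] = (pd.to_datetime(merged["book_in_date_src"])
                          - pd.to_datetime(merged["book_in_date_det"])).dt.days.abs()

    # Exclude matches whose dates are more than 30 days apart; if a source
    # record still matches several detentions, keep the closest one.
    merged = merged[merged["gap_days"] <= 30]
    return (merged.sort_values("gap_days")
                  .drop_duplicates(subset=["alien_number", "book_in_date_src"]))
```

Records from a data source that find no match under these rules would be excluded from the analysis, consistent with the exclusion counts reported for each population below.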
We analyzed this information to determine the total number of detentions for six of the eight selected populations (aliens who are: transgender, individuals with disabilities, pregnant, individuals with mental disorders, nursing, and elderly); and the number of detentions resulting from ICE versus CBP arrests; as well as detentions by criminality and the length of detention for each of these six populations. We excluded juveniles from our analysis because ERO is generally not responsible for detaining juveniles. To determine the extent to which ICE maintains data on detained parents or legal guardians of minors, we reviewed ICE policies pertaining to detained parents, including those that set forth requirements for tracking detained parents or legal guardians of U.S. citizens and legal permanent resident minors. We also interviewed ERO officials about ICE s data collection processes and any limitations with the data it collects and maintains. We assessed ICE s efforts to track this population against agency policy. To conduct our analysis of criminality for each population, we used ICE s determination of criminality criminal or non-criminal which ICE determines by conducting electronic criminal history checks, as previously discussed. We also analyzed IIDS data on criminal charges for detentions of aliens that resulted from ICE arrests to determine the type of charges (e.g., immigration-related or other criminal charges) associated with these detentions. To conduct our analysis on length of detention, we compared initial book-in date with the most recent book-out date to calculate the total days in detention for each of our selected populations. Transgender Individuals: We matched ERO records for transgender detainees from calendar years 2016 through 2018 with IIDS individual-level detention data to determine the total number of detentions of transgender individuals, as well as the number of detentions by arresting agency, criminality, and length of detention. We excluded 4 of the unique transgender detainee records for 2016, 33 for 2017 and 27 for 2018. These records were excluded because we were unable to match these records to the IIDS individual level- detention data using alien number and book-in date combinations. According to ICE officials, this may be due to data entry errors. Our analysis is based on those records we were able to match: 228 for 2016, 241 for 2017, and 277 for 2018. ICE also recorded 55 transgender detainees in 2015; however, we excluded these records from our analysis since ICE did not collect complete data on this population in 2015. For the LGBTI population, ICE only collects and maintains data on transgender individuals in detention. Therefore, we were only able to analyze data for this subset of the LGBTI population. Individuals with Disabilities: We matched ERO records for individuals with communication and mobility impairments in ERO custody during calendar years 2017 and 2018 with IIDS individual- level detention data to determine the total number of detentions of these individuals, as well as the number of detentions by arresting agency, criminality, and length of detention. We excluded 5 of the unique detainee records for 2017, and 1 for 2018 because we were unable to match these records to the IIDS individual level-detention data using alien number and book-in date combinations. According to ICE officials, this may be due to data entry errors. Our analysis is based on those records we were able to match: 424 for 2017, and 516 for 2018. 
When ICE began collecting these data, it included aliens who were placed in detention prior to January 2017. We excluded 99 records for this reason from our analysis since ICE did not collect complete data on this population prior to January 2017. Pregnant Women: We matched ICE Health Service Corps (IHSC) records for pregnant women in ERO custody during calendar years 2016 through 2018 with IIDS individual-level detention data to determine the total number of detentions of pregnant women, as well as the number of detentions by arresting agency, criminality, and length of detention. We excluded 60 of the unique pregnant detainee records for 2016, 20 for 2017 and 32 for 2018 because we were unable to match these records to the IIDS individual-level detention data using alien number and book-in date combinations. According to ICE officials, this may be due to data entry errors. Our analysis is based on those records we were able to match: 1,377 for 2016, 1,150 for 2017, and 2,094 for 2018. ICE also recorded 675 pregnant detainees in 2015; however, we excluded these records from our analysis since ICE did not collect complete data on this population in 2015. Elderly Individuals: We analyzed data records in IIDS for elderly individuals (those 65 years or older at the time of initial book-in) in ERO custody during calendar years 2015 through 2018 to determine the total number of detentions of elderly individuals, as well as the number of detentions by arresting agency, criminality, and length of detention. According to ERO, the agency does not maintain separate data records for elderly individuals in ERO custody; however, ERO officials were able to identify these detainees by calculating their age at the time they were detained. We excluded 4 of the unique elderly detainee records for 2015, 3 for 2016 and 4 for 2018 because we were unable to match these records to the IIDS individual-level detention data using alien number and book-in date combinations. According to ICE officials, this may be due to data entry errors. Our analysis is based on those records we were able to match: 863 for 2015, 736 for 2016, 763 for 2017, and 1,132 for 2018. Individuals with Mental Disorders and Nursing Women: We matched IHSC records for individuals with mental disorders and nursing women detained at IHSC-staffed facilities during calendar years 2015 through 2018 with IIDS individual-level detention data to determine the total number of detentions of each of these populations, as well as the number of detentions by arresting agency, criminality, and length of detention. Because ICE did not maintain data on individuals with mental disorders or nursing women detained at the over 200 non-IHSC staffed facilities, our findings for these two populations are not generalizable, but provided valuable insights into these detentions. We excluded 207 of the unique detainee with mental disorders records for 2016, 850 for 2017, and 1,233 for 2018 because we were unable to match these records with the IIDS individual-level detention data using alien number and book-in date combinations. Our analysis is based on the unique detainee with mental disorders records we were able to match: 8,138 for 2015, 9,466 for 2016, 8,643 for 2017, and 8,501 for 2018. Similarly, we excluded 2 of the unique nursing detainee records for 2015, 3 for 2017 and 5 for 2018 for the same reason. Our analysis is based on the unique nursing detainee records we were able to match: 157 for 2015, 399 for 2016, 564 for 2017, and 381 for 2018. 
According to ICE officials, this may be due to data entry errors. We assessed the reliability of the data used in each of our analyses by analyzing available documentation, such as related data dictionaries; interviewing ERO officials knowledgeable about the data; conducting electronic tests to identify missing data, anomalies, or erroneous values; and following up with officials, as appropriate. We determined the data were sufficiently reliable for depicting general trends in detentions for the selected populations. We conducted this performance audit from November 2017 to December 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Enforcement and Removal Operations Arrests, Detentions, and Removals, 2015-2018 The number of Enforcement and Removal Operations (ERO) administrative arrests (arrests) by gender, country of citizenship, ICE enforcement program, criminality, and area of responsibility from calendar years 2015 through 2018. The number of detentions by gender, country of citizenship, arresting agency, and criminality from calendar years 2015 through 2018. The number of removals by gender, country of citizenship, arresting agency, and criminality from calendar years 2015 through 2018. We analyzed individual-level Immigration and Customs Enforcement (ICE) data to identify ERO arrests, detentions, and removals during calendar years 2015 through 2018. <8. Arrests> The Number of Arrests Varied during the Period, Increasing Overall. The number of ERO arrests varied from calendar years 2015 through 2018, and increased more than 30 percent overall for the 4-year period (from 112,870 arrests in 2015 to 151,497 arrests in 2018). During the two years Priority Enforcement Program (PEP) was in effect, the number of ERO arrests varied little, decreasing 5 percent from 2015 to 2016. Following issuance of the 2017 DHS memo, ERO arrests increased 41 percent from 2016 to 2017, and stayed relatively the same from 2017 to 2018. Arrests by Gender. Each year from calendar years 2015 through 2018, arrests of males accounted for the majority of ERO arrests (ranging from 92 to 93 percent), as shown in figure 7. Arrests by Country of Citizenship. Each year from 2015 through 2018, ERO arrests of citizens of Mexico, Guatemala, El Salvador, and Honduras collectively accounted for about 86 percent of all ERO arrests, with individuals from Mexico accounting for the majority (ranging from 59 to 65 percent), as shown in figure 8. All other individual countries collectively accounted for about 14 to 15 percent of total arrests each year. Arrests by ICE Enforcement Program. Arrests of individuals from federal, state and local prisons and jails, through the Criminal Alien Program, accounted for the majority (ranging from 72 to 76 percent) of ERO arrests each calendar year from 2015 through 2018, as shown in figure 9. Arrests of individuals at-large through Fugitive Operations (ranging from 17 to 19 percent) and other programs accounted for the balance of the arrests each year. Criminal Alien Program arrests also accounted for most of the increase in ERO arrests in calendar years 2017 and 2018 (see figure 9). Arrests by Criminality. 
As shown in figure 10, the number and proportion of ERO arrests of non-criminal aliens increased each year from calendar years 2015 through 2018. For the purposes of this report and our presentation of ICE data, we refer to potentially removable aliens without criminal convictions known to ICE as non-criminals and aliens with criminal convictions known to ICE as convicted criminals. Specifically, the arrests of non-criminals increased from 13,494 (12 percent of total arrests) in 2015 to 51,513 (34 percent of total arrests) in 2018. According to ERO officials, arrests of non-criminals include individuals who have been charged with but not convicted of a crime as well as those with no prior criminal history. The number of ERO arrests of convicted criminals stayed relatively stable from calendar years 2015 to 2018, ranging between about 91,000 and 107,000. Each of these years, arrests of convicted criminals comprised the majority of total arrests, but decreased from 88 percent in 2015 to 66 percent in 2018. Most arrests of convicted criminals resulted from the Criminal Alien Program (ranging from 76 to 80 percent), followed by Fugitive Operations (ranging from 15 to 19 percent). Arrests by Areas of Responsibility. The number of ERO arrests increased in all ERO areas of responsibility when comparing calendar years 2015 and 2016, when PEP was in effect, to calendar years 2017 and 2018, following implementation of the 2017 DHS memo. These increases ranged from less than 1 percent in the Los Angeles area of responsibility to 99 percent in the Miami area of responsibility. Arrests of convicted criminals accounted for the majority of total arrests in all areas of responsibility. However, the proportion of arrests of convicted criminals to total arrests decreased in all areas of responsibility from 2015 and 2016 to 2017 and 2018. This decrease is partially due to the increase in the number of ERO arrests of non-criminals in all areas of responsibility during these years. Table 9 presents total numbers of ERO arrests for each of ERO s 24 areas of responsibility nationwide. It also presents the percentage of arrests of convicted criminals by area of responsibility for calendar years 2015 and 2016 combined and calendar years 2017 and 2018 combined. <9. Detentions> The Number of Detentions Varied, Increasing Overall. The number of ERO detentions varied from calendar years 2015 through 2018, and increased more than 30 percent overall for the 4-year period (from 324,320 detentions in 2015 to 438,258 detentions in 2018). ERO detention data include detentions resulting from both ICE and CBP arrests. During the two years PEP was in effect, the number of ERO detentions increased 13 percent, from 324,320 in 2015 to 366,740 in 2016. Following issuance of the 2017 DHS memo, ERO detentions decreased 15 percent from 2016 to 2017 (from 366,740 to 310,309 detentions), and increased 41 percent from 2017 to 2018 (to 438,258 detentions). Detentions by Gender. Each year from calendar years 2015 through 2018, detentions of males accounted for the majority of ERO detentions (ranging from 74 to 81 percent), as shown in figure 11. Detentions by Country of Citizenship. Each year from 2015 through 2018, ERO detentions of citizens of Mexico, Guatemala, El Salvador, and Honduras collectively accounted for the most detentions (ranging from 84 to 89 percent). All other individual countries collectively accounted for 11 to 16 percent of total detentions each year, as shown in figure 12.
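The collective country-of-citizenship shares cited above are straightforward to reproduce from detention-level records. A minimal, illustrative sketch follows; the column names are hypothetical and this is not the actual analysis code.

```python
import pandas as pd

TOP_FOUR = {"MEXICO", "GUATEMALA", "EL SALVADOR", "HONDURAS"}

def top_four_share_by_year(detentions: pd.DataFrame) -> pd.Series:
    """Percent of detentions each calendar year attributable to citizens
    of Mexico, Guatemala, El Salvador, and Honduras combined."""
    df = detentions.copy()
    df["year"] = pd.to_datetime(df["book_in_date"]).dt.year
    df["top_four"] = df["country_of_citizenship"].str.upper().isin(TOP_FOUR)
    return (df.groupby("year")["top_four"].mean() * 100).round(1)

# Per the figures above, the result should fall between roughly
# 84 and 89 percent for each year from 2015 through 2018.
```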
Detentions by Arresting Agency. Detentions resulting from CBP arrests at or between ports of entry accounted for the majority of ERO detentions each year from calendar years 2015 through 2018 (ranging from 52 to 71 percent). Detentions resulting from CBP arrests also accounted for most of the variation in detentions from year to year, as shown in figure 13. Detentions resulting from ICE arrests varied little from 2015 to 2016, increased in 2017, and then varied little from 2017 to 2018. Detentions by Criminality. As shown in figure 14, the number of ERO detentions of non-criminals varied, but increased overall from calendar years 2015 to 2018. These detentions accounted for the majority of total ERO detentions each year (ranging from 53 to 64 percent). The variation in the number of detentions of non-criminals was partially due to fluctuations in detentions that resulted from CBP arrests. The number of ERO detentions of convicted criminals stayed relatively stable from 2015 to 2018, and accounted for the minority of total ERO detentions (ranging from 36 to 47 percent). The majority of these detentions resulted from ICE arrests (ranging from 64 to 76 percent) rather than CBP arrests. <10. Removals> The Number of Removals Varied, Increasing Overall. The number of ERO removals varied from calendar years 2015 through 2018, and increased 13 percent overall for the 4-year period (from 231,559 removals in 2015 to 261,523 removals in 2018). ERO removal data include removals resulting from both ICE and CBP arrests. During the two years PEP was in effect, the number of ERO removals varied little, increasing 6 percent from 2015 to 2016. Following issuance of the 2017 DHS memo, ERO removals decreased 12 percent in 2017, and increased 21 percent from 2017 to 2018. Removals by Gender. Removals of male aliens accounted for most of ERO removals (about 90 percent) each year from calendar years 2015 through 2018, as shown in figure 15. Removals by Country of Citizenship. In addition, from calendar years 2015 through 2018, ERO removals of citizens of Mexico, Guatemala, El Salvador, and Honduras collectively accounted for most of the removals each year (ranging from 90 to 94 percent). Citizens of all other countries collectively accounted for 6 to 10 percent of total removals each year, as shown in figure 16. Removals by Arresting Agency. Each year, removals resulting from CBP arrests at or between ports of entry accounted for the majority of total ERO removals (ranging from 60 to 74 percent). ERO removals resulting from CBP arrests also accounted for most of the variation in total removals from year to year, as shown in figure 17. Removals by Criminality. The number and proportion of ERO removals of non-criminals varied, but increased overall, from calendar years 2015 through 2018, as shown in figure 18. Specifically, removals of non- criminals increased from 40 percent of total removals in 2015 to 43 percent of total removals in 2018. Most removals of non-criminals resulted from CBP arrests (ranging from 80 to 95 percent), rather than ICE arrests. ERO removals of convicted criminals varied, increasing overall, from calendar years 2015 to 2018, and accounted for the majority of total ERO removals each year (ranging from 55 to 60 percent). Removals of convicted criminals resulted from CBP and ICE arrests at approximately equal levels. 
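The overall percentage changes reported in this appendix follow directly from the yearly totals cited above. A quick arithmetic check:

```python
def pct_change(start: int, end: int) -> float:
    """Percent change from the 2015 total to the 2018 total."""
    return (end - start) / start * 100

# Totals cited in this appendix for calendar years 2015 and 2018.
print(round(pct_change(112_870, 151_497), 1))  # arrests:    34.2 (more than 30 percent)
print(round(pct_change(324_320, 438_258), 1))  # detentions: 35.1 (more than 30 percent)
print(round(pct_change(231_559, 261_523), 1))  # removals:   12.9 (about 13 percent)
```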
Appendix III: Enforcement and Removal Operations Arrests, Detentions, and Removals of Males, 2015-2018 This appendix presents the overall number of Enforcement and Removal Operations (ERO) administrative arrests (arrests), detentions, and removals of males from calendar years 2015 through 2018, including the number of arrests by criminality and the number of detentions and removals by criminality and arresting agency. We analyzed individual-level Immigration and Customs Enforcement (ICE) data to identify ERO arrests, detentions, and removals of males during calendar years 2015 through 2018. <11. Arrests> The Number of Arrests of Males Generally Increased. The number of ERO arrests of males varied from calendar years 2015 through 2018 but generally increased by 32 percent across the period, as shown in figure 19. During the two years the Priority Enforcement Program (PEP) was in effect, between calendar years 2015 and 2016, the number of ERO arrests remained stable, decreasing by about 5 percent in that period. The following year, after the issuance of the 2017 DHS memo in February 2017, ERO arrests increased by about 40 percent from calendar years 2016 to 2017, and decreased by less than 1 percent in calendar year 2018. Arrests of Males by Criminality. During the same time, the proportion of ERO arrests of convicted criminal males decreased each year from 90 percent of total arrests of males in calendar year 2015 to 69 percent in calendar year 2018, as shown in figure 19. For the purposes of this report and our presentation of ICE data, we refer to potentially removable aliens without criminal convictions known to ICE as non-criminals and aliens with criminal convictions known to ICE as convicted criminals. Conversely, the proportion of ERO arrests of non-criminal males increased each year, from 10 percent of total arrests of males in calendar year 2015 to 31 percent of total arrests in calendar year 2018. According to ICE officials, arrests of non-criminals include individuals who have been charged with but not convicted of a crime as well as those with no prior criminal history. According to ICE, ICE officers electronically request and retrieve criminal history information about an alien from the FBI s National Crime Information Center database, which maintains a repository of federal and state criminal history information, and other sources. We used ICE s determination of criminality for our analysis. <12. Detentions> Detentions of Males Increased Overall. The number of ERO detentions varied from calendar years 2015 through 2018, but increased overall by 32 percent over the period, as shown in figure 20. ERO detention data include detentions resulting from both ICE and U.S. Customs and Border Protection (CBP) arrests. During the two years PEP was in effect, the number of ERO detentions of males increased by more than 8 percent from calendar years 2015 to 2016. Following the issuance of the 2017 DHS memo, the number of male detentions decreased by more than 8 percent in calendar year 2017, and increased again in calendar year 2018, by over 32 percent.
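The criminality proportions presented in this appendix can be derived from the cleaned arrest-level records with a group-by calculation such as the illustrative sketch below. The column names are hypothetical, and the criminality field is assumed to hold ICE's criminal/non-criminal determination.

```python
import pandas as pd

def convicted_share_by_gender(arrests: pd.DataFrame) -> pd.Series:
    """Yearly share of arrests involving convicted criminals, by gender."""
    df = arrests.copy()
    df["year"] = pd.to_datetime(df["arrest_date"]).dt.year
    df["convicted"] = df["criminality"].str.lower().eq("criminal")
    return (df.groupby(["gender", "year"])["convicted"].mean() * 100).round(1)

# For males, this calculation should yield roughly 90 percent in 2015
# declining to roughly 69 percent in 2018, consistent with figure 19.
```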
Detentions of Males by Arresting Agency. Detention of males resulted from both ICE and CBP arrests from calendar years 2015 through 2018, as shown in figure 20. For all the years in this period, except calendar year 2017, detentions resulting from a CBP arrest at or between ports of entry account for the majority of the detentions of males (ranging from about 58 to 63 percent). In calendar year 2017, detentions resulting from ICE arrests accounted for about 56 percent of all male detentions. Detentions of Males by Criminality. During the same time, the number and proportion of ERO detentions of convicted criminal males varied, ranging from 45 to 57 percent of all detentions of males, as shown in figure 21. The majority of these detentions resulted from ICE arrests, ranging from 66 to 77 percent of all convicted criminal male detentions. The number of ERO detentions of non-criminal males also varied, ranging from 43 to 55 percent of all detentions of males. Detentions of non- criminal males primarily resulted from CBP arrests, which ranged from 69 to 93 percent of detentions of non-criminal males between calendar years 2015 and 2018. <13. Removals> Removals of Males Increased Overall. The number of ERO removals of males varied from calendar years 2015 through 2018, but increased overall by 14 percent over the period, as shown in figure 22. ERO removal data include removals resulting from both ICE and CBP arrests. During PEP, which was in effect from calendar years 2015 and 2016, the number of ERO removals of males increased by about 6 percent. From calendar years 2016 to 2017, following the issuance of the 2017 DHS memo, the number of these removals decreased by more than 11 percent, then increased by more than 20 percent in calendar year 2018. Removals of Males by Arresting Agency. From calendar years 2015 to 2018, the majority of ERO removals of males resulted from CBP arrests at or in between ports of entry (ranging from 58 to 72 percent), as shown in figure 22. Removals of Males by Criminality. From calendar years 2015 through 2018, ERO removals of convicted criminal males accounted for the majority of removals each year, ranging from 58 to 63 percent of the total removal of males, as shown in figure 23. The removals of convicted criminal males were the result of both CBP and ICE arrests. For all the years in this period, except calendar year 2017, removals resulting from a CBP arrest account for the majority of the removals of convicted criminal males (ranging from about 52 to 56 percent). In calendar year 2017, removals resulting from ICE arrests accounted for about 56 percent of all removals of convicted criminal males. ERO removals of non-criminal males varied, increasing overall, from calendar years 2015 to 2018, and accounted for the minority of ERO removals of males each year (ranging from 37 to 42 percent). Most of the removals of non-criminal males were as a result of CBP arrests, ranging from 79 to 95 percent of all removals of non-criminal males. Appendix IV: Enforcement and Removal Operations Arrests, Detentions, and Removals of Females, 2015-2018 This appendix presents the overall number of Enforcement and Removal Operations (ERO) administrative arrests (arrests), detentions, and removals of females from calendar years 2015 through 2018, including the number of arrests by criminality and the number of detentions and removals by criminality and arresting agency. 
We analyzed individual- level Immigration and Customs Enforcement (ICE) data to identify ERO arrests, detentions, and removals of females during calendar years 2015 through 2018. <14. Arrests> The Number of Arrests of Females Generally Increased. The number of ERO arrests of females generally increased more than 70 percent from calendar years 2015 through 2018, as shown in figure 24. Between 2015 and 2016, the two years the Priority Enforcement Program (PEP) was in effect, the number of ERO arrests remained stable, decreasing by less than 1 percent in that period. Following the issuance of the 2017 DHS memo, ERO arrests increased by 65 percent from calendar years 2016 to 2017, and increased by less than 5 percent in calendar year 2018. Arrests of Females by Criminality. During the same time, the proportion of arrests of non-criminal females increased each year from 43 percent in calendar year 2015 to 63 percent of total arrests of females in calendar year 2018. For the purposes of this report and our presentation of ICE data, we refer to potentially removable aliens without criminal convictions known to ICE as non-criminals and aliens with criminal convictions known to ICE as convicted criminals. According to officials, arrests of non-criminals include individuals who have been charged with but not convicted of a crime as well as those with no prior criminal history. Conversely, the proportion of ERO arrests of convicted criminal females decreased each year from 57 percent in calendar year 2015 to 37 percent in calendar year 2018, as shown in figure 24. <15. Detentions> Detentions of Females Increased Overall. The number of ERO detentions varied from calendar years 2015 through 2018, and increased more than 45 percent over the period, as shown in figure 25. ERO detention data include detentions resulting from both ICE and U.S. Customs and Border Protection (CBP) arrests. During the two years PEP was in effect, the number of ERO detentions of females increased by more than 28 percent from calendar years 2015 through 2016. Following the issuance of the DHS memo, the number of detentions decreased by about 36 percent in 2017, then increased by over 77 percent in calendar year 2018. Detentions of Females by Arresting Agency. Detentions of females resulting from CBP arrests at or between ports of entry accounted for most of the detentions of females each year from calendar years 2015 through 2018 (ranging from 84 to 94 percent), as shown in figure 25. Detentions of Females by Criminality. As shown in figure 26, the number of ERO detentions of non-criminal females varied, but increased overall from calendar years 2015 to 2018. These detentions accounted for most of the total ERO detentions of females each year (ranging from 87 to 92 percent). Most of the detention of non-criminal females resulted from CBP arrests (ranging from 91 to 98 percent) rather than ICE arrests. The number of ERO detentions of convicted criminal females stayed relatively stable from calendar years 2015 through 2018, and accounted for the minority of total ERO detentions (ranging from 8 to 13 percent). CBP and ICE arrests accounted for approximately the same number of detentions of convicted criminal females. <16. Removals> Removals of Females Increased Overall. The number of ERO removals of females remained relatively stable from calendar years 2015 through 2018, but increased overall by 6 percent over the period, as shown in figure 27. ERO removal data include removals resulting from both ICE and CBP arrests. 
During PEP, which was in effect during calendar years 2015 and 2016, the number of ERO removals increased by more than 2 percent. From calendar years 2016 to 2017, following the issuance of the 2017 DHS memo, the number of ERO removals decreased by more than 14 percent, then increased by more than 20 percent in 2018. Removals of Females by Arresting Agency. Each calendar year, removals resulting from CBP arrests at or between ports of entry accounted for most of the ERO removals of females (ranging from 80 to 90 percent), as shown in figure 27. Removals of Females by Criminality. From calendar years 2015 through 2018, the majority of ERO removals were of non-criminal females (ranging from 66 to 72 percent), as shown in figure 28. Most removals of non-criminal females resulted from CBP arrests (ranging from 88 to 97 percent), rather than ICE arrests. ERO removals of convicted criminal females varied, increasing overall, from calendar years 2015 to 2018, and accounted for the minority of ERO removals of females each year (ranging from 28 to 34 percent). The majority of removals of convicted criminal females also resulted from CBP arrests (ranging from 56 to 71 percent). Appendix V: Enforcement and Removal Operations Arrests of Juveniles by Age and Gender, 2015-2018 This appendix presents the overall number of Enforcement and Removal Operations (ERO) administrative arrests (arrests) of juveniles (persons encountered by ERO who have not reached 18 years of age) as well as the number of juvenile arrests by age and gender. We analyzed individual-level Immigration and Customs Enforcement (ICE) data to identify the number of ERO arrests of juveniles during calendar years 2015 through 2018. The Number of Arrests of Juveniles Increased Overall. The number of ERO arrests of juveniles increased overall by 53 percent from calendar years 2015 through 2018, as shown in figure 29. During the two years the Priority Enforcement Program was in effect, ERO arrests of juveniles increased 47 percent (from 887 arrests in 2015 to 1,307 arrests in 2016). Following issuance of the 2017 DHS memo, ERO arrests of juveniles increased 76 percent in calendar year 2017 (2,294 arrests), and decreased 41 percent in calendar year 2018 (1,361 arrests). Arrests of Juveniles by Age. The proportion of arrests for juveniles of all age groups (ages 0 to 6, 7 to 12, and 13 to 17) varied between calendar years 2015 and 2018, as shown in figure 30. For instance, the proportion of arrests of juveniles ages 0 to 6 between calendar years 2015 and 2018 ranged from 31 to 43 percent of the total number of arrests of juveniles. The proportion of arrests of juveniles ages 7 to 12 ranged from 16 to 23 percent of total arrests of juveniles during this same period, while the proportion for juveniles ages 13 to 17 ranged from 34 to 50 percent. Arrests of Juveniles by Gender. Each calendar year from 2015 through 2018, arrests of male juveniles accounted for the majority of ERO arrests of juveniles (ranging from 57 to 66 percent), as shown in figure 31. Appendix VI: Enforcement and Removal Operations Administrative Arrests by Country of Citizenship This appendix presents the number of U.S. Immigration and Customs Enforcement (ICE) Enforcement and Removal Operations (ERO) administrative arrests by country of citizenship for calendar years 2015 through 2018. Each year from 2015 through 2018, ERO administratively arrested aliens from over 200 countries.
Appendix VII: Enforcement and Removal Operations Detentions by Country of Citizenship This appendix presents the number of U.S. Immigration and Customs Enforcement (ICE) Enforcement and Removal Operations (ERO) detentions by country of citizenship for calendar years 2015 through 2018. Each year from 2015 through 2018, ERO detained aliens from over 200 countries. Appendix VIII: Enforcement and Removal Operations Removals by Country of Citizenship This appendix presents the number of U.S. Immigration and Customs Enforcement (ICE) Enforcement and Removal Operations (ERO) removals by country of citizenship for calendar years 2015 through 2018. Each year from 2015 through 2018, ERO removed aliens from almost 200 countries. Appendix IX: Review of Available Criminal Charges for Detentions of Selected Populations Resulting from ICE Arrests This appendix presents the number and type of criminal charges of U.S. Immigration and Customs Enforcement (ICE) Enforcement and Removal Operations (ERO) detentions of selected populations (aliens who are transgender, individuals with disabilities, pregnant, individuals with mental disorders, women who are nursing, or individuals who are elderly) resulting from ICE arrests. ICE administrative arrests of aliens for civil violations of U.S. immigration laws include arrests of both aliens with prior criminal convictions and those without prior criminal convictions. According to ICE, ICE officers electronically request and retrieve criminal history information about an alien from the FBI s National Crime Information Center (NCIC) database, which maintains a repository of federal and state criminal history information. ICE officers are also able to manually enter criminal history information in ICE s data system if they discover additional criminal history information that was not available in NCIC. ICE officers may also check for criminal convictions committed outside the United States, on a case-by-case basis. To identify which convictions or charges were immigration-related for these selected populations, we reviewed the criminal history information recorded in ICE s data system by ICE officers. ICE collected data to identify each of these populations beginning at different timeframes or for subsets within the population, as shown below. For information on the number of detentions of selected populations resulting from ICE arrests by criminal charge type, see tables 13 through 18. Appendix X: Length of Detentions of Selected Populations This appendix presents the length of U.S. Immigration and Customs Enforcement (ICE) Enforcement and Removal Operations detentions of selected populations (aliens who are transgender, individuals with disabilities, pregnant, individuals with mental disorders, women who are nursing, or individuals who are elderly). Available ICE data varied for each of these populations because ICE began collecting these data at different time periods. In addition, the length of some detentions from a particular year may be undetermined because they were still ongoing at the time of our review (as of May 15, 2019). We present available data for each of the populations. Detentions of Transgender Individuals. Based on available records each year from 2016 through 2018, the majority of detentions of transgender individuals were 90 days or less (ranging from 62 to 70 percent), as shown in table 19. Detentions of Individuals with Disabilities.
Based on available records in calendar years 2017 and 2018, the majority of detentions of individuals with disabilities were 90 days or less (56 and 65 percent, respectively), as shown in table 20. Detentions of Pregnant Women. From calendar years 2016 through 2018, the majority of detentions of pregnant women were 15 days or less (ranging from 71 to 93 percent), as shown in table 21. Detentions of Individuals with Mental Disorders at ICE Health Service Corps-staffed facilities. Based on available records each year from calendar years 2015 through 2018, the majority of detentions of individuals with mental disorders at ICE Health Service Corps (IHSC)- staffed facilities were 90 days or less (ranging from 59 to 71 percent), as shown in table 22. Detentions of Nursing Women at IHSC-staffed facilities. From calendar years 2015 through 2018, most detentions of nursing women at IHSC-staffed facilities were 30 days or less (ranging from 77 to 97 percent), as shown in table 23. Detentions of Elderly Individuals. Based on available records each year from calendar years 2015 through 2018, most of the detentions of elderly individuals were 90 days or less (ranging from 80 to 84 percent), with the majority being of 30 days or less, as shown in table 24. Appendix XI: Comments from the Department of Homeland Security Appendix XII: GAO Contact and Staff Acknowledgments <17. GAO Contact> <18. Staff Acknowledgments> In addition to the contact name above, Meg Ullengren (Assistant Director), Carissa Bryant (Analyst-in-Charge), Hiwotte Amare, Michele Fejfar, Eric Hauswirth, Dainia Lawes, Marycella Mierez, Heidi Nielson, and Claire Peachey made key contributions to this report. | Why GAO Did This Study
In January 2017, the President issued Executive Order 13768 that instructs the Department of Homeland Security (DHS) to enforce U.S. immigration law against all removable individuals. In February 2017, the Secretary of DHS issued a memorandum (2017 DHS memo) establishing policy and providing guidance related to the Executive Order. Within DHS, ICE is responsible for providing safe confinement for detained aliens, including certain vulnerable populations.
GAO was asked to review ICE immigration enforcement priorities, including those for vulnerable populations. This report examines (1) ICE data on arrests, detentions, and removals from calendar years 2015 through 2018; (2) the policies in effect for selected populations and any changes ICE made to align these policies with the 2017 DHS memo; and (3) the extent to which ICE collects data on selected populations and what those data show.
GAO analyzed ICE data on arrests, detentions, and removals from calendar years 2015 through 2018; reviewed policies and documents on eight populations GAO selected based on ICE policies and input from organizations that represent various vulnerable populations; and interviewed agency officials.
What GAO Found
The numbers of administrative arrests (arrests), detentions, and removals of aliens (people who are not citizens or nationals of the United States) by U.S. Immigration and Customs Enforcement (ICE) varied during calendar years 2015 through 2018, and increased overall for the period. Males, aliens from four countries—Mexico, Guatemala, El Salvador, and Honduras—and convicted criminals accounted for the majority of ICE arrests and removals. The majority of detentions were made up of males, aliens from the same four countries, and non-criminals.
ICE has policies related to six of the selected populations GAO examined, including aliens who are: transgender, individuals with disabilities, individuals with mental disorders, juveniles, parents of minors, and pregnant. These policies provide guidance on identifying, detaining, caring for, and removing aliens in these populations. After issuance of the 2017 DHS memo, ICE removed language from its existing policies for individuals who are pregnant and parents of minors that it determined to be inconsistent with the 2017 DHS memo.
Available ICE detention data show that detentions of transgender and pregnant individuals increased from calendar years 2016 to 2018 and detentions of individuals with disabilities increased from 2017 to 2018. Detentions of individuals with mental disorders and women who are nursing at facilities staffed by ICE medical personnel varied from calendar years 2015 to 2018. GAO found that ICE does not collect or maintain readily available data on detained parents or legal guardians of U.S. citizen or legal permanent resident minors, even though ICE policy requires officers to collect this information. Without such information, ICE headquarters officials cannot ensure that ICE officers are collecting and entering this information into the system as required by policy. ICE officials said they had considered actions to identify this population but, as of October 2019, were no longer considering these actions. Maintaining these data in a readily available format could help ensure that ICE personnel identify, evaluate, and share information on this population.
What GAO Recommends
GAO is recommending that ICE collect readily available data on detained parents or guardians of U.S. citizen and legal permanent resident minors. DHS did not concur with the recommendation. GAO continues to believe this recommendation is valid as discussed in the report. |
gao_GAO-19-595 | gao_GAO-19-595_0 | <1. Background> Education administers federal student aid programs, including the William D. Ford Federal Direct Loan (Direct Loan) program, through the Office of Federal Student Aid. Only Direct Loans are eligible for the TEPSLF and PSLF programs. Under the Direct Loan program, Education issues and oversees federal loans provided to students and contractors service these loans. Education currently contracts with nine loan servicers that each handle the billing and other services for a portion of the over $1 trillion in outstanding student loans provided through the Direct Loan program. These servicers track and manage day-to-day servicing activities. Education contracts with a single loan servicer to implement PSLF and TEPSLF, which includes responding to borrower inquiries, reviewing requests for loan forgiveness, and processing loan forgiveness for qualifying borrowers. Borrowers interested in pursuing loan forgiveness under either PSLF or TEPSLF must have their loans transferred to this loan servicer in order to proceed. TEPSLF is a temporary expansion of the PSLF program and the eligibility requirements for TEPSLF are largely the same as those of the PSLF program with a few key exceptions. Both provide eligible borrowers with forgiveness on the remaining balance of their Direct Loans after they have met all program requirements. To receive forgiveness for a loan under either PSLF or TEPSLF, borrowers are required to be employed in a public service job for 10 years when making 120 qualifying payments, at the time they apply for forgiveness, and at the time they receive forgiveness for their loans. Specifically, borrowers are generally required to: Work full-time for at least 10 years at a public service organization, a government organization, agency, or entity at any level (federal, state, local, or Tribal); a nonprofit, tax exempt organization (under section 501(c)(3) of the Internal Revenue Code); or another private nonprofit organization that provides certain public services. Not be in default on their loans. Make 120 on-time monthly loan payments for the full amount due on their bill. These monthly payments do not need to be consecutive. Key differences between PSLF and TEPSLF include: Qualifying repayment plans. PSLF generally requires borrowers to repay their loans through one of the eligible income-driven repayment plans or the Standard repayment plan. TEPSLF allows borrowers to qualify for loan forgiveness through several additional types of repayment plans that do not qualify for PSLF, including the Graduated repayment plan, Extended repayment plan, Consolidated Standard repayment plan, and Consolidated Graduated repayment plan. Funding. TEPSLF loan forgiveness is temporarily available to borrowers on a first-come, first-served basis until the $700 million is expended. The PSLF program will continue unless repealed by Congress. Specific payment requirements. For TEPSLF, the following two payments generally must be at least as much as the borrower would have paid under an income-driven repayment plan: (1) the payment made immediately prior to applying for TEPSLF, and (2) the payment made 12 months prior to applying for TEPSLF. In the context of high denial rates in the PSLF program and evidence that some borrowers were being misinformed by loan servicers about which repayment plans would qualify for PSLF, Congress appropriated $4.6 million for Education to conduct outreach on PSLF and TEPSLF. 
The legislation called for this outreach to be targeted to, among others, borrowers who would qualify for PSLF loan forgiveness except that they have made some or all of their payments through plans that do not qualify. <2. Education's Temporary Expanded Loan Forgiveness Process Is Not Clear to Borrowers> Congress directed Education to implement a simple method for borrowers to apply for TEPSLF within 60 days after the legislation funding the program was enacted. In response, Education established a process in which borrowers send an email to the TEPSLF loan servicer with their name and date of birth to initiate their TEPSLF review and establish their place in line for TEPSLF funds. In addition to sending an email to initiate a TEPSLF request, Education requires that a borrower have submitted a PSLF application before they can be considered for TEPSLF (see fig. 1). While a PSLF application is not explicitly required by statute for a borrower to be considered for TEPSLF loan forgiveness, Education officials said that the department added this step to the process because the PSLF application form captures information the TEPSLF loan servicer needs to determine a borrower's eligibility for TEPSLF. Education officials said that they added this step in order to roll out the TEPSLF program within the required 60 days. Education's TEPSLF website states that borrowers interested in this temporary expanded loan forgiveness option must submit a PSLF application in order to be considered for TEPSLF. Even with this information, our review of TEPSLF loan servicer data found that 71 percent of denied TEPSLF requests were denied because the borrower had not submitted a PSLF application. Education officials said that they believed that many borrowers send a TEPSLF request without submitting a PSLF application because TEPSLF funding is temporary and borrowers know that sending the email request will hold their place in line for the limited funds. However, borrowers who have not submitted the PSLF application are sent a denial letter from the TEPSLF loan servicer. According to Education officials, these borrowers would lose their place in line and have to reapply if they want to be reconsidered for TEPSLF. Officials from Education, the TEPSLF loan servicer, and representatives from selected organizations representing student borrowers all said that the requirement to submit a PSLF application to be eligible for TEPSLF loan forgiveness can confuse borrowers. For example, Education officials acknowledged that the majority of TEPSLF requests come from borrowers who have not first submitted a PSLF application, and officials from the TEPSLF loan servicer said that borrowers who called were frequently confused when they received a TEPSLF denial based on the fact that they had not first submitted the PSLF application. In addition, representatives from the three student borrower groups we interviewed said that having to apply for PSLF before requesting TEPSLF often confuses borrowers and, in the opinion of officials from two of the three groups, leads directly to large numbers of TEPSLF denials. We also found some examples of borrower confusion about the PSLF application requirement in our review of borrower complaints. In three TEPSLF borrower complaints filed with Education that we reviewed, the borrowers expressed confusion and frustration about why they were being asked to submit an application for a program (PSLF) that they knew they did not qualify for in order to receive TEPSLF loan forgiveness.
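To make the review sequence described above concrete, the sketch below models the order in which a TEPSLF request is evaluated: the servicer first looks for a PSLF application on file and only then considers the remaining requirements, including the TEPSLF-specific test on recent payment amounts. This is an illustrative simplification for this discussion, not Education's or the servicer's actual system; the function, field names, and data layout are hypothetical.

```python
# Illustrative sketch of the TEPSLF review order described above.
# Field names and the structure of the checks are simplifying
# assumptions, not the loan servicer's actual business rules.

def review_tepslf_request(borrower):
    """Return a (decision, reason) tuple for a TEPSLF request."""
    # Step 1: a PSLF application must already be on file; without one,
    # the request is denied and no further review occurs.
    if not borrower.get("pslf_application_on_file", False):
        return ("deny", "no PSLF application submitted")

    # Step 2: baseline requirements shared with PSLF (simplified).
    if not borrower.get("has_direct_loans", False):
        return ("deny", "no qualifying federal Direct Loans")
    if borrower.get("qualifying_payments", 0) < 120:
        return ("deny", "fewer than 120 qualifying payments")

    # Step 3: TEPSLF-specific payment test - the payment made just before
    # applying and the payment made 12 months prior must each be at least
    # what the borrower would have paid under an income-driven plan.
    idr_amount = borrower.get("income_driven_amount", 0)
    if (borrower.get("last_payment", 0) < idr_amount
            or borrower.get("payment_12_months_prior", 0) < idr_amount):
        return ("deny", "recent payments below income-driven amount")

    return ("approve", "meets the TEPSLF requirements modeled here")


print(review_tepslf_request({"pslf_application_on_file": False}))
# -> ('deny', 'no PSLF application submitted')
```

Because a missing PSLF application short-circuits the review at the first step, requests from borrowers who skip that application never reach the payment checks, which is consistent with the denial patterns discussed below.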
Education's policy of requiring all borrowers requesting TEPSLF to first submit a PSLF application has created a confusing process for borrowers. Education officials said that integrating the TEPSLF request into the PSLF application (for example, by including a checkbox that borrowers could use to request a TEPSLF review if the loan servicer finds they are ineligible for PSLF) would eliminate the need for borrowers to take multiple steps, reduce the number of borrowers who are denied, and improve service to borrowers. Education officials estimated that integrating the TEPSLF request into the existing PSLF process would require roughly a year in order to revise the PSLF application form, borrower communications, and program procedures, and to work with the loan servicer to implement new contractual requirements. Education officials told us that they will be implementing a new online portal in fall 2019 to provide better overall service to borrowers. They also stated that the new portal could incorporate an online integrated PSLF and TEPSLF application if they had sufficient resources and time, but that there were currently no specific plans to do so. While Education rolled out the process for requesting TEPSLF loan forgiveness in 2 months, it has not created a borrower-friendly TEPSLF process. This does not align with Education's strategic plan objective to improve the quality of service to customers across the student aid life cycle. Further, Congress created the temporary expansion to the PSLF program to help certain borrowers who faced barriers obtaining PSLF loan forgiveness and required Education to develop a simple method for borrowers to apply for TEPSLF. While initiating a TEPSLF request through an email is straightforward, requiring borrowers to have submitted a PSLF application to successfully pursue TEPSLF loan forgiveness is confusing and inefficient for borrowers because borrowers must take multiple steps and complete an application for a program they do not qualify for. If Education were to allow borrowers to request TEPSLF loan forgiveness through an integrated application form, it would improve service to borrowers, reduce borrower confusion about how to obtain loan forgiveness, and better align with its strategic plan objective to improve service to borrowers. Further, although TEPSLF is a temporary opportunity, it could be years before the $700 million appropriation is exhausted, and it is therefore worthwhile for Education to invest resources in improving the process now. <3. Ninety-nine Percent of Borrowers' TEPSLF Requests Have Been Denied and Certain Denial Letters Do Not Include Important Information> <3.1. Education Has Approved 1 Percent of TEPSLF Loan Forgiveness Requests and Spent 4 Percent of TEPSLF Loan Forgiveness Funds in a Year> From May 2018 through May 2019, about 40,000 borrowers submitted TEPSLF requests for loan forgiveness and Education has approved or denied about 54,000 separate TEPSLF requests. Education has approved 1 percent (661) and denied 99 percent (53,523) of these requests, according to the most recent data from the TEPSLF loan servicer (see fig. 2). Of the 53,523 denied TEPSLF requests, about 38,000 were ineligible for consideration and were therefore denied because the borrower had not submitted a PSLF application, according to data from the TEPSLF loan servicer. Of the remaining denied requests, about 15,000 were denied because they did not meet other program requirements (see fig. 3).
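The approval and denial shares reported above follow directly from the servicer's counts; the rough arithmetic below uses the rounded figures from the data as a sanity check.

```python
# Back-of-the-envelope check of the approval and denial shares above,
# using the rounded counts reported from the TEPSLF servicer's data.
approved = 661
denied = 53_523
processed = approved + denied                 # about 54,000 requests

print(round(100 * approved / processed, 1))   # 1.2 -> reported as "1 percent"
print(round(100 * denied / processed, 1))     # 98.8 -> reported as "99 percent"

# Of the denials, roughly 38,000 lacked a PSLF application and about
# 15,000 failed other program requirements.
print(round(100 * 38_000 / denied))           # ~71 percent of denials
```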
As we previously noted, under the current TEPSLF review process, the loan servicer first checks to see if the borrower requesting TEPSLF has submitted a PSLF application. If the borrower has not done so, the loan servicer does not conduct any further review of the borrower s request and sends the borrower a denial letter informing them they have to submit the PSLF application before the borrower can be considered for TEPSLF. Without the loan servicer conducting any further review of a borrower s request, it is impossible to know how many of the roughly 38,000 requests that were denied because the borrower had not yet submitted a PSLF application would have otherwise met all other program requirements and been approved for TEPSLF loan forgiveness. The large number of TEPSLF requests denied for not submitting a PSLF application provides further evidence that many borrowers may be confused about the process for obtaining TEPSLF loan forgiveness. Further, more than 5,000 (about 10 percent) of the TEPSLF requests were denied because the borrower had not been repaying their loans for at least 10 years, which indicates that they had not yet made 120 qualifying payments a straightforward program requirement. Since TEPSLF became available in May 2018, Education has approved TEPSLF loan forgiveness totaling about 4 percent (approximately $26.9 million) of the $700 million appropriated for TEPSLF loan forgiveness, according to the most recent data available from the TEPSLF loan servicer at the time of our review (see fig. 4). Of the 656 borrowers approved for TEPSLF loan forgiveness, the borrowers received an average of about $41,000 in loan forgiveness, with loan forgiveness amounts ranging from about $190 to about $227,000. <3.2. Education Does Not Fully Inform Borrowers about Available Options to Contest Denial Decisions> Education does not provide complete information to borrowers about options they have to contest a denied TEPSLF request. Specifically, the letter the TEPSLF loan servicer sends to the borrower communicating a decision to deny the TEPSLF request includes the reason for the denial and the TEPSLF loan servicer s customer service number for borrowers to call with questions. An FSA official told us that while there is no formal process for borrowers who are dissatisfied with their TEPSLF or PSLF determinations to contest them, borrowers do have additional options for addressing concerns, such as an additional review by the TEPSLF servicer, or a complaint to the FSA Feedback System or Ombudsmen (see fig. 5). According to Education officials, when a borrower is denied loan forgiveness, they can call the TEPSLF loan servicer s customer service number with questions about the denial. TEPSLF servicer officials said that if the customer service representative is unable to resolve the borrower s questions, the representative can elect to elevate the borrower s concern internally within the TEPSLF loan servicer, which may result in a second review by loan servicer management. Education and TEPSLF loan servicer officials said that borrowers who are not able to resolve their issues with the loan servicer can bring their issues directly to Education. Specifically, if a borrower is dissatisfied with their TEPSLF decision, they can submit their concern through the online FSA Feedback Tool. Borrowers can also contest the decision with the FSA Ombudsman Group. 
Education officials told us it does not provide information about these options in its denial letters or on its TEPSLF website, noting that borrowers could find this information at the bottom of FSA s main website. Education officials also stressed the importance of borrowers resolving their concerns first with their loan servicer directly before pursuing other avenues, and said that this is part of the reason why they do not include this information in letters sent to borrowers. All TEPSLF denial letters include a statement at the bottom of the letter indicating that if borrowers had questions about the information in their denial letter, they should call the general customer service number at the TEPSLF loan servicer for assistance. The letters did not explain how the servicer could potentially do a second review or subsequently refer the matter to Education. Information about the potential for a second review at the loan servicer and the option to raise an issue with Education directly would help borrowers who are unable to resolve their concerns by calling the general customer service number at the loan servicer. Additional information on options for contesting decisions is not necessary for all TEPSLF borrowers who are denied. For example, it may not be appropriate to include this information in denial letters sent to borrowers who do not meet basic program requirements for example, those who have no federal Direct Loans. However, borrowers who are denied for reasons that are more prone to error, such as having fewer than 120 qualifying payments, are not made aware of all the available options so they can make informed decisions about how to best resolve their concerns. We previously reported that Education does not ensure that the loan servicer responsible for PSLF and TEPSLF is receiving consistent loan payment history information from other loan servicers, increasing the risk of inaccurate qualifying payment counts. This also raises the risk of inappropriate denials for TEPSLF. Our review of TEPSLF complaints made to Education from borrowers found eight examples of borrowers contesting the loan servicer s determination of the number of qualifying payments. In six of these instances, the TEPSLF servicer determined that the borrowers were correct and had met requirements for loan forgiveness. Given the risk of denial errors, additional information about options for borrowers who are dissatisfied with their TEPSLF denial determination is especially important. While there is information about options for contesting decisions at the bottom of FSA s main website, it is not incorporated into the TEPSLF website and borrowers may not know where to find this information. Federal internal control standards for external communication with stakeholders call for communication of quality information with external parties to achieve program objectives. Providing this information in relevant denial letters and Education websites will increase the likelihood that borrowers with valid concerns will have their TEPSLF requests appropriately resolved. <4. Education Contacts Certain Borrowers Directly about TEPSLF, but Its General TEPSLF Outreach Activities Are Limited> <4.1. Education Conducts Direct Outreach to Certain Individual Borrowers> Education and the TEPSLF loan servicer conduct direct outreach to certain borrowers about TEPSLF. 
For example, when TEPSLF was first rolled out, Education sent a notice to over 1,000 borrowers who had been denied PSLF due to a lack of 120 qualifying payments, but who had been in repayment for at least 10 years. Education officials told us that they had identified this group of borrowers as the most likely to be eligible for TEPSLF. This notice informed borrowers of the new TEPSLF loan forgiveness opportunity and told them how to apply for it. Education officials told us that they continue to review the PSLF denial list on a weekly basis and send notices to those whom they have determined to be the most likely to qualify for TEPSLF loan forgiveness. In addition, borrowers who have previously expressed interest in TEPSLF by sending an email to request TEPSLF loan forgiveness will be sent a TEPSLF outreach letter by the TEPSLF loan servicer under certain conditions: after submitting a new employment certification form, or after submitting a new PSLF application that is subsequently denied. In these two circumstances, the TEPSLF loan servicer sends the borrower a letter suggesting that they may now be eligible for TEPSLF loan forgiveness and would need to re-request such loan forgiveness. <4.2. Education s General TEPSLF Online Outreach Is Limited> Education officials told us that the agency has focused on a broad, general outreach strategy; however, we found that its online information is limited because TEPSLF information is not included in several key online sources. Education and TEPSLF loan servicer officials told us that they primarily direct borrowers to online sources to inform them about TEPSLF requirements. For example, Education has created a TEPSLF-specific website and the TEPSLF loan servicer s website references the TEPSLF opportunity and links to Education s TEPSLF website if borrowers would like more information. However, we found that while all nine of the loan servicers websites contain some information on the PSLF program, none of them (other than the TEPSLF loan servicer) included TEPSLF information on their websites or provided a link to Education s TEPSLF website. While Education officials told us that only the TEPSLF servicer is required to have TEPSLF information on its website, other loan servicers may also serve borrowers who are potentially eligible but may be unaware of TEPSLF. In addition, borrowers with other loan servicers who are interested in TEPSLF may not be aware that they have to transfer to the TEPSLF loan servicer to obtain loan forgiveness. Further, according to agency officials, Education s PSLF Online Help Tool also does not include any TEPSLF information, and Education has not created a similar tool for TEPSLF. Education officials told us that the PSLF Online Help Tool, which assists borrowers with determining whether their loans and employment qualify and which PSLF forms they need to submit, is one of the primary PSLF outreach mechanisms to inform borrowers about PSLF eligibility. According to Education data, since the rollout of the online tool in December 2018 through the beginning of March 2019, about 340,000 users have used the online tool, and about 100,000 have logged on and have collectively generated about 40,000 PSLF-related forms, such as PSLF application forms. However, according to Education officials, the online tool does not include any information on TEPSLF. 
Education officials stated that the first phase of the Online Help Tool was focused on informing borrowers about eligibility requirements for PSLF and that as the department makes enhancements to phase two of the Online Help Tool, it could consider adding TEPSLF information and functionality. Both FSA and TEPSLF loan servicer officials stated that having information on TEPSLF integrated into the PSLF Online Help Tool would be beneficial for borrowers and would reduce confusion about TEPSLF. Federal internal control standards state that management should externally communicate the necessary quality information to achieve the entity's objectives. Including TEPSLF information in the PSLF Online Help Tool and noting it on all loan servicer websites could increase borrower awareness of TEPSLF and the likelihood that borrowers are able to take advantage of this opportunity. <5. Conclusions> The loan forgiveness opportunity through TEPSLF is an expansion of the PSLF program and helps borrowers who hoped to qualify for PSLF but who did not realize they were in an ineligible loan repayment plan. Instead of integrating the expanded loan forgiveness opportunity into the existing PSLF process, Education required borrowers to have submitted a separate PSLF application before the loan servicer will consider a borrower's TEPSLF request. The large number of requests denied because borrowers had not submitted a PSLF application suggests that borrowers are confused about this requirement. In some cases, these borrowers may have been working in public service jobs for years believing they were on track for loan forgiveness, only to find out later that they did not qualify. While the loan forgiveness opportunity through TEPSLF is only available until the $700 million in funding has been spent, a relatively small amount of total funding has been spent so far. It is possible that the program could continue for years, supporting the case for investing in improvements to the process now. Integrating the process for obtaining loan forgiveness through TEPSLF into the PSLF application would be easier for borrowers and help Education meet its goal to improve customer service. Information provided in TEPSLF denial letters and on the TEPSLF website does not explain what options are available to borrowers if they want to contest the loan servicer's determination. While additional information on this topic is not necessary for borrowers who do not meet basic program requirements (for example, those who have no qualifying federal loans), this information would help certain borrowers whose TEPSLF requests may have been denied. By including this additional information on the TEPSLF website and in denial letters to these borrowers, the borrowers can then pursue additional options to contest the denial and help Education avoid denial errors. Finally, Congress provided funding and tasked Education with conducting outreach to borrowers to help increase overall borrower awareness of the public service loan forgiveness programs. While Education has engaged in some outreach activities, Education is missing opportunities to reach out to borrowers potentially eligible for TEPSLF, specifically by not requiring all loan servicers' websites to include information about TEPSLF and by not including TEPSLF information in the PSLF Online Help Tool. TEPSLF was created to provide relief to a group of borrowers who were ineligible because they were repaying their loans on repayment plans that were not eligible for the original PSLF program.
Without improved TEPSLF outreach in these two areas, however, many of these borrowers who were initially unable to qualify for the PSLF program may be unaware of the TEPSLF opportunity that was designed to help them. <6. Recommendations for Executive Action> We are making the following four recommendations to Education s Office of Federal Student Aid: The Chief Operating Officer of the Office of Federal Student Aid should integrate the TEPSLF request into the PSLF application, for example, by including a checkbox on the PSLF application, to provide borrowers a more seamless way to request TEPSLF consideration. (Recommendation 1) The Chief Operating Officer of the Office of Federal Student Aid should provide certain borrowers, for example, those who are denied TEPSLF for not having 120 qualifying payments, with more information about options available to contest TEPSLF decisions on the TEPSLF website and in their denial letters. (Recommendation 2) The Chief Operating Officer of the Office of Federal Student Aid should require all loan servicers to provide TEPSLF information on their websites. (Recommendation 3) The Chief Operating Officer of the Office of Federal Student Aid should include TEPSLF information in its PSLF Online Help Tool. (Recommendation 4) <7. Agency Comments and Our Evaluation> We provided a draft of this report to Education for its review and comment. In its comments, reproduced in appendix I, Education concurred with each of our recommendations and identified steps it plans to take to implement them. To make the TEPSLF loan forgiveness process easier for borrowers, Education stated that it will integrate the TEPSLF request into the PSLF application as part of the improvements planned for the PSLF application under its new online interface for student borrowers. Regarding our recommendation to provide certain borrowers with more information about options available to contest TEPSLF decisions, Education stated that it will add information for borrowers on the procedures for contesting TEPSLF decisions to FSA s specific TEPSLF website and in relevant TEPSLF denial letters. To improve outreach and help increase overall borrower awareness of TEPSLF, Education stated it will require all loan servicers to provide TEPSLF information on their websites within 120 days. In addition, Education stated that it will also include TEPSLF information in the PSLF Help Tool. We also provided relevant report sections to the TEPSLF loan servicer for technical comments. The TEPSLF servicer provided technical comments, which we incorporated as appropriate. We are sending copies of this report to relevant congressional committees, the Secretary of Education, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0534 or emreyarrasm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Comments from the Department of Education Appendix II: GAO Contact and Staff Acknowledgments <8. GAO Contact> <9. Staff Acknowledgments> In addition to the contact named above, Michelle L. St. Pierre (Assistant Director), Nora Boretti (Analyst-In-Charge), and Aaron Karty made significant contributions to this report. Also contributing to this report were James E. 
Bennett, Deborah Bland, Alicia P. Cackley, Marcia L. Carlsen, Linda A. Collins, William W. Colvin, Alex Galuten, Sheila R. McCoy, Jean L. McSween, Jessica S. Orr, Debra Prescott, and Ashanta Williams.
Related GAO Products
Public Service Loan Forgiveness: Education Needs to Provide Better Information for the Loan Servicer and Borrowers. GAO-18-547. Washington, D.C.: Sept. 5, 2018.
Federal Student Loans: Further Actions Needed to Implement Recommendations on Oversight of Loan Servicers. GAO-18-587R. Washington, D.C.: July 27, 2018.
Federal Student Loans: Education Could Improve Direct Loan Program Customer Service and Oversight. GAO-16-523. Washington, D.C.: May 16, 2016.
Federal Student Loans: Key Weaknesses Limit Education's Management of Contractors. GAO-16-196T. Washington, D.C.: Nov. 18, 2015.
Federal Student Loans: Education Could Do More to Help Ensure Borrowers Are Aware of Repayment and Forgiveness Options. GAO-15-663. Washington, D.C.: Aug. 25, 2015. | Why GAO Did This Study
In the context of high denial rates in the PSLF program, Congress appropriated $700 million in 2018 for a temporary expansion to the public service loan forgiveness program for certain borrowers who were not eligible for the original PSLF program. TEPSLF funds are available on a first-come, first-served basis. GAO was asked to review TEPSLF.
This report examines (1) the extent to which the process for obtaining TEPSLF is clear to borrowers, (2) what is known about loan forgiveness approvals and denials, and (3) the extent to which Education has conducted TEPSLF outreach. GAO analyzed data from the TEPSLF servicer on loan forgiveness requests from May 2018 through May 2019 (the most recent available at the time of our review); reviewed Education's guidance and instructions for the TEPSLF servicer; assessed Education's outreach activities; interviewed officials from Education, the TEPSLF servicer, and selected groups representing borrowers; and reviewed borrower complaints about TEPSLF submitted to Education.
What GAO Found
The Department of Education's (Education) process for obtaining Temporary Expanded Public Service Loan Forgiveness (TEPSLF) is not clear to borrowers. Established in 2007, the Public Service Loan Forgiveness (PSLF) program forgives federal student loans for borrowers who work for certain public service employers for at least 10 years while making 120 payments via eligible repayment plans, among other requirements. In 2018, Congress funded TEPSLF to help borrowers who faced barriers obtaining PSLF loan forgiveness because they were on repayment plans that were ineligible for PSLF. Congress also required Education to develop a simple method for borrowers to apply for TEPSLF. Education established a process for borrowers to initiate their TEPSLF requests via e-mail. The agency also required TEPSLF applicants to submit a separate PSLF application before it would consider their TEPSLF request. Agency officials said they established this process to quickly implement TEPSLF and obtain the information needed to determine borrower eligibility. However, the process can be confusing for borrowers who do not understand why they must apply separately for PSLF—a program they are ineligible for—to be eligible for TEPSLF. Requiring borrowers to submit a separate PSLF application to pursue TEPSLF, rather than having an integrated request such as by including a checkbox on the PSLF application for interested borrowers, is not aligned with Education's strategic goal to improve customer service to borrowers. As a result, some eligible borrowers may miss the opportunity to have their loans forgiven.
As of May 2019, Education had processed about 54,000 requests for TEPSLF loan forgiveness since May 2018, and approved 1 percent of these requests, totaling about $26.9 million in loan forgiveness (see figure). Most denied requests (71 percent) were denied because the borrower had not submitted a PSLF application. Others were denied because the borrower had not yet made 120 qualifying payments (4 percent) or had no qualifying federal loans (3 percent).
More than a year after Congress initially funded TEPSLF, some of Education's key online resources for borrowers do not include information on TEPSLF. Education reported that it has conducted a variety of PSLF and TEPSLF outreach activities, such as emails to borrowers, social media posts, and new website content. However, Education does not require all federal loan servicers (who may serve borrowers interested in public service loan forgiveness) to include TEPSLF information on their websites. Further, Education's Online Help Tool for borrowers, which provides information on PSLF eligibility, does not include any information on TEPSLF. Requiring all loan servicers to include TEPSLF information on their websites and including TEPSLF information in its online tool for borrowers would increase the likelihood that borrowers are able to obtain the loan forgiveness for which they may qualify.
What GAO Recommends
GAO is making four recommendations, including that Education integrate the TEPSLF request into the PSLF application, require all loan servicers to include TEPSLF information on their websites, and include TEPSLF information in its PSLF Online Help Tool. Education agreed with GAO's recommendations.
gao_GAO-20-396 | gao_GAO-20-396_0 | <1. Background> FEMA is the federal agency primarily responsible for assisting state and local governments, private entities, and individuals to prepare for, mitigate, respond to, and recover from natural disasters, including floods. Floods are the most frequent natural disasters in the United States, causing billions of dollars of damage annually. In 1968, Congress passed the National Flood Insurance Act, which created NFIP, to address the increasing amount of flood damage, the lack of readily available insurance for property owners, and the cost to the taxpayer for flood-related disaster relief. Since its inception, NFIP has served as a key component of FEMA s efforts to minimize or mitigate the damage and financial impact of floods on the public, as well as to limit the need for federal assistance after floods occur. A primary goal of NFIP is to minimize flood-related property losses by making flood insurance available on reasonable terms and encouraging its purchase by commercial and residential property owners who need flood insurance protection. The program focuses on areas in communities that are at the highest risk of flooding, known as special flood hazard areas. As of November 2019, 22,436 communities across the United States and its territories voluntarily participated in NFIP by adopting and agreeing to enforce flood-related building codes and floodplain management requirements. <1.1. FEMA Reviews of Community Compliance> FEMA uses community assistance visits and community assistance contacts to oversee community enforcement of NFIP requirements. Community assistance visits are on-site assessments of a community s floodplain management program and its knowledge and understanding of NFIP s floodplain management requirements. During the visit, FEMA also helps the community remedy any program deficiencies or violations. Some visits are conducted by FEMA regional office staff and others by state floodplain management personnel, through funding from FEMA s Community Assistance Program (State Support Services Element). Community assistance contacts are usually done by telephone, and their purpose is to establish or re-establish contact with an NFIP community regarding any existing problems or issues and to offer assistance if necessary. These contacts generally include a broad discussion of the community s floodplain management activities, as well as any outstanding deficiencies and violations and community actions taken to resolve them. NFIP regulations allow FEMA to place a community on probation or to suspend the community from the program if it does not meet or enforce NFIP requirements. <1.2. Substantial Damage Assessments> After a flood, local officials in communities that participate in NFIP must determine whether the proposed repairs to a damaged building are above or below FEMA s threshold for substantial improvement or repair of substantial damage. Substantial improvement refers to any reconstruction, rehabilitation, addition, or other improvement of a structure that equals or exceeds 50 percent of the market value of the structure before the start of the construction. Repair of substantial damage means that the cost of restoring the structure to its pre-damage condition equals or exceeds 50 percent of the market value of the structure before the damage occurred. 
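A minimal sketch of the 50 percent thresholds just described may help make them concrete. The dollar amounts are invented for illustration; actual determinations rest on local repair cost estimates and market valuations.

```python
# Illustrative application of NFIP's 50 percent thresholds described
# above. The example values are hypothetical.

def is_substantial_damage(repair_cost, pre_damage_market_value):
    """Repair cost equals or exceeds 50% of the pre-damage market value."""
    return repair_cost >= 0.5 * pre_damage_market_value

def is_substantial_improvement(improvement_cost, pre_construction_value):
    """Improvement cost equals or exceeds 50% of the structure's value
    before the start of construction."""
    return improvement_cost >= 0.5 * pre_construction_value

# Example: a home worth $200,000 before a flood that needs $120,000 of
# repairs is substantially damaged (120,000 / 200,000 = 60 percent), so a
# rebuild must meet current NFIP requirements, such as elevation.
print(is_substantial_damage(120_000, 200_000))   # True
print(is_substantial_damage(80_000, 200_000))    # False (40 percent)
```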
Substantially improved and substantially damaged buildings must be brought into compliance with NFIP requirements for new construction, including the requirement that lowest floors be elevated above the level indicated by the current NFIP flood map. These requirements help reduce future flood risk by elevating or otherwise mitigating properties at risk of flooding. FEMA officials generally do not conduct substantial damage assessments themselves but offer communities tools they can use to collect information and perform damage assessments. When a building insured under NFIP suffers a flood loss and is declared substantially damaged, the owner of the building can apply to receive up to $30,000, on top of any claim payment, to help rebuild according to current NFIP requirements, under a program called Increased Cost of Compliance. <1.3. FEMA s Community Rating System> In 1990, FEMA implemented a voluntary rating system to recognize and encourage community floodplain management activities that exceed the minimum NFIP requirements. Communities may apply to join CRS if they are in full compliance with the minimum NFIP floodplain management requirements. As of June 2017, about 5 percent of NFIP communities participated in CRS, and more than 69 percent of all flood insurance policies were written in CRS communities. Communities are grouped into classes based on their ratings and can move up in ratings by earning CRS credits for activities such as increasing public information about flood risks, preserving open space, taking steps to reduce flood damage, and preparing residents for floods. The three goals of the CRS program are to reduce flood damage to insurable property by reducing existing buildings risk of flood damage and by protecting new buildings from current and future flood hazards; strengthen and support the insurance aspects of NFIP, in particular by encouraging communities to implement NFIP flood maps and increasing residents awareness of flood risk so they purchase and maintain flood insurance policies; and foster a comprehensive approach to floodplain management, such as by ensuring that new development does not cause adverse impacts elsewhere in the watershed or on other properties. As the community earns credits for additional flood-mitigation activities, residents and property owners in special flood hazard areas become eligible for increased NFIP policy premium discounts. Each CRS class improvement produces a 5 percent greater discount on flood insurance premiums for properties in the special flood hazard area, up to a maximum of 45 percent. FEMA contracts with a private company to administer many aspects of the CRS program. This contractor verifies the activities of communities on a 5-year cycle, though some communities may be visited on a 3-year cycle as their CRS class and discount improve. Communities can lose discounts if they do not sustain their activities. <1.4. NFIP Communities in Texas and Florida> Communities in Texas and Florida made up 2 percent and 6 percent, respectively, of all NFIP communities nationwide, and their residents purchased almost half of all NFIP policies in force in 2019 (see fig. 1). After Hurricanes Harvey and Irma, property owners in Texas, Florida, and other states made about 98,000 flood insurance claims to NFIP and received a total of almost $10 billion. According to FEMA, Hurricane Harvey required a disaster response that was the largest in Texas state history. 
Nearly 80,000 homes had at least 18 inches of floodwater, and 23,000 of those had more than 5 feet. Older homes that were not built to minimum NFIP standards sustained the greatest damage. In Florida, Hurricane Irma caused widespread damage to residential and commercial buildings and infrastructure, and flood damage occurred particularly in low-lying areas. <2. NFIP's Requirements Seek to Limit Future Flooding but Communities Described Implementation Challenges> <2.1. Communities Must Meet Certain Floodplain Management Requirements> Community participation in NFIP is voluntary, but communities must join NFIP for their residents to purchase flood insurance through the program. To join NFIP, communities must adopt and enforce FEMA-approved building standards, floodplain management strategies, and floodplain management regulations to reduce future flood damage. FEMA relies on the communities to notify it of changing flood hazards and help update flood hazards on NFIP flood maps. (See figure 2 for an example of how development can increase flood risk.) Communities designate a floodplain administrator, who may be a local member of the community, such as a building inspector, community zoning official, engineer, or planner, or an entity contracted by the community, such as a county, regional planning agency, another jurisdiction or authority, or a private firm (44 C.F.R. 60.2(h)). Communities must adopt NFIP flood maps, including base flood elevations, or the elevation to which FEMA anticipates floodwater will rise during a flood (see fig. 3). Communities must require permits for all development in special flood hazard areas. The permit requirement includes both the construction of buildings or other structures and other land operations, such as mining, paving, excavation, or drilling, which can increase the risk of flooding by obstructing floodwater flows. Development must not increase the flood hazard on other properties. NFIP requires communities to regulate development to ensure that new development does not increase the risk of flooding for surrounding properties (44 C.F.R. 60.3). New or substantially improved buildings in special flood hazard areas must be elevated to or above the base flood elevation indicated on the NFIP flood map. FEMA allows elevation on fill; elevation on posts, piers, or columns; or elevation on walls or a crawlspace (see fig. 4). Some communities set standards higher than what is required by NFIP. For example, Harris County, Texas, and Key West, Florida, require new or substantially improved construction to be elevated 2 feet and 1 foot, respectively, above NFIP's base flood elevation level. In addition, several communities in Florida have cumulative substantial improvement rules. The rules require property owners who make substantial improvements over a period of time to a home built before the community implemented NFIP flood maps to elevate or bring the home into NFIP compliance. Several FEMA studies show that homes that are rebuilt above the base flood elevation suffer less damage in subsequent floods. <2.2. Communities Cited Challenges in Implementing Requirements, Including Difficulty Inspecting Buildings after a Flood> Challenges expressed by some community officials whom we interviewed included difficulty enforcing NFIP requirements after a storm, retaining experienced floodplain management staff, and implementing updated NFIP flood maps. Difficulty inspecting buildings after a flood. Officials in several communities discussed the challenges related to inspecting buildings for substantial damage after a flood.
In one community, inspectors had difficulty assessing flood damage because officials allowed construction to begin immediately and without a building permit. Floodplain officials in two communities said insurance adjustors may pay claims before inspectors have assessed damage, hindering inspectors ability to determine if repairs will exceed 50 percent of the home s value if the homeowner begins to repair damage before the inspection. Challenges retaining floodplain management staff. In eight of the 19 communities we visited, officials cited difficulties obtaining or retaining sufficient staff to perform work such as conducting substantial damage assessments or fulfilling CRS paperwork requirements. For example, one floodplain official told us that after a major storm, the small floodplain management office was overwhelmed with trying to inspect damaged buildings to determine which would require rebuilding to current NFIP standards. Another community we visited did not have a full-time floodplain manager and relied on its building department, which is responsible for issuing building permits, to implement NFIP requirements. Officials said that retaining floodplain management staff is challenging due to factors such as the overwhelming amount of work that had to be performed after a hurricane and low prioritization of floodplain management in noncoastal communities. Two officials said that floodplain management is a difficult job, which can lead to high turnover of staff. Difficulty adopting new NFIP flood maps. Officials in three communities said the introduction of a new flood map can create difficulties. For example, an official said a new flood map can increase the size of the special flood hazard area and require more property owners to buy flood insurance. Another official said that new maps also can raise the base flood elevation, which can raise the cost of insurance premiums. A community official said that his community has been working with FEMA to revise a map for a few years and noted that some property owners in the community planned to challenge the new maps, further delaying adoption. <3. FEMA s Oversight Is Hindered by Limited Community Visits and Incomplete Data> <3.1. FEMA Uses Community Assistance Visits to Oversee NFIP Community Compliance> FEMA s primary method of verifying community compliance with NFIP requirements is through community assistance visits. These visits, along with community assistance contacts which are in-depth discussions that can be conducted by telephone are intended to help FEMA prevent, identify, and mitigate deficiencies in a community s floodplain management. According to FEMA s guidance, FEMA or state specialists who conduct these visits are to take the following steps (see fig. 5): Prepare for the visit. Specialists prepare for the visit by learning about the characteristics of the community and its prior history with NFIP in order to identify potential issues. Conduct the visit. Specialists tour the community, meet with local officials, and inspect files, among other activities. During the tour, specialists make observations, such as noting for later file inspection whether new structures or structures undergoing major repair meet permit documentation and base flood elevation requirements, and whether major new developments will divert flood water from special flood hazard areas. The specialists meet with local officials to assess the community s floodplain management program and to provide technical assistance. 
Specialists also inspect the community s files to assess the documentation and activities of its floodplain management program. Document findings. Within 30 days of the visit, the specialists are to enter information obtained from the visit, including specific information on deficiencies and violations, into FEMA s Community Information System. Follow up with the community. After completing the visit, the specialists who conducted the visit are to ensure that the community resolves deficiencies and violations found during the visit in a timely manner. Specialists are to consider additional action, including enforcement actions, if deficiencies remain. In our visits to NFIP communities, officials told us that community assistance visits generally were consistent with the process we found documented in FEMA s guidance. For example, community officials said specialists toured the floodplains to observe structures (such as new construction, renovations, and waterfront developments) and inspected community files, including permits and elevation certificates. The community officials said specialists generally spent from 1 to 7 days on site performing their reviews. <3.2. Some High-Risk Communities Were Not Visited Between 2008 and 2019, and Many Were Visited Only Once> Until recently, FEMA s guidance documents stated that its goal was to visit all communities it considered to be high-risk every 5 years. FEMA designated some communities as high-risk based on factors including the community s size, number of flood insurance policies, and number of previously damaged structures. Lower-risk communities were designated to receive a community assistance contact, training, or other contact without regard to time frame. FEMA officials with whom we spoke noted that the risk factors used to designate communities had not been updated since 2010. As a result, according to FEMA officials, in 2019 FEMA began developing a new selection tool that includes updated criteria and focuses on the risk of flooding in a community, the opportunity for a community to improve resilience, and the level of interest a community has in improving its floodplain management. An early version of the tool was released for testing in 2019. FEMA officials said that they and the states started using the new tool to select communities for the annual community visit cycle that began in July 2019. FEMA officials said that while they no longer have a goal of visiting high-risk communities once every 5 years, they do not anticipate conducting fewer visits than before. FEMA officials also noted that communities requesting to participate in CRS will be prioritized for a community assistance visit. From January 2008 through July 2019, FEMA met the 5-year goal for 13 percent of high-risk communities in Florida and 5 percent of such communities in Texas (see fig. 6). FEMA records also indicated that approximately 13 percent of high-risk communities in Florida and 31 percent in Texas did not receive a community assistance visit in that period. However, most high-risk communities in the two states were visited at some point during the overall time period. About 87 percent of high-risk communities in Florida and about 69 percent in Texas received at least one visit during that period. FEMA officials said that one reason for the limited number of visits to some high-risk communities is that FEMA resources, including state specialists, can be diverted to assist with disaster recovery efforts. 
FEMA officials also said that it is a challenge to visit all high-risk communities in states with a large number of NFIP communities, such as Texas and Florida, but they generally do not have the same challenge in states with fewer communities. FEMA officials said that in 2019 they employed about 120 specialists nationally, and that state grants allowed for another 130 state specialists to be divided among all states. Based on our analysis of FEMA s data for Florida and Texas, FEMA regional staff completed about 20 percent of the visits and state specialists and others completed the remaining 80 percent. A FEMA official told us that the agency has been considering using methods other than community visits (such as checking in with communities 12 18 months after a flood) to verify compliance with NFIP requirements. However, as community assistance visits currently remain FEMA s primary tool for ensuring compliance, the limited number of visits it has conducted in high-risk communities hinders its ability to provide such oversight. For example, it hinders FEMA s ability to prevent, identify, and mitigate deficiencies in communities implementation of NFIP requirements, which, in turn, can limit their ability to prevent or limit future flood losses. <3.3. FEMA Officials Were Unsure Whether Open Records of Community Visits Indicated Unresolved Deficiencies or Incomplete Data> According to FEMA guidance, specialists should document their community assistance visits, including information on any deficiencies and violations found during the visit, in FEMA s Community Information System within 30 days of the visit. If a deficiency or violation is found, the specialists are to close out the record of the community visit after any deficiencies and violations have been addressed. The guidance further states that during the course of the visit, specialists should collect documentation that thoroughly supports their findings. Such documentation helps monitor a community s progress toward resolving its floodplain management issues and, if needed, support any enforcement actions. Our review of FEMA records of community assistance visits in Florida and Texas from 2008 through 2019 showed that about one-third of all records remained open for a year or longer, and in some cases records stayed open for 5 years or more (see fig. 7). For example, around 29 and 23 percent of community assistance visits conducted in Florida and Texas, respectively, remained open for 3 or more years. In Florida, 4 percent remained open for 8 years or more. FEMA headquarters officials told us that they were unsure whether individual records remained open due to unresolved deficiencies and violations or because the specialist who conducted the visit failed to close the record. The officials also noted that specialists who enter information into the Community Information System about deficiencies and violations may not understand the importance of noting specific details and, as a result, may exclude details in many cases. As such, the level of detail can vary from one visit record to another depending on the individual entering the data. FEMA officials told us that turnover of state floodplain specialists and community floodplain managers could be a reason that many records remained open for an extended period. For example, they said turnover among state specialists could result in visit records remaining open because the staff responsible for closing a visit record no longer worked for the state. 
They also said that turnover among community floodplain managers could result in deficiencies remaining open for extended periods because there was no one in the community to address them. Furthermore, they said that because of the high turnover of community floodplain managers, they want to find other ways of monitoring community compliance with NFIP requirements. FEMA officials told us that another reason visit records can remain open for longer periods of time is FEMA s approach to community oversight. The officials said that they would rather work with a community to resolve any deficiencies and consider steps such as suspension and probation to be a last resort. As a result, FEMA guidance does not include a maximum number of days a deficiency can remain open before beginning enforcement action, such as probation or suspension. Standards for internal control in the federal government state that management should use quality information to achieve the entity s objectives. Without appropriate steps to ensure that it has reliable and timely information on community assistance visits, FEMA cannot readily determine if open records indicate a recordkeeping problem, a community deficiency that needs to be addressed, or something else. As a result, FEMA s ability to determine if communities have been following NFIP requirements is hindered and the agency may miss opportunities to prevent future flood losses. <4. FEMA and Communities Lack Access to Some Data That Would Be Useful in Overseeing and Implementing Post- Flood Requirements> <4.1. NFIP Communities Assess Damage to Properties Following a Flood, Sometimes with FEMA Assistance> Immediately after a flood, local floodplain management officials may assess the extent of damage to individual properties and determine whether damage is substantial enough that certain structures must be rebuilt to current NFIP requirements. As stated earlier, a substantially damaged property is one requiring repair work that costs 50 percent or more of a structure s pre-flood market value. Local officials usually assess substantial damage to a property in three stages. Initial assessment. Local officials conduct initial assessments of flood-damaged properties typically by driving through affected areas to gauge the number of buildings affected and extent of damage. Preliminary damage assessments. These assessments are performed by FEMA or state officials, along with community officials. They are intended to broadly characterize the extent of damage. Local officials charged with performing building inspections and making substantial damage determinations may find the results of these assessments useful for identifying areas where significant damage has occurred and to coordinate their substantial damage inspections. Substantial damage assessment. Local officials conduct substantial damage assessments on the most severely damaged structures. These assessments are more in depth than the initial review and generally involve identifying damage to a property, estimating the cost to fix that damage, and determining whether the damaged structure can be classified as substantially damaged. State and FEMA representatives can assist local officials in performing these assessments, as they did after Hurricanes Harvey and Irma. FEMA also recently began offering communities an updated version of its substantial damage estimator tool, a software template designed to help officials assess damage more quickly and consistently. 
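The record keeping behind these assessments can be illustrated with a simple sketch. The layout below is a hypothetical community log, not the format of FEMA's substantial damage estimator tool; it shows only how individual assessments roll up into the kind of counts a community might later report.

```python
# Hypothetical log of post-flood substantial damage assessments and a
# summary a community could compile from it. The record layout is an
# assumption for illustration, not FEMA's estimator tool format.

assessments = [
    {"address": "101 Bayou Rd", "repair_cost": 95_000,  "market_value": 150_000},
    {"address": "12 Canal St",  "repair_cost": 30_000,  "market_value": 180_000},
    {"address": "7 Shore Ln",   "repair_cost": 140_000, "market_value": 200_000},
]

def summarize(records):
    damaged = [r for r in records
               if r["repair_cost"] >= 0.5 * r["market_value"]]
    return {
        "structures_assessed": len(records),
        "substantially_damaged": len(damaged),
        "addresses": [r["address"] for r in damaged],
    }

print(summarize(assessments))
# {'structures_assessed': 3, 'substantially_damaged': 2,
#  'addresses': ['101 Bayou Rd', '7 Shore Ln']}
```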
Figure 8 illustrates the process for declaring properties to be substantially damaged after a flood. While FEMA may provide assistance in conducting damage assessments, NFIP guidance documents state that community floodplain management officials are responsible for estimating the cost to repair and the market value of the structure, determining which properties are substantially damaged, and notifying property owners of their determination. As noted earlier, NFIP requires property owners to bring any substantially damaged buildings located in a special flood hazard area into compliance with minimum NFIP requirements, if they choose to rebuild. This could mean elevating their structure to reduce the risk of future flood damage or losses. For example, several officials from NFIP communities we visited commented that properties raised to or built at higher elevations following floods prior to 2017 received less flood damage during the events of 2017. Commercial and residential property owners with NFIP flood insurance who wish to rebuild a property that has been declared substantially damaged must work with the insurance company through which they purchased their NFIP policy to process their NFIP claim, and then must obtain permits from their community for repair work. As noted previously, these policy holders may be eligible to receive additional funding through NFIP s Increased Cost of Compliance program currently up to $30,000 beyond the claim payment to help with the cost of bringing their home into compliance with current NFIP standards. <4.2. FEMA Does Not Have Ready Access to Community Data on Substantial Damage Assessments> FEMA does not have ready access to data on substantial damage assessments outside of community assistance visits, which we noted above are FEMA s primary mechanism for NFIP community oversight. For example, we requested data from FEMA on the number of substantial damage assessments performed after Hurricane Harvey in Texas and Hurricane Irma in Florida in 2017. FEMA headquarters officials said that the data were not readily available and they would have to reach out to the regional offices to provide the figure, which took several months. In addition, FEMA regional officials said in August 2019 that they were still assessing the total number of properties that were substantially damaged in Texas in 2017 and that it would take approximately 12 to 24 months to collect these data. They estimated that local NFIP officials and state contractors in Texas performed 27,000 substantial damage assessments with FEMA assistance after Hurricane Harvey. FEMA regional officials also said that following Hurricane Irma, FEMA floodplain management specialists helped train local officials for, or assisted local communities in conducting, 20,206 substantial damage assessments in Florida. According to FEMA, as of December 2019, approximately 2,232 properties had been declared substantially damaged as a result of Hurricane Irma, 86 percent of which had been brought into compliance with NFIP regulation. FEMA officials could not tell us how many substantial damage assessments were conducted in Texas after Hurricane Harvey almost 2 years after the hurricane in part because FEMA does not have ready access to community data on substantial damage assessments. 
To access data on substantial damage assessments, FEMA headquarters officials first need to ask FEMA regional officials to request data from NFIP communities, and then wait for the communities to compile and send the data to the regional offices. FEMA officials also can review data on individual substantial damage assessments during community assistance visits. FEMA officials said they have not centralized or automated their collection of information on substantial damage assessments for several reasons. FEMA officials said that, in their view, the community is responsible for gathering and maintaining this information as a condition of its NFIP participation, and they consider the communities to be owners of those data. Furthermore, they said that centralized collection of substantial damage data would involve data privacy issues and be a drain on limited resources for disaster relief. However, FEMA officials expressed concern that some communities might not be consistently maintaining documentation of the substantial damage assessments. FEMA officials told us that they have two initiatives underway to help NFIP communities and FEMA staff collect data on substantial damage assessments: Substantial damage estimator tool. Updates to the substantial damage estimator tool, discussed earlier, should help communities collect data more consistently and better document assessments, according to FEMA officials. Community officials can use the tool to evaluate flood damage to residential and nonresidential structures and enter information such as structure type and address. The tool also includes a square-footage calculator and now allows photographs or other files to be attached to the completed assessment. Staff guidance. New staff guidance, which officials said will be implemented sometime in 2020, explicitly outlines for NFIP floodplain managers and FEMA staff the information NFIP communities should collect and maintain when performing substantial damage assessments. The guidance was created to address what FEMA officials believed were shortcomings in existing guidance to communities, which may have made some NFIP communities reluctant to conduct substantial damage assessments and enforce the requirements for those deemed substantially damaged. The new guidance also establishes time frames for data collection at the NFIP community level. While these steps may improve the quality of FEMA s data on substantial damage assessments, federal internal control standards state that management should obtain relevant data from reliable sources in a timely manner based on the identified information requirements and obtain data on a timely basis so that they can be used for effective monitoring. If FEMA headquarters and regional offices do not have ready access to such data beyond the data collected during community assistance visits, they will be hindered in their ability to evaluate community compliance with NFIP requirements. FEMA also may be hindered in its ability to measure the effectiveness of substantial damage assessments, such as the extent to which substantially damaged homes are rebuilt according to NFIP requirements. It is especially important for FEMA to monitor community compliance with the process for assessing substantially damaged properties because this is the system FEMA uses to mitigate flooded properties and reduce the risk of future losses. 
If FEMA does not know how effectively this process operates, it could miss opportunities to use the process to reduce the financial exposure of NFIP. <4.3. FEMA Has Not Clarified How Communities Can Access NFIP Claims Data That Could Help Them after a Flood> NFIP communities that we visited reported varying levels of access to NFIP claims data and information. According to FEMA guidance, the agency should provide local officials with information on their community that includes the number of flood insurance policies in force, dollar amount of coverage, and the number of claims. NFIP communities also can access publicly available data on claims payouts. Officials in some communities we visited were able to access claims data while officials in other communities were not. Some officials with whom we spoke were unsure whether access was permissible. For example, an NFIP community official in Texas said that FEMA told his office they could not provide him with the data when he asked for it. Another Texas official said that typically the communities do not have access to data on flood losses and claims paid. In Florida, an official said that she was able to access some data on NFIP claims in her area as long as the community did not use the data to make substantial damage determinations. Community officials told us that it would be helpful for them to access NFIP claims data after a flood. For example, a number of community floodplain managers told us that having NFIP claims information from FEMA would benefit their flood recovery efforts because it would allow them to better target their substantial damage assessments and make that process more efficient. Officials from other NFIP communities that we visited stated that claims data could help them identify property owners who were likely to start to rebuild and ensure they obtained permits, which can be difficult to determine otherwise. Another group of community officials said that claims data for their community indicated NFIP paid out more than the community's own estimated value of the insured homes in their community, indicating there may have been more substantially damaged homes than they identified. FEMA officials acknowledged confusion among communities concerning their access to NFIP claims data and said they have been working to address it, noting that they must ensure compliance with the Privacy Act of 1974, under which the agency can share certain data only with organizations that have a programmatic need for the information. Officials also said they have been working to streamline the process through which NFIP communities can request claims data. For example, they said they have been considering the most efficient methods for sharing data with local communities that require post-disaster flood information while protecting the privacy of the data. In addition, FEMA officials said they have been drafting guidance, which they expect to issue in 2020, and a new form for community data requests. They said their intent is that once communities are approved for access to a certain type of data, they would not have to reapply for subsequent requests. FEMA officials said the guidance will provide communities with access to NFIP claims data on a property-by-property basis. Federal standards for internal control state that management should externally communicate necessary quality information to achieve the agency's objectives and address related risks.
While FEMA has taken positive steps toward reducing confusion surrounding communities' access to claims data, at the time of our review FEMA had not yet finalized new guidance. As a result, we were unable to evaluate the potential of these tools to effectively resolve communities' confusion over appropriate access to claims information. Until FEMA clarifies the process for communities to access claims data on properties within their community, FEMA may be missing an opportunity to provide communities with data that they would find helpful in the substantial damage assessment process. <5. Conclusions> FEMA relies on communities participating in NFIP to follow its floodplain management requirements, which are designed to reduce the risk of flood damage and the resulting cost to taxpayers. Community assistance visits are the agency's primary tool for ensuring that communities implement these requirements. However, in Texas and Florida FEMA often has not conducted such visits to high-risk communities and lacks complete data on the results. As a result, FEMA's ability to ensure that the communities follow NFIP requirements is limited. In addition, FEMA does not have ready access to data on substantially damaged properties and the related documentation, which hinders its ability to determine if an NFIP community has followed NFIP substantial damage assessment procedures and correctly identified all substantially damaged homes. This, in turn, limits FEMA's ability to evaluate NFIP's effectiveness. Finally, confusion exists among some NFIP communities regarding their access to NFIP claims data, potentially limiting the benefit such data could provide to those communities in identifying substantially damaged properties and ensuring all repairs of flood damage are done to NFIP community standards. <6. Recommendations for Executive Action> We are making a total of four recommendations to FEMA:
The Administrator of FEMA should assess different approaches, in addition to community assistance visits, for using existing resources to ensure communities' compliance with NFIP requirements. This should include analyzing alternatives to community assistance visits. (Recommendation 1)
The Administrator of FEMA should identify appropriate steps to ensure it has complete, up-to-date, and reliable records of community assistance visits, including information on why some visit records remain open for a significant period of time. (Recommendation 2)
The Administrator of FEMA should ensure that communities are consistently collecting data on their substantial damage assessments and that FEMA has a way to readily access those data to evaluate community compliance with NFIP requirements for rebuilding substantially damaged properties. (Recommendation 3)
The Administrator of FEMA should clarify with NFIP communities its policies on sharing data on NFIP claims and provide such information to those communities as needed. (Recommendation 4)
<7. Agency Comments> We provided a draft of this report to the Department of Homeland Security for review and comment. In its comments, the Department of Homeland Security concurred with our recommendations. FEMA also provided technical comments, which we incorporated, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Acting Secretary of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Objectives, Scope, and Methodology
This report (1) describes the requirements that communities participating in the National Flood Insurance Program (NFIP) must meet and the challenges they face in doing so, (2) examines the extent to which the Federal Emergency Management Agency (FEMA) uses community visits to ensure communities follow requirements, and (3) examines how FEMA oversees community implementation of NFIP requirements for conducting substantial damage assessments. This report focuses on NFIP communities in Florida and Texas that were affected by Hurricanes Irma and Harvey in 2017. For all three objectives, we reviewed FEMA guidance and regulations for communities participating in NFIP and in FEMA's Community Rating System. We interviewed officials from FEMA's Federal Insurance and Mitigation Administration, as well as officials in two FEMA regional offices in Georgia and Texas. We also visited 18 communities in Texas and Florida, and an additional community in Louisiana, that were affected by flooding in the 2017 hurricanes. We conducted structured interviews with officials in these communities. We selected these communities to represent a mix of large and small communities and because they participate in FEMA's Community Rating System. The officials we interviewed included floodplain managers, emergency management coordinators, watershed managers, and representatives of homebuilder associations. We also interviewed representatives of four national and state floodplain associations, and three additional experts (two academic experts and a city official) with significant knowledge of NFIP and flooding issues. For our first objective, we analyzed the responses of these officials to identify the most commonly cited challenges. For our second objective, we analyzed data on community assistance visits in Florida and Texas from FEMA's Community Information System from January 1, 2008, through July 30, 2019, and spoke with FEMA and community officials. To determine whether FEMA carries out the community assistance visits in accordance with its own guidance, we reviewed FEMA's guidance for specialists to prepare for, conduct, and follow up on the visits. We also spoke with FEMA and other officials about their experience with the visits to determine whether FEMA and state specialists generally followed FEMA's guidance. To determine the extent to which FEMA met its goal of visiting high-risk communities once every 5 years, we compared the data in the Community Information System on community visits against the lists of Tier 1 (high-risk) and Tier 2 (lower-risk) communities provided by FEMA. We also analyzed the data to determine the length of time that records from the community visits were left open, and whether the records were complete. While we noted that the data in the Community Information System were at times incomplete, we found the data reliable enough to identify the frequency of community assistance visits and issues with data entry.
For our third objective, to examine how FEMA oversees community implementation of NFIP requirements for conducting substantial damage assessments, we reviewed FEMA policies and guidance, including NFIP Floodplain Management Requirements outlined in 44 C.F.R. Parts 59 and 60. We also reviewed FEMA's Substantial Improvements Substantial Damage Desk Reference (FEMA 758-P) and FEMA flood-mitigation requirements. We examined FEMA's NFIP post-flood processes and procedures related to substantial damage assessments. We reviewed FEMA data on the number of substantial damage assessments performed in Florida and Texas after Hurricanes Irma and Harvey as well as the number of damaged properties that received increased cost of compliance funding. We discussed with community officials their experiences conducting substantial damage assessments and the challenges they faced in doing so. We also reviewed literature to identify actions taken by NFIP communities after a flood, and we reviewed FEMA documentation to determine the actions taken by FEMA and NFIP communities after a flood. We also compared FEMA's practices for collecting data for effective monitoring and communication against federal standards for internal controls. We conducted this performance audit from October 2018 to May 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Comments from the U.S. Department of Homeland Security
Appendix III: GAO Contact and Staff Acknowledgments
<8. GAO Contact Staff Acknowledgments> Alicia Puente Cackley, (202) 512-8678 or cackleya@gao.gov. In addition to the contact named above, Patrick Ward (Assistant Director), Leah DeWolf (Analyst in Charge), Audrey Blumenfeld, Tarik Carter, Anar Jessani, Angela Pun, Jessica Sandler, Jennifer Schwartz, and Jena Sinkfield made key contributions to this report. William Chatlos and Yann Panassie provided technical assistance.
Why GAO Did This Study
NFIP's effectiveness depends in part on communities implementing FEMA requirements on floodplain management and post-disaster rebuilding efforts. GAO was asked to undertake a comprehensive evaluation of federal disaster preparedness, response, and recovery efforts. This report examines (1) requirements NFIP communities must meet and challenges they face, (2) FEMA's use of community visits to ensure compliance, and (3) how FEMA oversees community implementation of NFIP requirements for conducting substantial damage assessments.
GAO analyzed FEMA data on oversight visits and substantial damage assessments from January 2008 through July 2019. GAO also interviewed floodplain managers in 19 communities in Texas, Florida, and Louisiana, and officials from FEMA and floodplain management organizations.
What GAO Found
The Federal Emergency Management Agency (FEMA) requires communities participating in the National Flood Insurance Program (NFIP) to adopt FEMA floodplain maps; limit flooding caused by new development; and require that substantially damaged structures meet elevation requirements (see figure). Community floodplain officials cited challenges, including difficulty inspecting buildings after a flood, staff turnover, and adopting new NFIP flood maps.
FEMA primarily uses community assistance visits to monitor compliance with NFIP requirements. The visits include evaluations of recent construction. Until 2019, FEMA's goal was to visit all communities considered to be high-risk every 5 years. However, FEMA did not meet this goal in Texas or Florida in 2008–2019 because of a lack of resources. Many high-risk communities received only one visit in this period, and some were not visited at all. Without regular monitoring, FEMA's ability to ensure communities comply with requirements is limited. FEMA and state specialists also are to close out records of these visits in FEMA's tracking system if they find no deficiencies or violations, or when the community has resolved any issues. However, in Florida and Texas GAO found that records for many visits remained open for several years, and FEMA staff were unsure whether this indicated unresolved deficiencies or incomplete recordkeeping. Unreliable recordkeeping hinders FEMA's ability to assess community compliance with NFIP requirements.
After a flood, one key community responsibility is to assess whether flood damage on a property was substantial (50 percent or more of the property's value). In such cases, the community must ensure the properties are rebuilt to current NFIP standards. However, FEMA generally does not collect or analyze the results of these assessments, limiting its ability to ensure the process operates as intended. Furthermore, FEMA has not clarified how communities can access NFIP claims data. Such data would help communities target substantial damage assessments after a flood.
What GAO Recommends
GAO is making four recommendations to FEMA: The agency should (1) assess different approaches for ensuring compliance with NFIP requirements, (2) ensure data on community visits are up-to-date and complete, (3) ensure communities collect data on substantial damage assessments, and (4) clarify policies on data sharing between FEMA and NFIP communities. FEMA concurred with the recommendations.
<1. Background> According to FCC, caller ID services became commonplace due to technology developed in the 1980s, and caller ID information transmitted with the call could generally be trusted by the call recipient. However, FCC found that as voice service providers migrated to Internet Protocol (IP) networks, these technologies lessened the overall accuracy and reliability of the information presented to the call recipient. Caller ID allows the recipient of an incoming call to determine the telephone number of the caller and, in some cases, the name. This information helps the recipient make informed decisions about which calls to accept or ignore. While the number and name displayed on the caller ID may be associated with the caller, a caller can also deliberately falsify or spoof the information transmitted to the caller ID display to disguise the source of the call. Under the current telephone system, this information, true or false, is conveyed to the call recipient unless the caller requests that such information not be conveyed. Caller ID spoofing is widespread. Many instances of spoofing are legal. For example, spoofing is legally used by professionals such as doctors who want to use their cell phones to return calls to patients but choose to transmit their office number instead. Spoofing also often accompanies robocalls (automated telephone calls that deliver a recorded message). Certain types of robocalls are illegal, such as robocalls for sales pitches unless companies have consumers' express written permission to call. In addition, telemarketers may not call home or mobile numbers that consumers have registered in the National Do Not Call Registry, which was established through legislation and is maintained by FTC, and they must transmit their telephone number and, if possible, their name, to the call recipient's caller ID. According to FCC, advancements in technology have made it inexpensive and easy to make robocalls. As telecommunications systems have transitioned from traditional wireline services to IP networks, the cost of making phone calls has dramatically decreased. IP-based voice services use existing internet connections to send phone calls, which may be cheaper than long distance phone charges associated with traditional phone service. Autodialers can be programmed to dial a long list of phone numbers in order to deliver millions of calls in a short period of time. These dialing systems, coupled with IP-based voice services, such as Voice over Internet Protocol (VoIP), enable telemarketers and scammers to make high volumes of calls from anywhere in the world. IP-based voice services have also made it inexpensive and easy to spoof caller IDs. According to an industry stakeholder, historically, the router systems used to spoof calls were physical devices located on site, which could be prohibitively expensive. However, software that is available for free can now be downloaded to enable a computer to function as a router. According to stakeholders, telemarketers and scammers can, with minimal cost, configure a router to display either a single spoofed number or a constantly changing set of numbers, making it appear as though calls originated in the United States even if they did not. (See fig. 1.) FCC, FTC, and DOJ each enforce different rules or laws related to caller ID spoofing.
FCC enforces rules prohibiting anyone from causing the transmission of misleading or inaccurate caller ID information with the intent to defraud, cause harm, or wrongfully obtain anything of value. FCC also enforces rules requiring telemarketers to transmit caller ID information.
FTC protects consumers against unfair or deceptive business acts or practices. FTC, similar to FCC, enforces rules requiring telemarketers to transmit their telephone number and, when available, the name of the telemarketer to a consumer's caller ID service.
DOJ enforces federal fraud statutes under which fines or imprisonment can be imposed against anyone who uses interstate telecommunications as part of a fraud scheme. DOJ can also take civil enforcement actions on FTC's behalf.
FCC and FTC each manage consumer complaint databases where consumers can file complaints about unwanted calls, robocalls, and violations of the Do Not Call Registry. In addition to government efforts, the telecommunications industry, including voice service providers and third-party companies, has taken steps to counteract illegal spoofing. For example, some of these companies have developed or deployed applications (i.e., software programs, often referred to as apps) to defend against robocalls and other unwanted calls. This includes call blocking devices for landline telephones and various mobile applications that can label and block robocalls and other unwanted calls based on call patterns, consumer complaints, or other means. While some carriers provide these services free, others may charge a fee. In addition, some carriers also work with analytics providers to analyze traffic on their networks. Beginning in 2017, FCC authorized voice service providers to block certain categories of unwanted calls before they reach consumers' phones. Recently, FCC clarified that service providers can also, as a default, block calls identified as likely unwanted based on the provider's reasonable analysis of call data unless consumers opt out of this service. <2. Caller ID Spoofing Is Used in a Variety of Financial Fraud and Other Schemes, and Consumer Complaints Suggest a Substantial Increase in Its Use> <2.1. Caller ID Spoofing Schemes Seek to Obtain Money or Valuable Financial and Personal Information, Generate Telemarketing Leads, or Harass> Scammers use caller ID spoofing to facilitate a variety of financial fraud and other schemes, often in combination with robocalling. Based on our analysis of FCC, FTC, and DOJ enforcement cases and alerts from federal and state government agencies, as well as interviews with stakeholders, we identified three types of caller ID spoofing schemes.
To Obtain Money or Information: Scammers have used caller ID spoofing to trick consumers into providing their financial or personal information or sending money, such as via a debit or gift card. These scammers may spoof a name and phone number that looks familiar and trustworthy, such as that of a government agency, a company you do business with, or a local number. Scams include telling call recipients they may be arrested or they owe money. For example, spoofed robocalls have been used as part of a wide-reaching scam in which callers spoofed IRS phone numbers and impersonated IRS staff to trick people into sending the scammers money for supposed unpaid taxes. IRS reported that from October 2013 through March 2019, the agency was contacted more than 2.4 million times by taxpayers who reported such calls, and more than 15,453 taxpayers reported losing about $75.1 million.
(See fig. 2.)
To Generate Telemarketing Leads: Unscrupulous telemarketers have used spoofing as part of an attempt to sell goods or services. In this scheme, consumers may receive a pre-recorded robocall with a sales pitch and be instructed to press 1 to indicate interest, at which point the call recipient is transferred to a live operator. In one such scheme, more than 96 million spoofed robocalls were made over a 3-month period. These calls included pre-recorded messages falsely claiming to be from Hilton and other well-known travel companies; once consumers were transferred, live operators attempted to sell vacations not affiliated with the brands presented during the prerecorded message.
To Harass: People have used spoofing to harass others. In some of these cases, people have used spoofing to cause another person's caller ID to display a familiar or trusted phone number. In one case, an individual apparently placed 31 spoofed calls as part of a personal campaign to harass and stalk another person. These spoofed numbers appeared to be from the victim's child's school, among others. Spoofing is also one of several techniques used to place false calls to emergency response centers to elicit a police response to an address where no emergency exists. Callers have used spoofing to make it appear as if their call originated at or near the reported address. This practice, known as swatting, has resulted in death. For example, in one swatting case, a man was shot and killed by police who believed he was holding others at the address hostage. <2.2. Available Data Suggest That Caller ID Spoofing Is a Growing Issue> FCC and FTC consumer complaint data both show dramatic increases in recent years in the number of unwanted call complaints that specifically mention the term spoofing. According to our analysis of FCC and FTC complaint data, from 2015 through 2018, complaints to FCC that specifically referred to spoofing more than doubled and those received by FTC increased by more than four times. (See fig. 3). Several industry stakeholders we spoke with noted a growing trend in one particular type of spoofing, neighbor spoofing. Neighbor spoofing occurs when the caller ID is manipulated to display a phone number matching the area code and prefix (the first six digits) of the consumer's phone number. Consumers may be more inclined to answer these calls because they appear to be local, perhaps from someone they know. Among FCC's complaints that included both the caller's and the call recipient's phone numbers, the percentage that was indicative of 6-digit neighbor spoofing increased from 10 percent in 2015 to 15 percent in 2018; for similar FTC complaints, the percentage increased from 2 percent in 2015 to 16 percent in 2018; and a call blocking provider told us that its percentage of neighbor-spoofed robocalls increased from 2 percent in January 2016 to 23 percent in December 2018. One analytics provider told us there has been a shift recently from spoofing the first six digits to spoofing the first four and five, which the provider believed to be a reflection of scammers adjusting their methods as more people become aware of the original six-digit form of neighbor spoofing. From 2015 to 2018, FCC and FTC data show substantial increases in complaints indicative of four- and five-digit neighbor spoofing, with FCC complaints nearly doubling and FTC complaints increasing more than 10 times during this time period.
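The prefix comparison underlying these figures can be illustrated with a short sketch. The code below is a hypothetical example of how complaint records containing both the caller's and the call recipient's numbers might be flagged as indicative of neighbor spoofing; it is not the agencies' actual analysis code, and the record field names are illustrative.

```python
# Illustrative sketch: flag complaints where the displayed caller ID shares the
# recipient's area code and exchange (the first six digits) -- the pattern
# described above as "neighbor spoofing."

import re

def digits_only(phone: str) -> str:
    """Strip formatting and any leading country code, keeping the last 10 digits."""
    d = re.sub(r"\D", "", phone)
    return d[-10:] if len(d) >= 10 else d

def neighbor_spoof_match(caller: str, recipient: str, prefix_len: int = 6) -> bool:
    """True if the caller ID matches the recipient's first prefix_len digits
    (6 = area code + exchange) but is not the recipient's own number."""
    c, r = digits_only(caller), digits_only(recipient)
    return len(c) == len(r) == 10 and c != r and c[:prefix_len] == r[:prefix_len]

complaints = [
    {"caller": "(202) 555-0147", "recipient": "(202) 555-0199"},  # 6-digit match
    {"caller": "(202) 556-0123", "recipient": "(202) 555-0199"},  # only 4 digits match
]
flagged = [c for c in complaints if neighbor_spoof_match(c["caller"], c["recipient"])]
print(len(flagged))  # 1
```

Lowering the prefix_len argument to 4 or 5 would capture the shorter-prefix variant that one analytics provider described.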
FCC and prior GAO work have described several limitations with using complaint data as a means of measuring the extent of unwanted calls. For example, complaints might increase following consumer outreach regarding how to file a complaint or after news media coverage of a particular scam. In addition, not all consumers who experience problems file complaints, and not all complaints are necessarily legitimate or categorized appropriately. Further, a consumer could submit a complaint more than once, or to more than one agency, potentially resulting in duplicate submissions. Finally, while some consumers may use the term spoof when describing the complaint, others may not, either because they do not know they have been spoofed or are not familiar with the term. According to our analysis of FCC data, in 2018, 66 percent of all complaints that were indicative of neighbor spoofing did not include the term spoof in the complaint description. Nonetheless, FCC, FTC, and DOJ officials told us they use this complaint data to identify specific trends in types of scams that may help the agencies enforcement and public education efforts, which we discuss later in this report. Although we could not find industry data that estimated the total number of spoofed calls, available industry data suggest that the volume of unwanted calls and robocalls (of which illegally spoofed calls are a subset) has increased over the past several years. Using call patterns on their own networks or other means, voice service providers, call blocking applications and analytics providers track data on unwanted calls and robocalls. According to one company, these companies may have limited ability to detect or isolate spoofed calls, in part, because scammers may frequently change the numbers they use. In addition, stakeholders told us, because each of these companies analyzes their specific user base and may use different methods to identify and label robocalls and other unwanted calls, the number of unwanted calls each company estimated may be substantially different. For example, while one analytics company estimated 26.3 billion robocalls nationwide in 2018, another company estimated the number at nearly 48 billion. Similarly, one company estimated a 46 percent increase in robocalls from 2017 to 2018, while another estimated a 57 percent increase for the same time period. Despite these differences, all analytics and call blocking companies we interviewed reported that their estimates of the number of unwanted calls and robocalls have increased in recent years. Because there is no comprehensive data source on unwanted calls, robocalls, or spoofed calls, it is not possible to reliably estimate national trends. FCC has taken steps to seek input from industry and other stakeholders on how to better measure the extent of the unwanted call and spoofing problem. In a November 2017 Further Notice of Proposed Rulemaking, FCC sought comment on, among other things, what information should be collected to evaluate the effectiveness of efforts to combat these calls and whether FCC should adopt a reporting obligation for providers. FCC received numerous comments from voice service providers, their associations, and other stakeholders in response to this notice. One commenter expressed concern that a reporting obligation would be burdensome to providers or of little benefit to FCC, and other commenters stated the agency should instead continue to monitor trends in consumer complaints. 
More recently, in a June 2019 Declaratory Ruling, FCC adopted a recommendation from 2017 to prepare two reports, one in 2020 and a second in 2021, to measure the effectiveness of efforts to combat illegal robocalls. The ruling explicitly delegates authority to FCC staff to collect any and all relevant information and data from voice service providers necessary to complete these reports and states that the report should include authoritative data about the number of illegal robocalls. <2.3. Agencies Consider Risk of Harm to Public and Generally Follow Key Collaboration Practices in Their Enforcement Efforts, but Face Significant Challenges>
Agencies Reported Taking Risk-Based Approach to Prioritizing Spoofing-Related Investigations and Enforcement Actions, but Collecting Evidence Can Be Difficult
FCC, FTC, and DOJ officials all said that their agencies must prioritize which illegal spoofing activity to investigate and take enforcement action against because they do not have sufficient resources to pursue all such activities. FCC and FTC officials stated that while they review complaint data and other information, it would not be practical to open investigations related to every complaint. According to officials at all three agencies, given their limited resources, the agencies prioritize investigations based on the level of harm being perpetrated and the likelihood of being able to effectively bring an enforcement case. Such prioritization is consistent with standards for internal control in the federal government. Those standards call for agencies to estimate the significance of risks to achieving agency objectives (in this case, objectives related to protecting the public from harm) and to use those estimates as a basis for responding to the risks. More specifically:
In a 2015 letter to several members of Congress, the Chairman of the FCC stated that the agency is more likely to pursue enforcement action when a problem appears to be pervasive, represents a trend, involves an agency priority, affects many consumers, reflects particularly egregious abuse, or presents a security or safety concern. Focusing specifically on investigations and enforcement action related to caller ID spoofing, FCC officials told us that the agency's three highest priorities are events that (1) threaten public safety; (2) involve very large numbers of spoofed calls; or (3) involve malicious scams or threats.
FTC's strategic plan for fiscal years 2018 to 2022 calls for the agency to target its enforcement efforts on those areas that cause the greatest amount of consumer harm. In line with this objective, FTC officials told us that the agency decides which consumer complaints to investigate based on the level of harm being perpetrated, as well as the likelihood of being able to effectively bring an enforcement case.
DOJ's Justice Manual states that serious violations of federal law must be prosecuted. DOJ officials told us that for fraud schemes that employ caller ID spoofing, the agency is more likely to charge a violation of one of the fraud statutes, such as mail fraud, wire fraud, computer fraud, or conspiracy, as well as the money laundering and identity theft statutes. Specifically with regard to wire and mail fraud cases, the Justice Manual states that serious consideration should be given to the prosecution of any scheme which in its nature is directed to defrauding a class of persons or the general public with a substantial pattern of conduct.
FCC and FTC officials stated that there are significant challenges related to investigating spoofing cases that can affect which investigations they choose to pursue and limit the number of enforcement cases they are able to bring. For example, FTC officials stated that the use of VoIP technology enables fraudsters to easily change both their physical locations and the numbers they spoof, making it harder for FTC and other law enforcement agencies to track them down. An industry stakeholder said that the use of VoIP technology makes it difficult to determine even whether the call originated domestically or from overseas. Moreover, FCC officials stated that when spoofed calls originate wholly from a foreign jurisdiction, a lack of foreign cooperation can make it exceptionally difficult to follow a trail back to either the service provider that originated the call or the person or company making the calls. The officials explained that foreign cooperation may be lacking when the calls come from countries with which the United States does not have strong diplomatic relationships. The officials stated that because of this challenge, they are less likely to bring an enforcement case when calls originate wholly from a foreign jurisdiction, due to the low likelihood of successfully resolving such cases and the heightened use of limited staff resources required by such cases. Regardless of these challenges, FCC and FTC officials stated that their agencies have taken steps to improve their ability to investigate cases based overseas. For example, both agencies cited their outreach to the Indian government and the U.S.-India Business Council as well as their participation in the Unsolicited Communications Enforcement Network, a global network of law enforcement authorities and regulatory agencies that works to combat unsolicited communications. <2.4. FCC, FTC, and DOJ Identified 62 Spoofing or Caller-ID-Blocking-Related Enforcement Cases Brought since 2006> FCC, FTC, and DOJ officials identified 62 enforcement cases that they said involved spoofing or blocking of caller ID information, though DOJ officials stated that their list of enforcement cases was not comprehensive because DOJ s enforcement database does not include an indicator for whether spoofing was employed as part of a fraud scheme. (For a description of these 62 cases, see app. II.) As noted below, these 62 cases are not representative of all of the cases the agencies have brought related to illegal robocalling. FCC officials provided us information on six cases each of which the officials said involved spoofing or a caller s blocking of their caller ID information that the agency brought from April 2011 to September 2018. For example, one case involved a company that used spoofed robocalls to target elderly and low-income individuals to generate sales of health insurance coverage. The company s high numbers of robocalls also disrupted an emergency medical paging service. FCC issued fines in five of these cases, and one pending case includes a proposed fine. FCC officials told us that since January 2004, the agency has initiated approximately 20 additional enforcement cases and has issued approximately 1,000 warnings, all for robocalling or Do-Not-Call violations under the Telephone Consumer Protection Act of 1991. FTC officials provided us information on 31 cases each of which the officials said involved spoofing that FTC brought or that DOJ brought on FTC s behalf from April 2006 to June 2019. 
Examples of cases include several involving numerous calls to numbers on the National Do Not Call Registry and an incident in which a company impersonated government officials and help centers to make a sales pitch with false and misleading claims about an English-language learning course to Spanish-speaking U.S. consumers. Monetary judgments were issued in all but one of these cases. FTC officials told us that as of November 2019 the agency had brought 147 enforcement cases against Do Not Call and robocall violators. FTC officials also stated that FTC obtains injunctive relief in their Do Not Call, robocall, and spoofing cases, including court orders prohibiting the defendants from engaging in similar conduct, and in some cases, banning defendants from any telemarketing activity. Further, they stated the injunctive relief also includes reporting and compliance requirements to help FTC monitor defendants. FTC officials told us that the agency has obtained injunctive relief in all of its completed spoofing cases and that these injunctions provide strong deterrence and help stop illegal spoofing. DOJ officials provided us information on 25 cases, each of which the officials said involved spoofing, that the agency brought from May 2010 to August 2018. Several of these cases involved companies or individuals that used spoofing as part of a scheme to swindle money from people. For example, in one case, defendants used spoofing as part of a scheme to defraud and extort money from victims who were falsely told they had failed to accept and pay for products they had never ordered. Twenty cases had judgments that included prison time; 18 cases had monetary judgments. FCC and FTC have collected far less than has been assessed in fines or monetary judgments, but officials at both agencies stated that the amounts they have collected still serve both punitive and deterrent purposes. Specifically, FCC officials stated that thus far, FCC has collected $25,970 of the approximately $205 million in fines it assessed. This mostly represents full payment of a $25,000 fine FCC issued in January 2017, but FCC has yet to collect any portion of the more recent fines it has issued: a fine of $120 million it issued in May 2018 and a fine of approximately $82 million it issued in September 2018. FCC has referred both of these cases to DOJ for collection action. FCC officials noted that these large fines may not represent the amount that the defendants are able to pay, and that even payment of a fairly small fraction of a large fine could be enough to put a scammer out of business and serve as a substantial deterrent. FTC officials said that FTC has obtained a total of about $363 million in monetary judgments in its 31 spoofing cases. The officials said that many of these judgments were partially suspended based on the defendants' ability to pay, determined by a defendant's net worth and assets. Further, the officials said if the defendant misrepresents his or her financial position, the entire judgment can become due under a clause that is part of the judgment. The officials said that as of August 14, 2019, FTC had collected about $31 million in its spoofing cases, and that this amount represents all or substantially all of the unsuspended judgments in those cases. Officials with DOJ's Consumer Protection Branch said that the branch views monetary judgments as one piece of the deterrence equation for caller-ID-spoofing offenses.
The officials stated that the low amounts collected suggest that other preventative measures, such as injunctive relief and imprisonment, must be employed to deter continued unlawful activity. <2.5. FCC, FTC, and Others Have Proposed Various Legal Changes to Strengthen Enforcement against Illegal Spoofing and Robocalling> FCC and FTC both favor some changes to law to enhance the effectiveness of their enforcement efforts. Specifically: In May 2019, FTC officials testified that the agency s enforcement efforts are hindered by a statutory provision that prohibits the agency from taking action against telecommunications carriers, to the extent they are engaged in common carriage activities. FTC further testified that it would like this provision removed so that the agency could take enforcement action against carriers engaged in illegal telemarketing activities. In 2018, an FCC official publicly stated that a longer statute of limitations for enforcement of the Telephone Consumer Protection Act of 1991 would improve the agency s enforcement efforts against knowing and willful violators of the act. Currently, that act has a 1-year statute of limitations, while the Truth in Caller ID Act of 2009 has a 2-year statute of limitations. FCC officials told us that harmonizing the two acts statutes of limitations to 2 years would help FCC s enforcement efforts since spoofing often occurs with robocalling and the agency often uses the two statutes in tandem. A February 2019 FCC staff report on robocalls notes that FCC s enforcement efforts can be hindered by the requirement that in many instances FCC must warn a party of apparent robocalling violations and can only proceed with a monetary penalty if the party subsequently commits the same type of violation, a requirement in the Communications Act that applies to the Telephone Consumer Protection Act of 1991. According to the report, this requirement enables a warned offender to incorporate under a new name to evade further detection and begin illegal activity anew. In contrast, the report notes, the Truth in Caller ID Act of 2009 allows FCC to directly issue a proposed monetary penalty without first issuing a warning. Similar to the statutes of limitations just discussed, FCC officials told us that since spoofing often occurs with robocalling and the agency often uses the two statutes in tandem, their enforcement efforts would benefit from the elimination of this statutory requirement. In 2019, bills were introduced in Congress that, if passed, would implement the changes in law that FCC and FTC have recommended and could potentially help address other challenges faced by FCC and FTC. For example, in July 2019, a bill was introduced in the Senate that would remove the provision prohibiting FTC from taking action against common carriers. Also in 2019, two different bills were introduced, one in the House and one in the Senate, that would, among other things, address issues with harmonization of the FCC statute of limitations and eliminate the FCC pre-penalty warning requirement with respect to illegal robocalling. In addition, one of these bills, the Telephone Robocall Abuse Criminal Enforcement and Deterrence Act (TRACED Act), would require DOJ, in consultation with FCC, to assemble an interagency working group to study and report to Congress on how to enhance enforcement against robocalls by examining issues like the types of laws, policies, or constraints that could be inhibiting enforcement of the Truth in Caller ID Act of 2009. 
The interagency working group would also be tasked with identifying existing and potential international policies and programs that could encourage and improve coordination between countries. We have reported in past work that collaborative mechanisms such as interagency working groups can help the federal government achieve many of the meaningful results it seeks to achieve, and that such mechanisms all benefit from certain key features, which raise issues to consider when implementing these mechanisms. As of November 2019, no federal legislation had been enacted on these issues. <2.6. Agencies Efforts to Collaborate on Enforcement Efforts Generally Align with Key Practices> We found that FCC s and FTC s efforts to collaborate on spoofing investigations and enforcement actions align with seven key practices we have previously identified to enhance and sustain interagency collaboration. FCC and FTC officials explained that their close collaboration helps ensure that they share relevant information and avoid duplicating efforts. In addition, we found that DOJ s collaboration with FCC and FTC aligns with five of the seven key practices. Although we did not find evidence that DOJ had taken steps in line with the other two key practices, officials at all three agencies stated that DOJ s collaborative efforts were appropriate given its broader jurisdiction and wider focus. More specifically, we found that all three agencies have incorporated five key practices. Our prior work has found that one way agencies can incorporate three of these practices (1) defining and articulating a common outcome, (2) establishing mutually reinforcing or joint strategies, and (3) agreeing on roles and responsibilities is through a memorandum of understanding. In 2003, FCC and FTC agreed to a memorandum of understanding that calls for the agencies to cooperate and coordinate to implement consistent, comprehensive, efficient, and non-redundant enforcement of federal telemarketing statutes and rules. The memorandum also calls for the agencies to meet quarterly to discuss matters of mutual interest, share consumer complaints, and engage in joint enforcement actions when necessary. Consistent with the memorandum, FTC officials told us that FTC and FCC hold quarterly meetings to discuss how they are targeting robocalls and spoofing investigations and enforcement cases to avoid duplication. FTC and FCC officials stated that in addition, their collaboration with DOJ is enhanced through the participation of all three agencies in a monthly conference call hosted by the National Association of Attorneys General to coordinate efforts to combat illegal robocalls across the government. Although DOJ officials told us that DOJ does not have a memorandum of understanding with FCC or FTC regarding spoofing or robocall-related enforcement, officials we interviewed at all three agencies identified collaborative efforts that DOJ engages in that are consistent with the three key practices cited above. FCC and DOJ officials stated they are developing procedures to share information on a particular enforcement case, and that these procedures could be used on other cases as needed in the future. In addition, officials from all three agencies stated that DOJ s participation in the monthly conference calls and additional informal outreach as needed was sufficient to ensure effective collaboration. 
With regard to the fourth and fifth key practices (4) identifying and addressing needs by leveraging resources, and (5) establishing compatible policies, procedures, and other means to operate across agency boundaries, FCC and FTC officials described regularly sharing information from their complaint databases, which is in line with these practices. FTC officials stated they regularly review FCC s complaint information to help their enforcement efforts. Moreover, FTC has established policies and procedures whereby DOJ and FCC and other law enforcement entities have access to FTC s complaint database, and FCC and DOJ officials stated that they frequently analyze FTC s complaint database to inform their investigative decisions. Furthermore, DOJ officials stated that DOJ recently contributed funds to FTC to enhance capabilities to analyze the database. FCC and FTC have also leveraged resources by co-hosting a public event in 2018 on reducing robocalls and spoofing that included discussions of recent policy changes and enforcement actions to stop illegal robocalls. We found that FCC and FTC follow two additional key practices for collaborating on spoofing-related investigations and enforcement actions that DOJ does not: (1) developing mechanisms to monitor, evaluate, and report the results of collaborative efforts, and (2) reinforcing agency accountability for collaborative efforts through agency plans and reports. For example, FCC and FTC collaborated on a robocall report published by FCC in 2019 that discussed both agencies enforcement actions related to robocalls and spoofing, and each discussed their collaborative efforts related to robocalls in key agency documents related to accountability and performance. DOJ officials stated that they would be unlikely to have such materials specifically related to spoofing given the agency s focus on fraud itself rather than spoofing or robocalling, which it views as a means to fraud. DOJ officials stated that DOJ s general commitment to interagency collaboration is emphasized in its fiscal year 2020 budget submission to Congress and many press releases related to its enforcement cases. We reviewed DOJ s budget submission and several DOJ press releases and found that they mention collaboration between DOJ and other agencies. <3. FCC and FTC Have Robust Consumer Education Efforts That Follow Key Practices for Consumer Education and Interagency Collaboration> FCC and FTC use a number of methods to educate consumers on ways to protect themselves against spoofed and other unwanted calls. According to FCC documentation, the agency has made combatting illegal robocalls and caller ID spoofing its top consumer protection priority and uses consumer education as a means to address this priority. Similarly, according to FTC s chairman, consumer education is a critical element of FTC s efforts to fulfill its consumer protection mission. The methods that FCC and FTC use both independently and collaboratively to educate consumers on ways to combat caller ID spoofing and unwanted calls include the following. Posting online consumer alerts, videos, blog posts, and other informative materials: Both FTC and FCC post information and warnings about caller ID spoofing scams on their websites. FTC, for example, developed Pass It On, a print- and web-based campaign to educate seniors about various types of scams that target seniors, including spoofing. 
FCC launched an animated video initiative on how to avoid spoofing scams and also posted a consumer alert about neighbor spoofing scams. The alert explains that scammers use such spoofing to increase the likelihood that consumers pick up the phone and provides tips such as to not answer calls from unknown numbers and to not provide any personal information to such callers. Additionally, FCC and FTC post other information, including tip cards and graphics such as those illustrated in figure 4.
Visiting vulnerable communities: FCC has conducted speaking tours, such as tours through rural Appalachia and the Pacific Northwest in 2018, to educate communities about spoofing and to build partnerships to help improve the effectiveness of future outreach efforts. Similarly, FTC has hosted briefings in underserved communities with law enforcement, consumers, and community advocates to place more attention on consumer protection issues such as spoofing and other types of fraud.
We found that FCC's and FTC's consumer education efforts related to spoofing and other unwanted calls aligned with nine key practices for consumer education that we identified in our prior work (see table 1). For example, FCC and FTC have developed consistent and clear consumer education messages related to spoofing and unwanted calls. Specifically, consumers should not answer calls from unknown numbers, should not press any numbers if directed to do so, and should hang up immediately once it is clear that the caller is unknown. In addition, FCC and FTC officials have worked with credible messengers to help disseminate consumer education messages, including to potentially vulnerable populations. For example, since 2017, FCC has worked with the National Asian American Coalition to train grassroots volunteers to engage local community members and distribute educational tip cards printed by FCC in languages such as Mandarin Chinese, Korean, Vietnamese, and Tagalog. In addition, FTC has collaborated with AARP to develop three videos for Asian American and Pacific Islander communities on robocall, IRS, and Medicare scams. In addition, we found that, similar to their enforcement efforts, FTC's and FCC's efforts to collaborate on public education in this area are consistent with the seven key collaboration practices we discussed earlier in this report. For example, FCC and FTC agreed to a second memorandum of understanding in 2015 that states that the agencies will collaborate with each other on consumer and industry outreach and education efforts, as appropriate. FCC and FTC also collaborate with other entities, including federal, local, and private entities, to educate consumers on ways to combat spoofing. For example, FCC officials told us that beginning in October 2018, they collaborated with Department of Veterans Affairs officials to send out three joint emails (from November 2018 through March 2019) to veterans and veterans organizations on ways to protect themselves against illegal robocalls, including spoofed calls. These officials also noted that each email reached approximately 5.5 million targeted recipients.
Industry-Led Technical Effort to Reduce Spoofing Is Moving Forward, with FCC's Support in Line with Federal Guidance Some Providers Are Deploying a Caller ID Verification System with a December 2019 Implementation Target> According to officials with industry groups, voice service providers, and FCC, the voice service provider industry has taken key steps towards successfully putting in place a caller ID verification system throughout much of the IP-based U.S. telephone network by the end of 2019. As discussed previously, the system is commonly referred to as STIR/SHAKEN or SHAKEN/STIR. According to the Alliance for Telecommunications Industry Solutions (ATIS), which spearheaded this industry-led effort along with the Session Initiation Protocol (SIP) Forum, the system is intended to enable voice service providers to verify that a caller has a right to use the caller ID transmitted with the call. Under the system, the voice service provider that first initiates the call onto the network (originating service provider) generates a digital signature that attaches to the phone call indicating that the caller has this right. This occurs only when the originating provider knows this information and is considered the highest level of verification, referred to by the industry as "attestation." The signature is transmitted along with the call as it is routed from one service provider to another. The terminating service provider, which passes the call onto the call recipient, can verify that the signature was not tampered with before sending the call to the call recipient (see fig. 5). According to an FCC Notice of Proposed Rulemaking, as of June 2019, several major providers had deployed or were in the process of deploying the system on their own networks, and a few had started exchanging signed calls with a second provider. In addition, ATIS has announced a number of key steps taken to fully implement the system's framework. For example, in September 2018, ATIS launched the system's governance authority, whose board consists of representatives from a variety of U.S. voice service providers and relevant industry associations, and which, according to ATIS, is overseeing the system to ensure that it is effective and secure. In August 2019, ATIS issued a press release stating that the governance authority board had determined the requirements service providers must meet in order to get certificates to digitally sign calls and had contracted a private firm tasked with ensuring that only authorized service providers get these certificates. According to an industry official who worked on this effort, once most U.S. carriers deploy the system and are sharing information across their networks, the technical experts who developed the standards will be able to see how it works and improve and enhance the system through additional technical developments. Because it is not always possible for the originating service provider to determine whether the caller has a right to use the phone number that will be displayed, in addition to the top level of verification, the system was designed with a middle level and a lowest level of verification. The originating service provider digitally signs the call with the middle level of attestation when the provider has an established relationship with the caller but does not know whether the caller has the right to use the phone number it will display.
According to ATIS officials, the originating service provider may use this level of attestation, for example, when a call comes from a corporate call center, which displays all outbound calls as originating from a central number or set of numbers. The originating service provider signs the call with the lowest level of attestation when it is responsible for originating the call onto its network but it does not have a relationship with the caller (such as when the call comes in from another country). When using either the middle or lowest level of attestation, the provider cannot determine if the call is spoofed. However, according to ATIS officials, the information that provided the basis for the attestation level is still likely to be helpful. For example, this information may better position the terminating service provider or call blocking and analytics apps to determine, in combination with other data the terminating service provider or such apps may have analyzed, whether to block or warn the consumer about the call. According to officials from several carrier associations or voice service providers, the new system should substantially improve the industry s ability to combat spoofing and block unwanted calls by providing carriers with immediate verification information. These stakeholders, as well as FCC officials, also stated that enabling voice service providers to instantly identify the provider that initiated the call onto the network through the digital signature attached to the call could help facilitate federal investigations by accomplishing in an instant what can now take significant time and effort as the call must be traced back from provider to provider. One stakeholder who played a key role in the development of the system stated that as some U.S. service providers deploy this system and more calls are able to be verified, it is likely to incentivize other U.S. providers to deploy verification systems so that their calls will not stand out as unverified. This stakeholder said that the hope is that other countries, including those with many legitimate call centers that send calls to the United States, such as India, will also implement verification systems that eventually can be integrated with the U.S. system. And as more calls are able to be verified, the stakeholder explained that the system will become more valuable and useful. An ATIS representative and other stakeholders identified other examples of ongoing technical challenges and open issues: Information provided to consumers: The industry has not reached agreement about what, if any, information should be presented to call recipients to inform them that the call has or has not been verified. Stakeholders we spoke with noted that it is important to educate consumers on the limitations of any such information. For example, although a call may be verified, the provider cannot guarantee that the caller is not trying to defraud the call recipient just that the caller is not using a spoofed phone number to do so. Further, if a provider is unable to verify the caller ID information, it does not necessarily mean the call is fraudulent or the caller has malicious intent. For these reasons, several industry stakeholders we spoke with emphasized that the information provided by this system can be most useful when combined with other methods service providers use to analyze call traffic to identify unwanted or illegal calls. 
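Before turning to the remaining open issues stakeholders identified, the sketch below makes the signing and verification flow described above concrete. It is illustrative only: the actual framework relies on certificate-based digital signatures and standardized token formats governed by the industry's governance authority, whereas this simplified stand-in uses a shared-key signature from the Python standard library, and the field and attestation names are placeholders of our own rather than the framework's actual wire format.

```python
# Illustrative sketch of the caller ID verification flow described above.
# Real deployments use certificate-based signatures and standardized tokens;
# this simplification uses an HMAC shared secret and made-up field names.
import base64
import hashlib
import hmac
import json

# Attestation levels as described in this section (names are placeholders).
FULL = "full"        # highest level: provider knows the caller and the number
PARTIAL = "partial"  # middle level: provider knows the caller, not the number
GATEWAY = "gateway"  # lowest level: provider only originated the call onto its network

def sign_call(orig_number: str, dest_number: str, attestation: str, key: bytes) -> str:
    """Originating provider: attach a signed claim about the call."""
    claim = {"orig": orig_number, "dest": dest_number, "attest": attestation}
    payload = base64.urlsafe_b64encode(json.dumps(claim, sort_keys=True).encode())
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + tag

def verify_call(token: str, key: bytes):
    """Terminating provider: check that the claim was not tampered with in transit."""
    payload, _, tag = token.rpartition(".")
    expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return None  # signature check failed; treat the caller ID as unverified
    return json.loads(base64.urlsafe_b64decode(payload))

if __name__ == "__main__":
    shared_key = b"demo-key-not-a-real-certificate"
    token = sign_call("+12025550123", "+12025550199", FULL, shared_key)
    print(verify_call(token, shared_key))
    # {'attest': 'full', 'dest': '+12025550199', 'orig': '+12025550123'}
```

In practice, as the stakeholders quoted above note, the terminating provider would combine the verification result and attestation level with other call analytics when deciding whether to block or label a call. Other open issues identified by stakeholders are described next.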
IP-only system: Several stakeholders also emphasized that the system only works for calls carried entirely over IP networks, not those using traditional wireline networks. One industry group representing smaller providers that may use traditional wireline networks expressed concerns that its members may need more time to deploy the caller ID verification system because of the resources needed to transition to an IP network. This issue was discussed by industry stakeholders at FCC's July 2019 summit on the caller ID verification system. One industry stakeholder stated that when calls that begin on a traditional wireline network are uploaded to an IP network, the originating service provider on that IP network will sign the call with the lowest level of verification, and that that information, in combination with analytics, will help providers to know whether these calls can be trusted. Verification of certain calls: As of June 2019, ATIS and industry stakeholders were also working to determine how to ensure that calls from 911 operators or video relay service calls for deaf and hard of hearing users are not blocked if providers are unable to verify the caller is authorized to use the phone number. <4.1. FCC Has Actively Encouraged Deployment of the Caller ID Verification System and Been Engaged with Its Development> Since 2013, FCC has taken several steps to encourage the industry's caller ID verification initiative. In doing so, FCC's efforts have aligned with federal guidance for agency participation in private-sector standards activities to help address national priorities. That guidance states that federal engagement in standards activities should aim to produce timely, effective standards that address legitimate regulatory, procurement, and policy objectives. The guidance also states that the federal government should assume an active role where necessary to ensure a rapid, coherent response to national challenges. Key steps FCC took to initiate and accelerate industry efforts in line with the OMB guidance to produce timely and effective standards are summarized below. In March 2013, FCC's Chief Technology Officer presented a vision of developing a caller ID verification system to combat spoofing at an Internet Engineering Task Force meeting, later referred to as a "call to action" by a technology stakeholder who played a key role in developing this system. In July 2016, FCC's Chairman issued a call to action for providers to accelerate their efforts to develop this system. FCC also called for responses detailing provider efforts. In December 2017, FCC directed one of its advisory bodies to, among other things, define criteria for selecting the system's governance authority and recommend milestones for system deployment. Consistent with the guidance that federal engagement should aim to produce timely, effective standards, FCC's Chairman urged service providers and standards groups to accelerate the development and deployment of these technical standards. In November 2018, the FCC Chairman sent letters to 14 U.S. providers and publicly demanded that they adopt the caller ID verification system by the end of 2019. While the demand did not legally require providers to deploy the system, the Chairman stated that if industry's progress lagged in 2019, FCC would take action to ensure widespread deployment.
This demand and warning represent preliminary steps consistent with the guidance's call for the federal government to assume an active role where necessary to ensure a rapid, coherent response to national challenges. In June 2019, FCC issued a notice of proposed rulemaking that would require all providers to implement the technical system if major providers fail to do so by the end of 2019. The notice also requested comments on how FCC should determine whether it is necessary to mandate implementation of the technical system and how to evaluate whether major voice service providers have met the FCC's end of 2019 deadline for implementation. According to FCC officials and consistent with the federal guidance, FCC has engaged with ATIS, providers, and relevant technical stakeholders throughout their caller ID verification efforts. For example, FCC officials attended key meetings, and an FCC official submitted technical suggestions on standards development related to the caller ID verification system. ATIS representatives told us that FCC's engagement in these technical efforts was helpful, as FCC was able to ask questions and prompt those working on the standards to consider some of the broader issues that various stakeholders would be concerned about and needed to be addressed. Furthermore, FCC is considering how, if at all, its role should evolve in the future. Notably, FCC's June 2019 notice also requested comments on what role FCC should have in the governance of the caller ID verification system, how to encourage carriers that maintain some portion of their network on legacy technology to implement elements of the system, and how FCC and industry can best leverage this system to combat illegal calls originating outside of the United States. FCC also directed staff to develop two reports over the next 2 years that, among other things, provide information on the state of deployment of this caller ID verification system. FCC officials stated that their efforts related to these issues encompass more than what is in the proposed regulations, as FCC will continue to monitor the work of the governance authority, the progress of service providers' implementation of the system, and industry's efforts to improve the effectiveness of the system and address remaining technical issues. Moreover, at FCC's July 2019 summit on the caller ID verification system, FCC's Chairman stated that FCC is prepared to issue rules in 2020 mandating that major providers implement the caller ID verification system if these major providers do not meet the 2019 deadline. <5. Agency Comments> We provided a draft of this report to FCC, FTC, and DOJ for review and comment. Each agency provided technical and editorial comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Chairman of the FCC, the Chairman of the FTC, the Attorney General, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or VonahA@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: List of Stakeholders GAO Interviewed Appendix II: Summary of Federal Agencies' Enforcement Actions Involving Telephone Calls that Allegedly Used Spoofed Caller ID Appendix III: GAO Contact and Staff Acknowledgments <6. GAO Contact:> Andrew Von Ah, (202) 512-2834 or VonahA@gao.gov. <7. Staff Acknowledgments:> In addition to the individual named above, other key contributors to this report were Alwynne Wilbur, Assistant Director; David Goldstein, Analyst-in-Charge; Mark Canter; Joshua Cicala; Jennifer Clayborne; Kristen Farole; Jeffery Haywood; Gina Hoover; Delwen Jones; Jenna Lada; Hannah Laufe; Harold Podell; Cheryl Peterson; and Malika Rice. Why GAO Did This Study
Unwanted phone calls, which may also involve spoofing, consistently rank among the top consumer complaints to FCC and FTC. In recent years, consumers have lost millions of dollars—and been deceived into providing financial or other sensitive information or purchasing falsely advertised products—due to schemes using these calls. FCC, FTC, and DOJ have efforts aimed at combatting the fraudulent use of caller ID spoofing.
Recently enacted federal legislation included a statutory provision for GAO to review federal efforts to combat the fraudulent use of caller ID spoofing. This report examines (1) what is known about caller ID spoofing schemes, including any recent trends; (2) federal agency enforcement and consumer education efforts; and (3) the status of industry efforts to develop technologies to combat spoofing, and FCC's role in these efforts.
To address these objectives, GAO reviewed consumer complaint data from FCC and FTC from 2015 through 2018; reviewed investigation and enforcement information from FCC, FTC, and DOJ; and interviewed agency officials and representatives from 23 nonfederal stakeholders, including industry associations, voice service providers, call blocking and analytics services, mobile phone manufacturers, consumer groups, and a standards body. GAO also reviewed relevant agency documentation and assessed agency efforts against key practices for consumer education and interagency collaboration identified in GAO reports.
What GAO Found
Transmitting fake caller ID information with a phone call, also referred to as “spoofing,” is in many cases illegal—and is used in schemes to obtain money and personal information or generate telemarketing leads. Complaints submitted to the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC), both of which work to protect consumers from spoofing, suggest that spoofing is a growing issue.
FCC, FTC, and the Department of Justice (DOJ) identified 62 enforcement cases they have brought since 2006 involving spoofing. Enforcement can be challenging, as it can be difficult to identify the source of spoofed calls, and scammers may be based overseas. Nevertheless, GAO found that the agencies prioritize their spoofing-related enforcement actions based in part on the level of harm perpetrated against the public and generally follow key practices identified by GAO for effective collaboration. Additionally, FCC and FTC have proposed changes to law to enhance the effectiveness of their enforcement efforts, such as a change that would allow FCC more time to bring certain enforcement actions. Furthermore, FCC's and FTC's consumer education efforts related to spoofing align with key practices for collaboration and consumer education. For example, FCC and FTC have developed consistent and clear messages related to spoofing.
Several major telecommunications carriers are taking key steps to put an industry-developed technical system in place designed to reduce spoofing by December 2019, which FCC has encouraged in line with federal guidance. This system is intended to enable carriers to verify whether a caller has a right to use the caller ID being transmitted with the call. Carriers can use this information to better determine whether to block or warn consumers about the incoming call. Stakeholders cautioned that the system cannot determine whether a caller has fraudulent intentions but only whether the caller is using a spoofed number. FCC has followed relevant federal guidance in participating in the development of this system by, for example, encouraging industry to accelerate deployment of the system, monitoring industry's progress, and providing input into the process. |
<1. Background> VHA's Family Caregiver Program is designed to provide support and services to family caregivers of post-9/11 veterans who have a serious injury that was incurred or aggravated in the line of duty. The program provides approved primary family caregivers with a monthly financial stipend as well as training and other support services, such as counseling and respite care. The Family Caregiver Program has a series of eligibility requirements that must be satisfied in order for family caregivers to be approved. To meet the program's initial eligibility criteria, the veteran seeking caregiver assistance must have a serious injury that was incurred or aggravated in the line of duty on or after September 11, 2001. According to the program's regulations, a serious injury is any injury, including traumatic brain injury (TBI), psychological trauma, or other mental disorder, that has been incurred or aggravated in the line of duty and renders the veteran or servicemember in need of personal care services. The veteran must be in need of personal care services for a minimum of 6 continuous months based on any one of the following clinical eligibility criteria: (1) an inability to perform one or more activities of daily living, such as bathing, dressing, or eating; (2) a need for supervision or protection based on symptoms or residuals of neurological or other impairment or injury such as TBI, post-traumatic stress disorder, or other mental health disorders; (3) the existence of a psychological trauma or a mental disorder that has been scored by a licensed mental health professional, with a Global Assessment of Functioning score of 30 or less, continuously during the 90-day period immediately preceding the date on which VHA initially received the application; or (4) the veteran has been rated 100 percent service-connected disabled for a qualifying serious injury and has been awarded special monthly compensation that includes an aid and attendance allowance. To be considered competent to care for the veteran, family caregivers must meet certain requirements including (1) having the ability to communicate and follow details of the treatment plan and instructions related to the care of the veteran; (2) not having been determined by VA to have abused or neglected the veteran; (3) being at least 18 years of age; and (4) either being a family member such as a spouse, son or daughter, parent, step-family member, or extended family member or an unrelated person who lives or will live full-time with the veteran. Family caregivers must also complete required training before being approved for the program. <1.1. Family Caregiver Program Organizational Structure> VHA's Caregiver Support Program office is responsible for developing policy and providing guidance and oversight for the Family Caregiver Program. It also directly administers the program's stipend, provides support services such as a telephone hotline and website, and arranges coverage through the Civilian Health and Medical Program of the Department of Veterans Affairs (CHAMPVA) for eligible caregivers if they have no other coverage. Further, the office provides funding to VAMCs to cover certain program costs.
These costs may include the salaries of the caregiver support coordinators (CSC), who implement and administer the Family Caregiver Program at the local VAMC level, and the costs VAMCs incur for having their clinical staff, such as nurses, conduct the program's required in-home visits to approved caregivers and their veterans. CSCs are generally licensed social workers or registered nurses, and they have both clinical and administrative responsibilities. Their clinical responsibilities may include identifying and coordinating appropriate interventions for caregivers or referrals to other VA or non-VA programs, such as mental health treatment, respite care, or additional training and education. Their administrative responsibilities may include responding to inquiries about the program, overseeing the application process, entering information about applications and approved caregivers into IT systems, and facilitating the processing of appeals. As of May 2014, there were 233 CSCs assigned to 140 VAMCs or health care systems across the country. Additionally, each regional VISN office has a VISN CSC lead for the program, who provides guidance to CSCs and helps address their questions or concerns. <1.2. GAO Has Previously Reported on the Family Caregiver Program IT System Limitations> CAT, which was deployed in May 2011, is a web-based system that was designed to facilitate the exchange of information about approved caregivers between VAMCs and other VHA entities. Such entities include the Health Administration Center, which processes the caregiver stipend payments and administers CHAMPVA. In 2014, we reported that the Caregiver Support Program office was not able to easily retrieve data from CAT that would allow officials to better assess workload trends at individual VAMCs, such as the length of time applications are delayed or the timeliness of home visits, even though these data were already captured in the system. Caregiver Support Program officials only retrieved workload data on an ad hoc, as-needed basis, which limited their ability to assess the scope and extent of workload problems comprehensively at individual VAMCs and on a system-wide basis. Program officials also expressed concern about the reliability of the system's data. As we noted in our report, program officials also identified the need for a more capable and flexible system that could interface with other departmental systems. The officials told us that they had taken initial steps to obtain another IT system to support the Family Caregiver Program; however, the officials were not sure how long it would take to implement the system. Accordingly, we recommended that VA expedite the process for identifying and implementing a system that would fully support the Family Caregiver Program. VA concurred with our recommendation and subsequently began taking actions in 2015 to implement a replacement system. These actions included taking steps toward implementing short-term improvements to CAT that were to be followed by the implementation of a long-term replacement system. The recommendation continues to remain open.
Johnson VA Maintaining Internal Systems and Strengthening Integrated Outside Networks Act of 2018 (VA MISSION Act), which was enacted in June 2018, included provisions directing VA to implement an IT system to support the Family Caregiver Program and the incremental expansion of program eligibility. Specifically, the act required VA to implement an IT system to fully support the Family Caregiver Program by October 1, 2018. According to the act, the system is to allow for data assessment and comprehensive monitoring of the program. In particular, the system is to have, among other things, the ability to (1) retrieve data to monitor workload trends at the medical center and aggregate levels; (2) manage an increased number of caregivers as the program expands; and (3) integrate with other relevant IT systems at VHA. The act also stated that VA was to submit an initial report to Congress regarding the status of the planning, development, and deployment of this system within 90 days of enactment of the VA MISSION Act, and that the department is to submit a final report to Congress by October 1, 2019. The final report is to include a certification by the VA Secretary that the system has been implemented, along with a description of how the Secretary is using the system to monitor the workload of the program. <2. VA Has Not Yet Implemented an IT System That Effectively Supports the Family Caregiver Program> Although we previously recommended that VA expedite implementation of a replacement for CAT, and the MISSION Act directed the department to implement an IT system to support the Family Caregiver Program, VA has not yet been successful in its multiple efforts to implement such a system. Specifically, VA has faced a number of difficulties in developing and implementing short-term improvements as well as a long-term replacement system for CAT. In July 2015, VHA and the Office of Information and Technology (OIT) initiated a joint acquisition project, called CAT Rescue, to update CAT and improve the system s data reliability. However, the department reported in January 2017 that this project had experienced delays and identified a large number of defects during system testing. VA terminated the project in April 2018 before any new system capabilities were implemented. A companion project to CAT Rescue that VA initiated in September 2015 was to develop the Caregivers Tool (CareT), a new system intended to be a long-term replacement for CAT. As envisioned, this system was to use the improved data from CAT Rescue while also adding new system capabilities. However, the user acceptance testing of CareT identified the need for the department to develop more system capabilities than originally planned. Further, the department determined that the time period needed to perform additional system development would have extended beyond the term of the development contract, which ended in April 2017. VA subsequently awarded a new CareT development contract in July 2017. However, after additional system development, the department determined during user acceptance testing that the system was not performing as expected and implementation of CareT was further delayed. In October 2018, the department reported to congressional committees that implementing a system to fully support the Family Caregiver Program by the VA MISSION Act deadline was not feasible. Subsequently, the department determined that CareT was not a viable solution and VHA and OIT terminated work on the system in February 2019. 
VHA and OIT began a third effort in March 2019 to acquire a replacement system that is to be based on an existing commercial product. According to OIT officials, the new IT solution, referred to as the Caregiver Record Management Application (CARMA), is intended to replace CAT. However, the department has not yet established a date for completing CARMA. Thus, VA s efforts to implement an IT system that supports the Family Caregiver Program have been continuing with no end in sight. We have ongoing work to further evaluate the status and progress of the department s efforts to implement a system to support the Family Caregiver Program consistent with the VA MISSION Act requirements. Figure 1 provides a timeline of the various IT projects that VA has undertaken to support the program. <3. Critical Factors Underlying Successful IT Acquisitions> Our prior work has determined that successfully overcoming IT acquisition challenges can best be achieved when critical success factors are applied. Specifically, we reported in 2011 on common factors critical to the success of IT acquisitions, based on seven agencies having each identified the acquisition that best achieved the agency s respective cost, schedule, scope, and performance goals. These factors remain relevant today and can serve as a model of best practices that agencies can apply to enhance the likelihood that the acquisition of an IT system such as CARMA will be successfully achieved. Among the agencies seven IT investments, agency officials identified nine factors as having been critical to the success of three or more of the seven investments. These nine critical success factors are consistent with leading industry practices for IT acquisition. The factors are: Active engagement of program officials with stakeholders. Qualified and experienced program staff. Support of senior department and agency executives. Involvement of end users and stakeholders in the development of requirements. Participation of end users in testing system functionality prior to formal end user acceptance testing. Consistency and stability of government and contractor staff. Prioritization of requirements by program staff. Regular communication maintained between program officials and the prime contractor. Sufficient funding. Officials for all seven selected investments cited active engagement with program stakeholders individuals or groups (including, in some cases, end users) with an interest in the success of the acquisition as a critical factor to the success of those investments. Agency officials stated that stakeholders, among other things, reviewed contractor proposals during the procurement process, regularly attended program management office sponsored meetings, were working members of integrated project teams, and were notified of problems and concerns as soon as possible. Further, officials from two investments noted that actively engaging with stakeholders created transparency and trust, and increased the support from the stakeholders. Additionally, officials for six of the seven selected investments indicated that the knowledge and skills of the program staff were critical to the success of the program. This included knowledge of acquisitions and procurement processes, monitoring of contracts, large-scale organizational transformation, Agile software development concepts, and areas of program management such as earned value management and technical monitoring. 
Finally, officials for five of the seven selected investments identified having the end users test and validate the system components prior to formal end user acceptance testing for deployment as critical to the success of their program. Similar to this factor, leading guidance recommends testing selected products and product components throughout the program life cycle. Testing of functionality by end users prior to acceptance demonstrates, earlier rather than later in the program life cycle, that the functionality will fulfill its intended use. If problems are found during this testing, programs are typically positioned to make changes that would be less costly and disruptive than ones made later in the life cycle. In conclusion, VA has invested considerable time in multiple efforts toward improving and replacing its IT system to better serve the Family Caregiver Program. However, even with these efforts, the department has not yet implemented a system and the program is not prepared for expansion. Going forward, it is important that VA take steps to improve its efforts to implement a replacement IT system for the Family Caregiver Program. In this regard, the department could benefit from applying critical success factors we previously reported as leading to successful federal IT acquisitions. These factors can serve as a model of best practices that the department can apply to enhance the likelihood that its effort to replace the IT system for the Family Caregiver Program will be successful. Chairs Lee and Brownley, Ranking Members Banks and Dunn, and Members of the Subcommittees, this completes my prepared statement. I would be pleased to respond to any questions that you may have. <4. GAO Contact and Staff Acknowledgments> If you or your staffs have any questions about this testimony, please contact Carol C. Harris, Director, Information Technology Management Issues, at (202) 512-4456 or harriscc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony statement. GAO staff who made key contributions to this testimony are Mark Bird (Assistant Director), Rebecca Eyler, Jacqueline Mai, Monica Perez-Nelson, Scott Pettis, and Jennifer Stavros-Turner (Analyst in Charge). This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Why GAO Did This Study
To provide greater support for caregivers of post-9/11 veterans, Congress and the President enacted legislation requiring VA to establish a program to assist caregivers with the rigors of caring for seriously injured veterans. In May 2011, the Veterans Health Administration (VHA), which operates VA's health care system, established the Family Caregiver Program at each of its VA medical centers across the United States. At that time, the department implemented an IT system, called CAT, to help support the program. Subsequently, the VA MISSION Act was enacted in June 2018, requiring VA to implement an IT system to fully support the Family Caregiver Program by October 1, 2018. Further, VA's Secretary is to certify the system by October 1, 2019.
GAO was asked to discuss its September 2014 report that examined how VHA is implementing the Family Caregiver Program. In addition, the statement includes relevant information VA provided on its actions toward addressing GAO's prior recommendation. The statement also discusses critical success factors related to IT acquisitions as identified in GAO's prior work. The reports cited throughout this statement include detailed information on the scope and methodology of GAO's prior reviews.
What GAO Found
In September 2014, GAO reported on the Department of Veterans Affairs' (VA) Program of Comprehensive Assistance for Family Caregivers (Family Caregiver Program) and found that the program office had limitations with its information technology (IT) system—the Caregiver Application Tracker (CAT). Specifically, the program did not have ready access to workload data that would allow it to monitor the effects of the program on VA medical centers' resources. VA has initiated various projects since 2015 to implement a new system, but has not yet been successful in its efforts. (See figure.) Specifically, in July 2015 VA initiated a project to improve the reliability of CAT's data, called CAT Rescue. However, the department reported in January 2017 that it had identified numerous defects during system testing. The project ended in April 2018 before any new system capabilities were implemented. A companion project was initiated in September 2015 to develop the Caregivers Tool (CareT), a new system intended to replace CAT. The CareT project was expected to use improved data from CAT Rescue, while also adding new system capabilities. However, the user acceptance testing of CareT identified the need for the department to develop more system capabilities than originally planned. Further, VA reported that implementing a system by October 1, 2018, as specified in the Maintaining Internal Systems and Strengthening Integrated Outside Networks Act of 2018 (MISSION Act), was not feasible. Subsequently, VA terminated CareT in February 2019. The department initiated another project in March 2019 to implement a new system, the Caregiver Record Management Application (CARMA). GAO has ongoing work to evaluate the department's efforts to implement an IT system to support the Family Caregiver Program as required by the MISSION Act.
GAO's prior work has determined that successfully overcoming IT acquisition challenges can best be achieved when critical success factors are applied. These factors can serve as a model of best practices that VA could apply to enhance the likelihood that the acquisition of a replacement IT system for the Family Caregiver Program will be successfully achieved. Examples of these critical success factors include, maintaining active engagement of program officials with stakeholders, involving end users and stakeholders in the development of requirements, and ensuring participation of end users in testing system functionality prior to formal end user acceptance testing.
What GAO Recommends
GAO recommended in 2014 that VA expedite the process for identifying and implementing an IT system that would fully support the Family Caregiver Program. VA concurred with the recommendation and subsequently began taking steps to implement a replacement system. The recommendation remains open. |
<1. Background> <1.1. Sources of Retirement Income> There are three main pillars of retirement income in the United States: Social Security benefits, employer-sponsored or other retirement savings plans, and individual savings and assets. <1.1.1. Social Security> Social Security is a cash benefit that partially replaces earnings when an individual retires or becomes disabled. The monthly benefit amount depends on a worker's earnings history and the age at which he or she chooses to begin receiving benefits, as well as other factors. Social Security benefits are paid to workers who meet requirements for the time they have worked in covered employment, that is, jobs through which they have paid Social Security taxes. To qualify for retirement benefits, workers must typically have earned a minimum of 40 quarters of coverage (also referred to as credits) over their lifetime. Social Security benefits are calculated based on the highest 35 years of earnings on which workers paid Social Security taxes. Those who wait until the full retirement age, which has gradually increased from 65 to 67, to claim Social Security receive unreduced benefits. Social Security provides larger benefits, as a percentage of earnings, to lower earners than to higher earners. Social Security makes up a large portion of income for many older Americans, and older Americans face greater risk of poverty without Social Security benefits. We previously reported that data from the Federal Reserve Board's most recent Survey of Consumer Finances showed that in 2016, among households age 65 and over, the bottom 20 percent, ranked by income, relied on Social Security retirement benefits for 81 percent of their income, on average. According to a 2014 Census report, about 43 percent of people age 65 or older would have incomes below the poverty line if they did not receive Social Security. <1.1.2. Employer-Sponsored or Other Retirement Savings Plans> The most common type of employer-sponsored retirement plan is a defined contribution plan, such as a 401(k) plan. Defined contribution plans generally allow individuals to accumulate tax-advantaged retirement savings in an individual account based on employee and employer contributions, and the investment returns (gains and losses) earned on the account. Individuals or employers may make contributions up to statutory limits. Individuals typically pay fees for account maintenance, such as investment management or record keeping fees. An employee may take funds out of the account prior to age 59½, but will owe taxes, possibly including an additional tax, for early withdrawal. Workers can also save for retirement through an individual retirement account (IRA). IRAs allow workers to receive favorable tax treatment for making contributions to an account up to certain statutory limits. Most IRAs are funded by assets rolled over from defined benefit and defined contribution plans when individuals change jobs or retire. Individuals must have taxable earnings to contribute to an IRA, and the amount of their contribution cannot exceed their earned income. IRAs also have account maintenance fees, which are generally higher than those charged to participants in employer-sponsored plans. IRAs are a major source of retirement assets. As we reported in 2017, IRAs held about $7.3 trillion in assets compared to $5.3 trillion held in defined contribution plans. <1.1.3.
Individual Savings and Assets> Individuals may augment their retirement income from Social Security and employer-sponsored plans with their own savings, which includes any home equity and other non-retirement savings and investments. Non- retirement savings and investments might include income from interest, dividends, estates or trusts, or royalties. <1.2. Selected Federal and State Efforts to Support Caregivers> Through our review of literature and interviews with experts, we identified several federal and state efforts that may provide support to caregivers: Medicaid. This federal-state health financing program for low-income and medically needy individuals is the nation s primary payer of long- term services and supports for disabled and aged individuals. Within broad federal requirements, states have significant flexibility to design and implement their programs based on their unique needs, resulting in 56 distinct state Medicaid programs. Under Medicaid requirements governing the provision of services, states generally must provide institutional care to Medicaid beneficiaries, while home and community based long-term services and supports is generally an optional service. All 50 states and the District of Columbia provide long-term care services to some Medicaid beneficiaries in home and community settings under a variety of programs authorized by statute. Some of these programs include self-directed services under which participants, or their representatives if applicable, have decision- making authority over certain services and take direct responsibility for managing their services with the assistance of a system of available supports. Under one such program, participants can hire certain relatives to provide personal care services. Tax-related provisions. Caregivers may be able to use dependent care accounts, tax credits, or tax deductions for financial assistance with caregiving costs. Dependent care accounts are set up through an employer and allow individuals to set aside pre-tax funds to care for a qualifying individual, such as a spouse who is unable to care for himself or herself. As an example of a tax credit, beginning in 2018, caregivers may be eligible to obtain a $500 non-refundable credit for qualifying dependents other than children, such as a parent or a spouse. As an example of a deduction, taxpayers may deduct the cost of qualifying medical expenses. The Family and Medical Leave Act of 1993 (FMLA). This act generally provides up to 12 weeks of unpaid leave per year for eligible employees to help care for a spouse, child, or parent with a serious health condition or for their own serious health condition, among other things. Employees are generally eligible for FMLA leave if they have worked for their employer at least 12 months, at least 1,250 hours over the past 12 months, and work at a worksite where the employer employs 50 or more employees or if the employer employs 50 or more employees within 75 miles of the worksite. The Older Americans Act of 1965. This act was passed to help older individuals remain in their homes and includes grant funding for services for older individuals. Since its reauthorization in 2000, the Older Americans Act of 1965 has provided supports for caregivers through programs such as the National Family Caregiver Support Program. This program provides grants to states to fund a range of supports to help caregivers. For example, the program provides access to respite care. 
According to the National Institute on Aging, respite care provides in-home or facility-based care by a trained care provider to give the primary caregiver short-term relief from caregiving. Paid sick leave. This form of leave provides pay protection to workers for short-term health needs, and paid family leave is used by employees for longer-term caregiving. No federal sick or paid family leave policy exists. However, as of March 2019, 10 states (AZ, CA, CT, MA, MD, NJ, OR, RI, VT, WA) and the District of Columbia (DC) have guaranteed paid sick days for specific workers, according to the National Partnership for Women and Families, with eligibility varying by state. As of February 2019, six states (CA, NJ, NY, RI, MA, and WA) and DC have paid family leave laws in effect or soon will be implementing them, according to the National Partnership for Women and Families. The covered family relationships, wage replacement rate, and funding mechanism of these programs vary by state. <2. About One in 10 Americans Provided Parental or Spousal Care, with Women and Minority Caregivers Providing More Frequent Care> <2.1. Most Eldercare Providers Cared for a Parent or Spouse> An estimated 45 million people per year provided unpaid eldercare from 2011 through 2017, according to American Time Use Survey (ATUS) data. About 26 million people roughly one in 10 adults in the U.S. population cared for their parent or spouse, and about 22 million people cared for other relatives, such as grandparents, aunts and uncles, or non- related adults (see fig. 1). Among parental and spousal caregivers, 88 percent (about 23.4 million people) provided care to a parent, and 12 percent (3.2 million people) provided care to a spouse. About 7.4 million parental or spousal caregivers (close to 30 percent) provided care for more than one person. <2.2. Parental and Spousal Caregivers Had Similar Demographic Characteristics but Different Economic Circumstances> We examined several demographic and economic characteristics of parental and spousal caregivers compared to the general population. <2.2.1. Gender> Women and men were almost evenly divided in the general population, but women were more likely than men to be parental or spousal caregivers, according to ATUS data from 2011 through 2017. Women made up 52 percent of the general population, but represented 56 percent of parental caregivers and 63 percent of spousal caregivers (see fig. 2). Parental caregivers were younger than spousal caregivers, but both groups were older, on average, than the general population. The average age of parental caregivers was 50, and the average age of spousal caregivers was 70, according to ATUS data. While about half of the general population was under 45, most parental caregivers were over 50, and most spousal caregivers were over 65 (see fig. 3). While far fewer in number, spousal caregivers were considerably older than parental caregivers. Almost three-quarters of spousal caregivers were over Social Security claiming age for full retirement benefits compared to less than 10 percent of parental caregivers. The racial/ethnic distribution of parental and spousal caregivers was consistent with the general population in that a significant majority of caregivers were white. When compared to the general population, caregivers were more likely to be white and less likely to be minorities. <2.2.2. 
Marital Status> The distribution in the marital status of parental caregivers was similar to the general population in that most people in the general population were married, followed by single, divorced, widowed, and separated. About two-thirds of parental caregivers were married, and not surprisingly, almost all spousal caregivers were married. <2.2.3. Education> Parental caregivers were more educated than spousal caregivers and the general population, according to ATUS data. For example, 38 percent of parental caregivers had completed college compared to 26 percent of spousal caregivers (see fig. 4). These differences may reflect that spousal caregivers are generally older and may come from a generation in which women were less likely to attend college. Parental caregivers were more likely to be employed and to have higher earnings than spousal caregivers and those in the general population. Over 70 percent of parental caregivers worked either full-time or part-time compared to 26 percent of spousal caregivers and 62 percent of the general population (see fig. 5). This may be related to the older age of many spousal caregivers, as the percentage of spousal caregivers out of the labor force was about equal to the percentage over age 65. Further, parental caregivers tended to earn higher wages than spousal caregivers. Among wage and salary workers with a single job, parental caregivers earned $931 per week while spousal caregivers earned $513 per week, and the general population earned $743 per week, according to ATUS data. <2.3. Women Caregivers Were More Likely to Work Part- time and Have Lower Earnings than Men Caregivers> We found that women who provided parental or spousal care were more likely to be employed part-time and to have lower earnings than men who were parental or spousal caregivers (see fig. 6). Women caregivers were less likely to work than men caregivers, but among those who worked, women caregivers were more likely to work part-time, according to ATUS data. For example, among parental caregivers, 66 percent of women were employed either full-time or part-time compared to 77 percent of men, but 17 percent of women worked part-time compared to 10 percent of men. Similarly, among spousal caregivers, women were less likely to be employed than men. In addition, differences in the employment status of women and men caregivers are similar to differences between women and men in the general population. When we examined the distribution of men and women caregivers in earnings quartiles, we found that men caregivers were more likely to be among the highest earners. For parental caregivers, 43 percent of men compared to 25 percent of women were among the highest earners. For spousal caregivers, 22 percent of men compared to 14 percent of women were among the highest earners. Regression results show that these differences between men and women caregivers were significant for parental and spousal caregivers, and remained significant after controlling for caregiver age and years of education. In terms of education, women parental caregivers were more likely to have completed some college or more (69 percent) while women spousal caregivers were less likely to have done so (50 percent) compared to men parental and spousal caregivers (63 and 56 percent, respectively). Similar to the education levels of the parental and spousal caregiving populations generally, these results may reflect generational differences. <2.4. 
Women, Minorities, and Those with Lower Education and Earnings Levels Provided More Frequent Care> Spousal caregivers were more likely to provide care daily compared to parental caregivers, and parental caregivers who lived in the same house as their parents were unsurprisingly more likely to provide care daily than those who did not, according to ATUS data. The vast majority of spousal caregivers (81 percent) provided care on a daily basis compared to 21 percent of parental caregivers. When we examined the frequency of caregiving among those who lived in the same house as their parents, we found that about 63 percent of these parental caregivers provided care daily, suggesting there is a positive relationship between frequency of care and cohabitation (see fig. 7). Experts we spoke with said the frequency of care may depend on whether the care recipient has a disability and the type of disability. For example, someone with a severe disability may be more likely to require care daily compared to someone with a less severe disability. Women and minorities tended to provide care more frequently. Among parental and spousal caregivers, 30 percent of women provided care daily compared to 25 percent of men. While the majority of caregivers were white, as discussed above, black and Hispanic caregivers were more likely to provide daily care than white caregivers 35 percent of black caregivers and 39 percent of Hispanic caregivers provided care daily compared to 26 percent of white caregivers (see fig. 8). While most parental caregivers were married, parental caregivers who were never married were more likely to provide daily care than divorced, widowed, separated, and married caregivers. Daily caregiving may be concentrated among those with the fewest financial resources. Parental or spousal caregivers with lower levels of education and earnings were more likely to provide care daily (see fig. 9). For example, 48 percent of caregivers without a high school degree provided care daily compared to 21 percent who had completed college. Those who worked part-time were also more likely to provide care daily compared to those who worked full-time (27 percent versus 18 percent, respectively). Those who provided care daily were also more likely to be among the lowest earners. In addition to examining frequency of care, we also found that most parental or spousal caregivers provided care that lasted several years. The majority of parental or spousal caregivers (54 percent) provided care for at least 3 years, and 16 percent provided care for 10 years or more. On average, parental or spousal caregivers provided care for about 5 years, regardless of gender. The number of years of care provided increased with the age of the parental or spousal caregivers (see fig. 10). Women caregivers, spousal caregivers, and Hispanic caregivers were more likely to provide long-term daily care. Among parental or spousal caregivers who said they provided care daily and provided care for at least 5 years, 61 percent were women. In comparison, among all parental and spousal caregivers, 56 percent were women. Twenty-nine percent of spousal caregivers provided long-term daily care compared to 8 percent of parental caregivers. In addition, 16 percent of Hispanic caregivers provided long-term daily care compared to 10 percent of whites and 12 percent of blacks. <3. Some Caregivers Experienced Adverse Effects on Their Jobs and on Their Retirement Assets and Income> <3.1. 
Parental and Spousal Caregivers Said Caregiving Affected Their Work> An estimated 68 percent of working parental and spousal caregivers said they experienced at least one of eight job impacts about which they were asked, according to our analysis of data used in the 2015 National Alliance for Caregiving and AARP sponsored study, Caregiving in the U.S. The highest percentage of parental and spousal caregivers more than half reported that they went in late, left early, or took time off during the day to provide care (see fig. 11). Spousal caregivers were more likely to experience adverse job impacts than parental caregivers. About 81 percent of spousal caregivers said they experienced at least one of the eight job impacts they were asked about compared to 65 percent of parental caregivers. Spousal caregivers were more likely to reduce their work hours, give up work entirely, or retire early, compared to working parental caregivers. For example, 29 percent of spousal caregivers said they went from working full-time to part-time or cut back their hours due to caregiving, compared to 15 percent of parental caregivers. Our prior work has reported that some older workers felt forced to retire for professional or personal reasons and that individuals approaching retirement often have to retire for reasons they did not anticipate, including caregiving responsibilities. In addition, our prior work has reported that job loss for older workers, in general, can lead to lower retirement income, claiming Social Security early, and exhaustion of retirement savings. We also found that older workers face many challenges in regaining employment. Consistent with these results, we also found that spousal caregiving was negatively associated with the number of hours caregivers worked. Specifically, spousal caregivers who were ages 59 to 66 worked approximately 20 percent fewer annual hours than married individuals of the same age who did not provide spousal care, according to HRS data from 2002 to 2014. <3.2. Spousal Caregivers Nearing Retirement Had Less in Retirement Assets and Income While Parental Caregivers Did Not> We found that spousal caregivers who were at or near the age of full retirement eligibility had lower levels of IRA assets, non-IRA assets, and Social Security income compared to those who did not provide care. We did not detect the same relationship between parental caregiving and retirement income, which may be due, in part, to the older age of the caregivers we examined. <3.2.1. Retirement Assets and Income of Spousal Caregivers> Spousal caregivers at or near retirement age had lower levels of retirement assets and income compared to married individuals who did not provide spousal care. Spousal caregivers tended to have lower levels of IRA assets, non-IRA assets such as real estate or stocks and Social Security income than non-caregivers (see table 1). After controlling for certain characteristics of caregivers, we found that spousal caregivers still had less retirement assets and income than non- caregivers. For example, spousal caregivers had an estimated 39 percent less in non-IRA assets than non-caregivers, after controlling for characteristics such as level of education and race/ethnicity. When we compared women and men spousal caregivers, we found both had less in IRA and non-IRA assets than non-caregivers, but only women had less in Social Security income. 
Specifically, we found that women and men caregivers had 37 to 54 percent less in IRA and non-IRA assets than non-caregivers, after controlling for demographic and other characteristics. However, the effect of spousal caregiving on Social Security income was only significant among women. Women caregivers had 15 percent less in Social Security income than married women who did not provide care. Many older Americans rely on Social Security for a significant portion of their retirement income. Therefore, a lower Social Security benefit could have serious consequences for these individuals retirement security. One possible explanation experts offered for why spousal caregivers may have less in retirement income and assets than non-caregivers is that the care recipient may be in poor health, resulting in reduced workforce participation of both members of the household, which could then have a large negative impact on household wealth. This scenario could leave spousal caregivers in a precarious financial situation heading into retirement. <3.2.2. Retirement Assets and Income of Parental Caregivers> We did not find that parental caregivers at or near retirement age had lower levels of retirement assets or income than non-caregivers. We compared the retirement assets and income of parental caregivers to the retirement assets and income of individuals who did not provide parental care and did not find a statistically significant effect of parental caregiving on IRA assets, non-IRA assets, defined contribution balances, or Social Security income. See appendix I for more information on this analysis. We may not have seen a significant effect of parental caregiving for a few reasons. First, because of the scope of the HRS data we used, we limited the analysis to individuals who provided care in the 6 years leading up to ages 65 or 66. Therefore, this analysis does not capture the possible effects of parental caregiving prior to age 59, which may be during the middle of a person s career or during their peak earning years. Second, similar to spousal caregivers, experts said a caregiver may reduce their workforce participation to care for a parent; however, parental caregiving may not affect household income because married caregivers spouses may be able to continue working and offset any lost earnings. In addition, unlike spousal care, parental care may be provided by multiple individuals, so the effect on retirement security may be distributed across siblings. <3.2.3. Challenges in Comparing Caregivers to Non-caregivers> Our analysis could not definitively identify the causal effect or lack of effect of caregiving on retirement income due to three main limitations. First, because caregiving is not random but is a function of an individual s circumstances, it is difficult to isolate its effect. For example, individuals who provide care may do so because they have jobs that are more flexible, or because they have better family support. Second, there may be other ways of providing care beyond an individual giving their time that were not captured in the HRS data and therefore could not be included in our analysis. For example, a child may provide financial assistance to a parent rather than providing time. However, the HRS does not capture whether financial help to parents was specifically used for caregiving expenses. Third, common to analyses of this type, alternate measures of certain variables may produce different estimates. 
For example, we controlled for a caregiver's level of education based on data included in the HRS; however, a measure of education that included the type of education, such as whether the person was a trained caregiver, might have changed our estimates. As a result of these limitations, our estimates may not capture the effect of caregiving on retirement income for the broader population. <4. Experts Said a Comprehensive Framework That Incorporates Actions across Policy Categories Could Improve Caregivers' Retirement Security> <4.1. Caregivers Face Several Retirement Security Challenges> Our analysis of literature and expert interviews found that parental or spousal caregivers could face several retirement security challenges: Caregivers may have high out-of-pocket expenses. Caregivers may face immediate out-of-pocket expenses that could make it difficult to set aside money for retirement or that could require them to prematurely withdraw funds from existing retirement accounts. These financial burdens can include, for example, travel and medical expenses for a care recipient. AARP's study, Family Caregiving and Out-of-Pocket Costs, estimated that family caregivers spent an average of nearly $7,000 on caregiving costs in 2016. Caregiving costs amounted to about 14 percent of income for white family caregivers and 44 percent and 34 percent for Hispanic and black caregivers, respectively. Caregivers may reduce their workforce participation. In addition to foregone earnings, caregivers who reduce their workforce participation may also lose access to employer-provided retirement benefits, such as participating in an employer-sponsored 401(k) plan or receiving an employer's matching contributions. About 68 percent of working parental and spousal caregivers reported job impacts due to caregiving responsibilities, which included reducing their workforce participation. For those who leave the workforce, re-entry can be challenging, and wages and retirement savings can be negatively affected long-term. Caregivers may not contribute to retirement accounts. Caregivers may face challenges contributing to retirement accounts due to caregiving, and some working caregivers may not be eligible for employer-sponsored retirement benefits. For example, some part-time employees may not be eligible to participate in employer-sponsored retirement plans, or some employees may lose access if they reduce their workforce participation. Individual and employer-sponsored retirement accounts serve as important supplements to Social Security as income replacements in retirement. Caregivers may have lower Social Security benefits. Caregivers may have less in Social Security benefits if they reduce their workforce participation. Social Security benefits are calculated using the highest 35 years of earnings. If a caregiver retires after working for 33 years, he or she would have 2 years of zero income in the benefit calculation, which would result in lower benefits throughout retirement compared to what the benefit would have been with a full 35-year earnings history. Social Security makes up a large portion of retirement income for many older Americans, so a lower Social Security benefit could have significant consequences for financial security. <4.2. Four Policy Categories Encompass Actions That Could Improve Caregivers' Retirement Security> We identified four policy categories that could potentially address retirement security challenges faced by caregivers.
To do so, we identified specific actions that could improve caregivers' retirement security based on a review of literature and interviews with experts. We then grouped these actions into four categories: 1) decrease caregivers' out-of-pocket expenses, 2) increase caregivers' workforce attachment and wage preservation, 3) increase caregivers' access or contributions to retirement accounts, and 4) increase caregivers' Social Security benefits. See figure 12 for example actions in each category. <4.3. Experts Said Some Policy Categories Could Better Help Women and Low-Income Caregivers and All Have Costs> Experts we interviewed identified potential benefits of each of the four policy categories. They also identified specific groups of parental or spousal caregivers who could benefit, including women, lower-income caregivers, and working caregivers (see table 2). As discussed previously, women were more likely to provide parental and spousal care, to work part-time, and to have lower earnings than men caregivers. In addition, over one-third of parental caregivers and almost two-thirds of spousal caregivers were in the bottom two income quartiles, and caregivers in the bottom earnings quartile were more likely to provide care daily. Experts also said all four categories have potential costs and challenges (see table 3). Experts identified three implementation issues that would need to be addressed regardless of the policy category. Determining responsibility for implementation. It is unclear who would be responsible for implementing and funding certain actions under each approach, according to experts. Some may require legislative changes, steps by employers, or public-private partnerships that integrate both sectors. The RAISE Family Caregivers Act, enacted in January 2018, requires the Department of Health and Human Services (HHS) to develop a strategy, including recommendations related to financial security and workforce issues, to support family caregivers and to convene an advisory council to help develop the strategy. The advisory council will include representatives from federal agencies, employers, state and local officials, and other groups. Between October 12, 2018 and December 3, 2018, HHS sought nominations for individuals to serve on the advisory council. Defining caregiving for benefit eligibility. Experts said some actions may require a definition of caregiving to use in determining eligibility for benefits. Current definitions related to federal caregiving policy vary. For example, FMLA defines a caregiver by specific familial relationships. In contrast, the RAISE Family Caregivers Act defines a family caregiver more broadly as an adult family member or other individual who has a significant relationship with, and who provides a broad range of assistance to, an individual with a chronic or other health condition, disability, or functional limitation. Identifying and verifying caregivers. Experts said some actions may require a mechanism for identifying and verifying a caregiver's status. Experts noted that many caregivers do not identify themselves as such, particularly those caring for a spouse, and therefore do not claim existing benefits. In addition, certain actions may require a decision about whether benefits extend to the primary caregiver or to all caregivers, for example, siblings who may jointly provide care to a parent. <4.4.
Experts Said Implementing Actions across Policy Categories and Enhancing Public Awareness Would Help Address Caregivers' Needs> Several experts we interviewed said caregivers could benefit more from a retirement system that incorporates actions across the policy categories so that actions can work in tandem to address caregivers' needs. For example, if caregivers have lower out-of-pocket caregiving costs, they might be able to contribute more to their retirement savings. If caregivers can contribute more to their retirement savings because they have better access to accounts, they might have to rely less on Social Security in retirement. Some experts pointed to Hawaii's Kupuna Caregivers Program as an example of a program with complementary goals to alleviate out-of-pocket expenses and reduce barriers to staying fully employed while providing care for a family member. Specifically, according to experts, the program provides a financial benefit of $70 per day for up to 365 days to caregivers who work at least 30 hours a week to spend on respite care, home health care workers, meal preparation, and transportation costs for a care recipient age 60 or older. Although the program is in the early stages of implementation, experts said several states already see it as a model for meeting these two goals. Experts also said it would be helpful to implement actions that address the needs of caregivers in the long- and short-term and across their lifespans. In general, experts said each of the policy categories could help longer-term caregivers more than short-term caregivers. However, they said certain actions to decrease caregivers' out-of-pocket expenses or to increase workforce attachment could also help in addressing immediate needs. For example, experts said actions such as paid time off and flexible work schedules could help those caring for individuals with acute conditions to attend doctor's appointments. Experts also said policies should address the needs of caregivers with different levels of workforce attachment. For example, one expert said there are disparate policy impacts to consider depending on whether someone is a salaried worker, an hourly worker, or a caregiver who does not work. Similarly, someone who depends on other types of government assistance, such as Social Security Disability Insurance, may also have different needs. Another expert said the age at which caregiving takes place may impact retirement security; people may be caring for older parents or a spouse at a point in their careers when they are supposed to be catching up on retirement contributions or have peak earnings, so they may not be able to make up for lost time in terms of retirement savings. Finally, several experts mentioned public awareness as critical to helping people understand the implications of caregiving on retirement security. They stressed the importance of financial literacy and making caregivers aware of existing and new benefits. Experts said people are not well informed about their Social Security benefits or their options for private retirement savings. In addition, it can be difficult to understand the long-term impacts of becoming a caregiver, and experts pointed to the need for education about how the decision, along with those to leave the workforce or reduce workforce participation, could affect caregivers' long-term financial security.
One expert noted that education and services that help families proactively think about their financial security and plan for caregiving needs could be useful. Educating the public about what supports exist, new supports as they become available, and eligibility and enrollment procedures is critical to ensuring caregivers take advantage of available supports. <5. Agency Comments> We provided a draft of this report to the Department of Labor, the Department of Health and Human Services, the Department of the Treasury, and the Social Security Administration for review and comment. The Departments of Labor, Health and Human Services, and the Treasury provided technical comments, which we incorporated as appropriate. The Social Security Administration told us they had no comments on the draft report. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretaries of Labor, Health and Human Services, and Treasury, the Acting Commissioner of Social Security, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or jeszeckc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology The objectives of this review were to (1) examine what is known about the size and characteristics of the parental and spousal caregiving population, including differences among women and men; (2) examine the extent to which parental or spousal caregiving affects retirement security; and (3) identify and discuss policy options and initiatives that could improve caregivers' retirement security. This appendix provides information about the methods we used to answer these questions. Section I describes key information sources we used, and section II describes the empirical methods we used to answer the first and second research questions and the results of supplementary analyses. <6. Section I: Information Sources> To answer our research questions, we analyzed data from three nationally representative surveys: the American Time Use Survey (ATUS), the Health and Retirement Study (HRS), and Caregiving in the U.S. We also conducted an extensive literature search and interviewed relevant experts or stakeholders. This section provides a description of our data sources and the steps we took to ensure their reliability for the purposes of our review. <6.1. American Time Use Survey> To answer the first objective, we analyzed data collected through the ATUS eldercare module from 2011 through 2017, the most recent year of data available. The ATUS, which is sponsored by the Bureau of Labor Statistics and conducted by the U.S. Census Bureau, provides nationally representative estimates of how, where, and with whom Americans spend their time. Individuals interviewed for the ATUS are randomly selected from a subset of households that have completed their eighth and final month of interviews for the Current Population Survey (CPS). Starting in 2011, the ATUS began asking questions about eldercare. We weighted the data and calculated relative standard errors to reflect CPS guidance on the sample design.
A relative standard error is equal to the standard error of a survey estimate divided by the survey estimate. <6.2. Caregiving in the U.S.> We analyzed data used in the 2015 Caregiving in the U.S. study sponsored by the National Alliance for Caregiving and the AARP Public Policy Institute to estimate job impacts of parental and spousal caregiving for working caregivers. The survey was conducted through online interviews. To identify caregivers, respondents were asked whether they provided unpaid care to a relative or friend 18 years or older to help them take care of themselves. Respondents were also asked to whom they provided care, which allowed us to identify parental and spousal caregivers. We considered someone to be a parental caregiver if they provided care to a parent or a parent-in-law. We considered someone to be a spousal caregiver if they provided care to a spouse or partner. To determine the job impacts of caregiving, respondents were asked whether they were currently employed while providing care or whether they were employed in the last year while providing care and whether they experienced any of the following job impacts as a result of caregiving: went in late, left early, or took time off during the day to provide care; went from working full-time to part-time, or cut back hours; took a leave of absence; received a warning about performance or attendance at work; gave up working entirely; turned down a promotion; or lost any job benefits. All estimates derived from random samples are subject to sampling error. All percentage estimates from this survey have margins of error at the 95 percent confidence level of plus or minus 5 percentage points or less, unless otherwise noted.
We determined that the variables we used from the data we reviewed were sufficiently reliable for the purposes of describing and comparing the caregiving populations to each other or to non-caregivers. We also cited studies conducted by other researchers to supplement our findings; each of these studies was reviewed by two social scientists with expertise in research methodology and was found to be sufficiently methodologically sound for the purposes of supplementing our descriptions or comparisons. <6.5. Literature Review and Interviews> To gain an understanding of policy options that could improve caregivers' retirement security, we reviewed prior GAO work, conducted an extensive literature review of journal articles, working papers, and think-tank studies on caregiving and topics related to retirement security, and conducted preliminary interviews with experts in caregiving or retirement security. Based on this information, we identified specific actions that could affect caregivers' retirement security, which we categorized into four different categories based on common themes. We then conducted semi-structured interviews with, or received written responses from, a range of experts and stakeholders (including some of the experts we met with to identify specific policy actions) to obtain their views on the benefits and costs of the specific policy options and approaches we identified; we also asked them to identify any additional actions. We selected experts and stakeholders who are engaged in research or advocacy around caregiving or retirement issues, or those who might be affected by the actions identified. We also aimed to interview experts or stakeholders who might have different viewpoints regarding the identified actions. See table 4 for a list of the experts or stakeholders we interviewed or received written comments from over the course of our work. <7. Section II: Methods for Analyzing Parental and Spousal Caregivers' Characteristics and the Effect of Caregiving on Retirement Security> This section discusses the quantitative analysis methods we used to describe the characteristics of parental and spousal caregivers and the regression analyses we conducted to estimate the impact of caregiving on retirement security. We used ATUS and HRS data for these analyses. <7.1. Characteristics of Parental and Spousal Caregivers> To describe the characteristics of parental and spousal caregivers, we conducted descriptive analyses to examine differences between parental and spousal caregivers and the general population. For all univariate and multivariate statistics calculated using the ATUS data, we constructed variance estimates using replicate weights. The ATUS eldercare module defines caregiving as assisting or caring for an adult who needed help because of a condition related to aging. The eldercare module contains one observation per eldercare recipient, and for each recipient, includes information about the duration of care provided to the recipient, the age of the recipient, the relationship of the recipient to the care provider, and whether the care recipient and the care provider share a household. To analyze data on eldercare providers rather than recipients, we restructured the data into a single observation per care provider.
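As an illustration of this restructuring step, the simplified sketch below collapses recipient-level records into one record per care provider and flags the relationship types that feed the caregiver definitions described next. It is written in Python with the pandas library; the column names and category codes are hypothetical stand-ins, not the actual ATUS variable names.

import pandas as pd

# Hypothetical recipient-level extract: one row per eldercare recipient,
# with the responding care provider's ID and the recipient's relationship
# to the provider. Names and codes are illustrative only.
recipients = pd.DataFrame({
    "provider_id": [101, 101, 102, 103],
    "relationship": ["parent", "nonrelative", "spouse", "parent_in_law"],
})

# Collapse to one row per care provider, flagging the relationship types
# used to classify caregivers in the analysis.
providers = (
    recipients
    .assign(
        spousal=lambda d: d["relationship"].isin(["spouse", "partner"]),
        parental=lambda d: d["relationship"].isin(["parent", "parent_in_law"]),
    )
    .groupby("provider_id")
    .agg(
        spousal_caregiver=("spousal", "any"),
        parental_caregiver=("parental", "any"),
        recipients_cared_for=("relationship", "size"),
    )
    .reset_index()
)
print(providers)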
While any given care provider could provide care to multiple recipients, we defined care provider types as follows: Spousal caregivers were those who provided care to a spouse or cohabiting domestic partner, regardless of whether they also provided care to another person. Parental caregivers were those who provided care to a parent or parent-in-law, regardless of whether they also provided care to another person. Caregivers of another relative were those who provided care to someone related to them (such as a grandparent or aunt or uncle), regardless of whether they also provided care to another person. Caregivers of a non-relative were those who provided care to an unrelated person, such as a friend or neighbor, regardless of whether they also provided care to another person. Data on frequency of care how often a respondent provided eldercare is collected once for each care provider, rather than for each recipient, and therefore did not require restructuring. However, as noted above, data on the duration of care how long a respondent provided care is collected for each care recipient. Therefore, we analyzed the duration of care for the relevant care recipient (parent or spouse) using the same caregiver types as described above. For example, if someone provided both parental and spousal care, the duration of care for the relevant recipient would be used. We conducted descriptive analyses to examine parental and spousal caregivers characteristics including gender, age, race and ethnicity, marital status, level of education, employment status, and earnings. The following are important considerations of these analyses: Age. We examined caregivers who provided care to an adult recipient of any age, and, except where indicated in the text, we compared the characteristics of adult caregivers to the general adult population of all ages. We used four age categories (15 to 44, 45 to 50, 51 to 64, and 65 and older). We chose these age groups so that we could examine the characteristics of care providers with a similar age profile to those we examine in our analysis of household income and assets. Presence of a living parent. We did not have information in the ATUS to determine whether those who provided parental care had living parents; therefore, our analyses included all parental caregivers who said they provided care to a parent or parent-in-law within the past three to four months, even if the parent was deceased by the time of their interview. Certain analyses, where indicated in the text, control for the presence of a parent in the respondent s household. Earnings. ATUS provides current information on respondent s usual weekly earnings at their main job. Because we did not have current information on earnings from all jobs, for this analysis only, we restricted the sample to those respondents who have a single job. Because we did not have current information on self-employment income, we restricted our analysis of earnings to those respondents who are wage and salary workers. In our report, we present data on the unadjusted demographic and economic characteristics of caregivers and the general population. We present the unadjusted characteristics so that readers can view the actual demographic profile of caregivers. However, we also conducted logistic regression analyses that predict the likelihood of caregiving as a function of various demographic and economic characteristics and found that most characteristics are qualitatively similar in the multivariate and univariate context. 
Our independent variables for this multivariate analysis were age, education, gender, marital status, race, ethnicity, and labor force status (employed, unemployed, or not in the labor force). Where indicated, as mentioned above, we included a categorical variable for whether the respondent's parent lives in the respondent's household. Where indicated, we included quartiles of usual weekly earnings; in logistic regressions that included weekly earnings as an independent variable, the analyses were restricted to wage and salary workers with a single job. See appendix III for more detail about these logistic regression analyses. <7.2. Effect of Parental and Spousal Caregiving on Retirement Security> To analyze the impact of caregiving on retirement assets and income, we compared the assets and retirement income of caregivers and non-caregivers. We conducted separate analyses for each type of care, as described below. <7.2.1. Spousal Care> To determine the effect of spousal caregiving on retirement security, we took two approaches: 1. We conducted descriptive analyses to examine differences between spousal caregivers and non-caregivers in terms of assets at or near retirement and Social Security income during retirement. We also examined differences between spousal caregivers and non-caregivers in terms of work, education, and health status of both the person providing and the person receiving care. 2. We conducted regression analyses to examine whether observed differences in assets and Social Security income were still statistically significant when we controlled for these differences in the spousal caregiving and non-caregiving populations. In order to construct our analysis sample of spousal caregivers, we took the following steps. First, we identified married individuals at ages 65 or 66. We chose these ages because they are at or near the full retirement age at which individuals can receive unreduced Social Security benefits. We then identified the respondents that provided spousal care in the current wave or in the prior two waves of data, a 6-year period of time. To determine whether someone provided spousal care, the HRS asks the respondent whether they received help with activities of daily living (ADLs) or with instrumental activities of daily living (IADLs) and who helped with these activities. If the respondent indicated that their spouse or partner provided help, we then identified that person as a spousal caregiver. This resulted in a sample of about 5,000 observations. We found that about 10 percent of the sample provided spousal care in the 6 years we examined. We also obtained information on the asset levels, hours worked, and other descriptive attributes at ages 65 or 66. To determine the level of Social Security retirement income, we looked ahead to the household's Social Security income at age 71 using data from future waves of the HRS because some individuals may receive benefits at a later age. We found differences between spousal caregivers and non-spousal caregivers, and differences were often statistically significant (see table 5). As the table shows, spousal caregivers tended to have lower asset levels (IRA assets, non-IRA assets, or defined contribution account balances) as well as lower levels of Social Security income. Although the asset levels of spousal caregivers did not increase as much as for non-caregivers, the differences were not statistically significant.
Spousal caregivers also tended to work fewer hours, were less likely to have a college degree, and were more likely to be in self-reported poor or fair health. Spouses receiving care also had different characteristics than spouses not receiving care, indicating that the care recipient also could affect household assets. Spouses receiving care tended to work less and to be in poorer self-reported health. Spouses receiving care also worked fewer hours: 1,100 compared to 2,700 for spouses who did not receive care (see table 5). About 66 percent of spouses that received care were in self-reported fair or poor health, as opposed to 15 percent of those who did not receive care. We also compared differences between spousal caregivers and non-caregivers by gender (see table 6). We found some of the same differences between men and women spousal caregivers and non-caregivers as we did among spousal caregivers and non-caregivers more generally. However, there were also additional differences. For example, among women, growth in assets was larger among caregivers, and was statistically significant. However, differences in the cumulative hours worked were not statistically significant. In order to investigate whether observed differences in retirement assets or income might be due to factors other than caregiving, we controlled for additional variables using a multiple regression. Specifically, we generated a binary variable, which took the value of one if the respondent had provided spousal care and the value of zero if not, and examined the estimated coefficient on this variable. We ran six different regression models for each of the assets, with six different sets of controls, in addition to the spousal caregiving variable. The different models are as follows, with each building on the prior model. Unless otherwise noted, the findings presented in the report are from model 5. Model 1 estimated the differences, with only controls for the year of the wave. This helps control for the effects that would be experienced by all retirees in that year, like an economic recession. Model 2 included the controls from model 1 and also whether the person has a college degree. This helps control for the effects of education on assets and income. Model 3 included the controls from models 1 and 2 as well as earnings for the respondent in the period before we observed them caregiving. This helps control for caregivers having lower earnings before caregiving, which could affect assets and income. Model 4 included the controls from models 1, 2, and 3 and also demographic characteristics, such as race and ethnicity, which can be associated with assets or income. Model 5 included the controls from models 1, 2, 3, and 4 and also controlled for the self-reported health of the potential caregiver. Model 6 included the controls from models 1, 2, 3, 4, and 5 and also controlled for the self-reported health of the potential care recipient. Having a spouse in poor health might affect assets or income, even if no caregiving was provided. We estimated effects on four different types of assets and income at ages 65 and 66: IRA assets, non-IRA assets, defined contribution balances, and Social Security income (see table 7). We took the logarithm of the value before running the regression to normalize the distribution. We also considered the possibility that caregiving might not only affect the level of assets, but might also affect the accumulation or growth of assets.
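For reference, the level specifications described above can be written compactly in the following general form. This is an illustrative rendering consistent with the model descriptions, not the report's exact estimating equation, and the symbols are ours:

ln(Y_i) = B0 + B1 * SpousalCare_i + G * Controls_i + e_i

Here Y_i is the asset or income measure for respondent i at ages 65 or 66 (IRA assets, non-IRA assets, defined contribution balances, or Social Security income), SpousalCare_i is the binary spousal caregiving indicator, and Controls_i collects the variables added across models 1 through 6 (wave-year indicators, college degree, prior earnings, demographic characteristics, and the self-reported health of the potential caregiver and, in model 6, of the care recipient). Because the dependent variable is in logarithms, a coefficient B1 on the caregiving indicator corresponds to a percent difference of 100 * (exp(B1) - 1); for example, a coefficient of -0.5 would imply roughly 39 percent lower assets for caregivers, holding the controls constant.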
We examined asset growth by including models that estimated the effect of spousal caregiving on the growth of IRA and non-IRA assets. The table below shows the parameter estimates of the effect of spousal caregiving with different levels of controls or dependent variables. In the table, the columns represent the different models (1 through 6). The rows represent different dependent variables (different types of assets or Social Security income) for which we estimated the effect of spousal caregiving. In the table, the upper panel shows the effects on women's assets and income based on caregiving. The middle panel shows the effects on men's assets and income based on caregiving, and the final panel shows the effect when the men's and women's samples were pooled. As the table shows: For women, men, and when the sample was pooled, we found significant negative effects of spousal caregiving on both IRA and non-IRA assets. However, the coefficient decreased in magnitude when we added additional controls. For example, when we controlled for the health of the person receiving the help, the coefficient almost fell by half, from about 0.5 to about 0.25 in the case of non-IRA assets. This indicates that it is difficult to differentiate the effect of spousal caregiving from the effect of having a spouse in poor self-reported health. For women, men, and when the sample was pooled, we found significant negative effects of spousal caregiving on Social Security income. But for men, the effect was only significant at the 10 percent level for models with fewer controls. In addition, when we added controls for demographics and health, the effect for men no longer was significant. For the growth of assets, we found negative effects for non-IRA assets for women, but not for men and not for the pooled sample. However, the effects were only significant at the 10 percent level and not significant when we controlled for the health of the care recipient. In addition to the regression coefficients, we also calculated the differences in percent terms, which may be easier to interpret (see table 8). We found results that were strongest when comparing women spousal caregivers to women who did not provide spousal care. The effect for women was resilient to the inclusion of controls. In the model that included the health of the recipient (model 6), the effect ranged from a 40 percent reduction in IRA assets to an 8 percent reduction in household Social Security income. For men, we found effects for IRA assets, but the effects for Social Security income were not resilient to the inclusion of controls besides the education of the recipient. To determine the effect of parental caregiving on retirement security, we conducted descriptive analyses to examine differences between parental caregivers and non-caregivers in terms of assets at or near retirement age and Social Security income during retirement. In order to construct our analysis sample of parental caregivers, we took the following steps. First, we identified individuals at age 65 or 66 who had living parents or parents-in-law. We made this restriction because having living parents at ages 60 to 66 (and the opportunity to provide care) might be associated with higher socio-economic strata. Therefore, we did not want to compare caregivers to those who did not provide care because their parents were deceased. We then identified the respondents that provided parental care in the current wave or in the prior two waves of data.
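As an illustration of this step, the following simplified sketch (again in Python with pandas, using hypothetical column names rather than the HRS variable names) flags respondents who reported parental care in the current wave or in either of the two preceding waves:

import pandas as pd

# Hypothetical person-wave panel: one row per respondent per HRS wave,
# with a 0/1 indicator for reporting parental care in that wave.
panel = pd.DataFrame({
    "person_id":   [1, 1, 1, 2, 2, 2],
    "wave":        [10, 11, 12, 10, 11, 12],
    "parent_care": [0, 1, 0, 0, 0, 0],
}).sort_values(["person_id", "wave"])

# Flag care reported in the current wave or either of the two prior waves.
panel["care_in_last_three_waves"] = (
    panel.groupby("person_id")["parent_care"]
    .rolling(window=3, min_periods=1)
    .max()
    .reset_index(level=0, drop=True)
    .astype(bool)
)
print(panel)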
To determine who is a parental caregiver, the HRS asks respondents two separate questions. The first asks whether a respondent spent a total of 100 hours or more since their last interview or in the last 2 years helping a parent or parent-in-law with basic personal activities like dressing, eating, or bathing. The second question asks whether a respondent spent a total of 100 hours or more since their last interview or in the last 2 years helping a parent or parent-in-law with other things, such as household chores, errands, or transportation. We limited the analysis to those with living parents or in-laws. This resulted in a sample of about 2,499 observations. We found that about 57 percent of the sample provided parental care in the 6 years we examined. Unlike our analysis of spousal caregivers, we found that parental caregivers had higher levels of assets at or near retirement than non-caregivers, but differences between parental caregivers and non-caregivers were not statistically significant (see table 9). Appendix II: Characteristics of Different Types of Caregivers The following tables provide information about the characteristics of various types of eldercare providers. Appendix III: Multivariate Analysis of the Probability of Providing Care Table 13 shows the adjusted odds of providing care for people with different economic and demographic characteristics, from multivariate analyses. Models 1, 2, 3 and 4 show the adjusted odds of providing parental care, and models 5 and 6 show the adjusted odds of providing spousal care. Model 1 estimates the probability of providing parental care as a function of gender, age, marital status, race, education, and labor force status. Model 2 estimates the probability of providing parental care as a function of gender, age, marital status, race, education, and income quartiles. This model is restricted to employed workers, and therefore does not include labor force status as a regressor. Model 3 is identical to model 1, except that model 3 includes an indicator for whether the parental caregiver and the parental care recipient live in the same household. Model 4 is identical to model 2, except that model 4 includes an indicator for whether the parental caregiver and the parental care recipient live in the same household. Model 5 estimates the probability of providing spousal care as a function of gender, age, marital status, race, education, and labor force status. Model 6 estimates the probability of providing spousal care as a function of gender, age, marital status, race, education, and income quartiles. Like model 2, this model is restricted to employed workers, and therefore does not include labor force status as a regressor. Appendix IV: GAO Contact and Staff Acknowledgments <8. GAO Contact> <9. Staff Acknowledgments> In addition to the contact named above, Erin M. Godtland (Assistant Director), Nisha R. Hazra (Analyst-in-charge), Benjamin Bolitzer, Jessica Mausner, and Rhiannon C. Patterson made key contributions to this report. Also contributing to this report were Susan Aschoff, Deborah Bland, Justin Fisher, Avani Locke, Michael Naretta, Mimi Nguyen, Rachel Stoiko, Shana Wallace, and Adam Wendel. Why GAO Did This Study
According to the U.S. Census Bureau, the number of people in the United States over age 65 is expected to almost double by 2050. As Americans age, family caregivers, such as adult children and spouses, play a critical role in supporting the needs of this population. However, those who provide eldercare may risk their own long-term financial security if they reduce their workforce participation or pay for caregiving expenses. GAO was asked to provide information about parental and spousal caregivers and how caregiving might affect their retirement security.
This report (1) examines what is known about the size and characteristics of the parental and spousal caregiving population, including differences among women and men; (2) examines the extent to which parental or spousal caregiving affects retirement security; and (3) identifies and discusses policy options and initiatives that could improve caregivers' retirement security.
GAO analyzed data from three nationally representative surveys; conducted an extensive literature review; and interviewed experts who are knowledgeable about caregiving or retirement security, engaged in research or advocacy around caregiving, or represent groups that might be affected by the identified policy approaches.
What GAO Found
An estimated one in 10 Americans per year cared for a parent or spouse for some period of time from 2011 through 2017, and women were more likely than men to provide care, according to Bureau of Labor Statistics survey data. Both parental and spousal caregivers were older than the general population, with spousal caregivers generally being the oldest. In addition, spousal caregivers were less likely to have completed college or to be employed, and they had lower earnings than parental caregivers and the general population. Most parental and spousal caregivers provided care for several years, and certain groups were more likely to provide daily care, including women and minorities.
Some caregivers experienced adverse effects on their jobs and had less in retirement assets and income.
According to data from a 2015 caregiving-specific study, an estimated 68 percent of working parental and spousal caregivers experienced job impacts, such as going to work late, leaving early, or taking time off during the day to provide care. Spousal caregivers were more likely to experience job impacts than parental caregivers (81 percent compared to 65 percent, respectively).
According to 2002 to 2014 data from the Health and Retirement Study, spousal caregivers ages 59 to 66 had lower levels of retirement assets and less income than married non-caregivers of the same ages. Specifically, spousal caregivers had an estimated 50 percent less in individual retirement account (IRA) assets, 39 percent less in non-IRA assets, and 11 percent less in Social Security income. However, caregiving may not be the cause of these results as there are challenges to isolating the effect of caregiving from other factors that could affect retirement assets and income.
Expert interviews and a review of relevant literature identified a number of actions that could improve caregivers' retirement security, which GAO grouped into four policy categories. Experts identified various benefits to caregivers and others from the policy categories—as well as pointing out possible significant costs, such as fiscal concerns and employer challenges—and in general said that taking actions across categories would help address caregivers' needs over both the short-term and long-term (see figure). Several experts also said public awareness initiatives are critical to helping people understand the implications of caregiving on their retirement security. For example, they pointed to the need for education about how decisions to provide care, leave the workforce, or reduce hours could affect long-term financial security. |
<1. Background> <1.1. Clinical Trials> When patients are seeking access to investigational drugs, their first option is to consider whether they can obtain them through participation in a clinical trial. Clinical trials are a step in the drug development process through which a drug manufacturer assesses the safety and effectiveness of its investigational drug through human testing. A clinical trial can take place in a variety of settings (e.g., research hospitals, universities, and community clinics) and geographic locations, and is led by a principal investigator that is typically a physician. Manufacturers establish clinical trial eligibility criteria to define the patient population to be studied, and only patients who meet those criteria can participate. These criteria can vary depending on the drug being studied and its intended use. Patient eligibility criteria consist of both inclusion and exclusion criteria. Inclusion criteria specify the characteristics of the patient that are required for participation, such as the stage or characteristics of a disease, and typically identify a patient population in which it is expected that the manufacturer can demonstrate the effect of an investigational drug. In comparison, exclusion criteria specify the characteristics that disqualify patients from clinical trial participation and can include factors that could mask the effect of an investigational drug, such as the presence of comorbidities or simultaneous use of other drugs. Certain patient populations, such as children and pregnant women, may also be excluded from clinical trial participation because of ethical reasons. Drug manufacturers, FDA, and IRBs each have responsibilities as part of the clinical trial process. In order to test an investigational drug on human volunteers in clinical trials, a manufacturer must first submit an investigational new drug application (IND) to FDA. FDA is responsible for reviewing the IND, which includes various components such as the clinical trial protocol that describes the patient eligibility criteria, the medications and dosages to be studied, and other details. In turn, an IRB is responsible for reviewing and approving the clinical trial protocol as well as reviewing the informed consent form for the study. In general, clinical trials that involve human volunteers can begin after FDA has reviewed and allowed the IND to proceed and the IRB has given its approval. An investigational drug typically goes through three phases of clinical trials before an application is submitted to FDA for marketing approval. At any point during the clinical trials, FDA could issue a clinical hold on the existing IND that would delay the proposed clinical trials or suspend the ongoing clinical trials. When a proposed or ongoing study is placed on a complete clinical hold, the investigational drug cannot be administered to any human volunteers. Traditionally, the three clinical trial phases are the following: Phase I: This clinical trial phase generally tests the safety of the drug on about 20 to 80 healthy volunteers. The goal of this phase is to determine the drug's most frequent side effects and how it is metabolized and excreted. If the drug does not show unacceptable toxicity in the phase I clinical trials, it may move on to phase II.
Phase II: This clinical trial phase assesses the drug s safety and effectiveness on people who have a certain disease or condition, and typically the assessment is conducted on a few dozen to hundreds of volunteers. Generally, during this phase some volunteers receive the drug and others receive a control, such as a placebo. If there is evidence that the drug is effective in the phase II clinical trials, it may move on to phase III. Phase III: This clinical trial phase generally involves several hundreds to thousands of volunteers who have a certain disease or condition and gathers more information about the drug s safety and effectiveness, again while being compared to a control. If phase III clinical trials are successfully completed, the drug may move on to FDA s review and approval process. When seeking FDA s approval to market a drug in the United States, the manufacturer submits an application to FDA that includes the data from the safety and efficacy clinical trials for FDA to review. Safety data include clinical trial results about a drug s toxicity (e.g., the highest tolerable dose) and adverse events that may result from exposure to the drug. Efficacy data include information on whether the drug demonstrated a health benefit over a placebo. FDA reviews the information in the application to either approve or not approve the drug. <1.2. FDA s Expanded Access Program> If a patient seeking access to an investigational drug is not able to participate in the drug s clinical trial (e.g., because of the study s eligibility criteria or geographic location), another pathway to potentially obtain access to the drug outside of a clinical trial is through FDA s expanded access program. Under the program, a licensed physician can submit a request for access to an investigational drug for treatment use on behalf of a patient and may do so during or after phase I, II, or III of clinical trials. To allow access to an investigational drug under the program, FDA must determine that a patient has a serious or immediately life-threatening disease or condition and has no other comparable medical options, among other criteria. FDA s goals for the program are to facilitate the availability of investigational drugs when appropriate, ensure patient safety, and preserve the clinical trial development process. FDA is responsible for determining whether to allow individual requests to proceed to treatment once the manufacturer has agreed to provide access. If FDA allows the request to proceed, an IRB must approve the clinical treatment plan that is submitted as part of the individual request and review the informed consent form. The licensed physician treating a patient under expanded access would be required to report to FDA any unexpected serious adverse reactions that occur during treatment for which there is a reasonable possibility that the drug caused the reaction. <1.3. The Federal RTT Act> In 2018 the federal RTT Act established another pathway through which patients may potentially obtain access to investigational drugs outside of clinical trials. To be eligible under the law, a patient must have been diagnosed with a life-threatening disease or condition, have exhausted approved treatment options, and be unable to participate in a clinical trial involving the investigational drug. Obtaining access to investigational drugs through the federal RTT Act primarily requires the involvement of the manufacturer and treating physician. 
Similar to FDA s expanded access program, treatment can only proceed if the drug manufacturer allows the patient access to its drug. Under the federal RTT Act, the manufacturer is responsible for providing to FDA an annual summary of any use of its drugs under this pathway that includes information on any known serious adverse events. The treating physician is responsible for requesting access to the investigational drug for the patient and for obtaining written informed consent from or on behalf of the patient if the manufacturer agrees to provide access. Eligibility of an investigational drug for patient use through this pathway is based on certain criteria, including that the drug has completed phase I clinical trials, the manufacturer has not discontinued clinical development of the drug, and the drug has not been placed on a clinical hold. Unlike FDA s expanded access program, the federal RTT Act does not require the FDA or an IRB to review individual requests for access. Figure 1 shows a summary of the three pathways through which patients may obtain access to investigational drugs. <2. FDA Issued Guidance to Help Manufacturers Broaden Clinical Trial Eligibility Criteria and Two Manufacturers We Interviewed Took Steps to Broaden Their Criteria> Some patients, such as those with compromised liver and kidney function, have traditionally been excluded from clinical trials. FDA has ongoing efforts to help drug manufacturers identify the circumstances under which they could broaden their eligibility criteria to include such patients without compromising study results. These efforts include issuing recent guidance with recommendations for including certain patients in clinical trials for cancer drugs. Officials from one of the 10 drug manufacturers we interviewed told us they had broadened their eligibility criteria and another one was taking steps to do so, but these officials and others noted challenges to broadening eligibility criteria. FDA public workshop on broadening eligibility criteria. In April 2018, FDA held a public workshop with stakeholders including drug manufacturers, patient advocacy groups, and government agencies to discuss ways drug manufacturers and other investigators could safely broaden eligibility criteria for clinical trials and to inform FDA guidance on this topic. In July 2018 FDA publicly released a report summarizing the workshop, in accordance with FDARA. According to the report, stakeholders at the meeting emphasized the importance of broadening clinical trial eligibility, when appropriate, to include more patients who will likely use the drug if it is approved. Stakeholders recommended that investigators ensure that the eligibility criteria for each of their clinical trials are scientifically justifiable, rather than, for example, copying and pasting a narrow set of criteria from a prior study without considering if the exclusions are valid for scientific reasons. According to the report, this practice can unnecessarily limit eligibility for certain patients. While stakeholders commented that assessing whether eligibility criteria are scientifically justifiable may require additional time and resources, they emphasized it could lead to the removal of unnecessarily restrictive eligibility criteria and thereby increase participation among patient populations that have been typically excluded from clinical trials, such as pediatric patients and patients with compromised liver and kidney function. FDA guidance on eligibility criteria. 
In March 2019, FDA issued four new draft guidance documents and finalized one guidance document with recommendations for drug manufacturers to broaden clinical trial eligibility criteria for drugs that treat cancer. The guidance recommends that manufacturers include certain patient populations that have typically been excluded from participation. The patient populations are adolescents; pediatrics (children and adolescents); patients with human immunodeficiency virus (HIV), hepatitis B virus (HBV), or hepatitis C virus (HCV) infections; patients with brain metastases (i.e., cancer that has spread to the brain); and patients with compromised kidney, heart, or liver function, or who have a history of (or concurrent) cancer. According to FDA, the guidance documents are intended to help drug manufacturers and other investigators broaden cancer trial eligibility criteria. This will help improve patient access to investigational drugs and ensure that the results from the clinical trials are generalizable to patients likely to use the drugs once they are approved. In addition, FDA officials have noted that including broader patient populations in clinical trials can lead to new information in a drug s labeling, which will help communicate the safe and effective use of these drugs. Table 1 provides a summary of each of the five guidance documents. In June 2019, FDA issued draft guidance for manufacturers on broadening clinical trial eligibility criteria, in accordance with FDARA. The guidance applies to a wider range of clinical trials beyond cancer trials and includes recommendations to broaden eligibility criteria and considerations for the use of clinical trial designs and other methodologies to help facilitate patient participation. For example, FDA recommends that manufacturers examine each exclusion criterion to determine if it is needed to help assure the safety of trial participants or to achieve the study s objectives. If not, the manufacturer should consider eliminating or modifying the criterion to expand the study population as well as tailoring the exclusion criteria as narrowly as possible to avoid unnecessary restrictions to the study population. Two manufacturers efforts to broaden eligibility criteria. Officials from one of the 10 drug manufacturers we interviewed told us they broadened their clinical trial eligibility criteria and another manufacturer we interviewed reported that it was taking steps to do so. These two manufacturers told us they were taking these steps in part because both believe it will facilitate the drug approval process. Officials from one manufacturer stated that they broadened their eligibility criteria by removing exclusions after determining they were not critical to clinical trial designs, including exclusions related to liver function, infections (e.g., HIV), and the use of other medications (e.g., steroids). The officials explained that, since 2015, they have systematically evaluated their eligibility criteria to ensure that they do not unnecessarily exclude patient populations from their clinical trials. Officials from the second manufacturer told us they have begun evaluating whether to remove certain exclusion criteria that they typically use in clinical trials, and added that their efforts are partially in response to FDA s 2018 public workshop report, as described above. 
For example, the manufacturer is reviewing its exclusion of adolescents in prior clinical trials and officials told us they will likely include adolescents in an upcoming study if they determine that patient safety would not be compromised. Officials from both manufacturers stated that broader eligibility criteria will allow more patients to access investigational drugs through clinical trial participation. It can also, officials said, help them obtain FDA approval for a drug that extends to a wider range of patients, if the drug is found to be safe and effective. Further, officials from one of the two manufacturers noted that broader eligibility criteria, such as criteria that include patients with infections, could help streamline the process for conducting clinical trials for example, by eliminating the need to conduct clinical testing to screen for the presence of infections. Although most drug manufacturers in our review did not report efforts to broaden their eligibility criteria, many noted efforts to address other barriers to clinical trial participation. For example, to address geographic barriers, officials from six of the 10 manufacturers told us they help cover costs for patients to travel to clinical trial sites, such as by reimbursing transportation and hotel costs for patients who travel long distances. In addition, officials from one manufacturer said they completed a pilot clinical trial on diabetes in 2019 that used decentralized trial locations in three states, such as retail health clinics and patients homes, to help patients overcome challenges with obtaining transportation to trial sites. Similarly, within the next 2 years, another manufacturer is planning to conduct a pilot clinical trial that is fully remote and expects the design to improve patient participation in rural communities. To address the lack of information about upcoming and ongoing clinical trials that is available to and tailored to patients, two manufacturers launched clinical trial registries in 2015 and 2016, respectively. Officials from one of the manufacturers stated they designed their registry to bridge the gap between the information that patients want about clinical trials (e.g., information targeted to medical conditions that uses basic terminology), and what is available in ClinicalTrials.gov, a federal database that includes information on privately and publicly funded clinical trial studies. Officials explained that ClinicalTrials.gov is, in general, more targeted to physicians. In addition, to address barriers associated with the mistrust of research stemming from historical events among African-Americans and other communities, one manufacturer has several ongoing efforts to increase the participation of racially and ethnically diverse populations in its clinical trials. For example, the manufacturer conducts workshops to train minority investigators who conduct clinical trials and requires certain clinical trial sites to be located in areas with minority patient populations of more than 25 percent. Challenges with broadening eligibility criteria. Officials from four of the 10 drug manufacturers we interviewed including the two taking steps to broaden their clinical trial eligibility criteria told us broadening eligibility criteria is challenging. They stated that broader criteria must be carefully balanced with the need to collect evidence from a well-defined population. 
Officials from one manufacturer explained that removing standard exclusion criteria, such as excluding patients who use other medications, could interfere with the success of their clinical trial if those medications make it difficult to identify the effects of the studied drug. In addition, officials from another manufacturer emphasized that determining whether to remove exclusion criteria takes time and resources because it involves additional study, which could slow down the clinical development of a drug. <3. FDA Took Several Recent Actions to Facilitate Access to Investigational Drugs Outside of Clinical Trials> <3.1. FDA Simplified the Institutional Review Board Process and Launched a Pilot Program to Facilitate Access to Investigational Drugs Outside of Clinical Trials> To facilitate access to investigational drugs outside of clinical trials, FDA has simplified its expanded access program s IRB review requirements for individual patient requests. FDA made this change in October 2017, in accordance with a provision in FDARA. This provision addressed concerns that FDA s requirement to convene a full IRB to review an expanded access request could result in delays of approvals because full IRBs may not meet regularly. Under the revised process, FDA now allows for a waiver of the requirement for full IRB review when concurrence is obtained by the IRB chair or another designated member. According to FDA officials, the updated process will help reduce the potential burden for physicians, who are responsible for obtaining IRB approval, while still protecting patients. In addition, to further simplify its expanded access process for individual patient requests, in June 2019 FDA launched a pilot program called Project Facilitate for oncologists and other health care professionals that treat patients with cancer. According to FDA officials, the pilot program is focused on oncology because the agency receives a large number of individual expanded access requests from oncologists. Under the pilot program, FDA established a new call center that provides a single point of contact where FDA staff are available to answer questions, assist in filling out appropriate paperwork, and facilitate the overall process for requesting and obtaining access to investigational drugs. For example, FDA officials told us that FDA staff may assist oncologists in locating an IRB, if needed. As part of the pilot program, FDA will follow up on individual requests and gather data, such as how many patients received investigational drugs, and if not, why the requests were denied by manufacturers. According to FDA, the agency can use these data to determine how the process is benefiting patients. Twenty of the stakeholders we interviewed were familiar with FDA s simplified IRB review requirements, and of those, 18 told us these updates were helpful for physicians and patients. For example, officials from one drug manufacturer commented that the new IRB review requirements reduce the amount of time it takes for patients to obtain access to investigational drugs, which is especially important for patients who are very sick. In addition, we spoke to 12 stakeholders about FDA s plans for its pilot program, and of those, nine generally had positive views of the agency s planned activities. Officials from one manufacturer explained that the pilot program could help reduce the burden on oncologists seeking access to investigational drugs for their patients through the expanded access program. 
On the other hand, the officials from this same manufacturer raised concerns about the potential for FDA to intentionally or unintentionally pressure companies to make their investigational drugs available to patients, should FDA have increased involvement with drug manufacturers as part of the pilot program. <3.2. FDA Increased Communication about the Expanded Access Program and the Federal RTT Act to Facilitate Access to Investigational Drugs Outside of Clinical Trials> FDA has also taken recent actions to facilitate access to investigational drugs outside of clinical trials by increasing its communication about the expanded access program and the federal RTT Act. FDA s increased communication about the expanded access program. In November 2018, FDA updated the web pages for its expanded access program in response to findings from an external assessment that the web pages were difficult to navigate and contained unclear information. FDA created separate web pages for patients, physicians, and drug manufacturers, and tailored information about the expanded access process to each of these stakeholders. In addition, FDA added a new web page with information that is commonly requested by physicians and patients, such as the instructions for completing the form for submitting individual requests and definitions of keywords associated with the expanded access process (e.g., IRB, informed consent). In addition, in October 2017, in response to a recommendation in our July 2017 report, FDA clarified its guidance for drug manufacturers on how the agency reviews adverse events that occur under FDA s expanded access program. In the 2017 report, we found that some drug manufacturers were concerned that use of adverse event data may influence FDA in making final approval decisions, and that this possibility could contribute to a manufacturer deciding not to grant patients access to their drugs through the expanded access program. In response, we recommended that FDA clearly communicate how the agency will use adverse event data from expanded access use when reviewing drugs and biologics for approval. FDA s updated guidance states that FDA is not aware of instances in which adverse event information prevented the agency from approving a drug, and that it is very rare for FDA to place a clinical hold on an investigational drug due to adverse events observed during expanded access treatment. The guidance also explains that several factors make it difficult for FDA to link an adverse event to the expanded use of a drug being considered for approval. For example, the guidance acknowledges that the use of investigational drugs though the expanded access program generally occurs outside of a controlled clinical trial setting and patients receiving such drugs may be sicker than patients participating in a clinical trial, making it more difficult to determine whether the use of the investigational drug has led to the adverse event. In responding to questions about increased FDA communication about the expanded access program, 19 of the stakeholders we interviewed were familiar with FDA s updated expanded access web pages, and of those, 16 told us they were an improvement. Officials from one physician organization stated that the updated web pages were easier to navigate than the previous web pages and presented information about the process more clearly. Among the 10 manufacturers we interviewed, we found varying views of FDA s updated guidance on the use of adverse event data. 
Officials from seven of the 10 manufacturers viewed the updated guidance as an improvement. Officials from one of the seven explained that it contributed to their company s decision to allow access to investigational drugs, when appropriate. Officials from two of the 10 manufacturers did not view the guidance as an improvement. Officials from both manufacturers stated that they still had significant concerns about the potential use of adverse event data by FDA to adversely affect the development of their investigational drugs, such as being used to issue a clinical hold. An official from one of the two manufacturers commented that these concerns remained despite FDA s statement in the guidance that it is difficult for FDA to link expanded access use to a particular adverse event. In addition, officials from two other manufacturers who viewed the guidance as an improvement similarly expressed remaining concerns that adverse events could negatively affect the development of their investigational drugs. One manufacturer was unfamiliar with the updated guidance. Further, officials from four of the 10 drug manufacturers we interviewed, including two who viewed the updated guidance as an improvement, said they believed that manufacturers concerns about this issue may never be fully resolved even with additional FDA guidance. In other comments related to FDA s communication on its use of adverse events data from the expanded access program, some drug manufacturers we interviewed noted the merits of using efficacy and safety data from the expanded access program to inform FDA s drug approval decisions. Officials from two of the 10 manufacturers told us they believe that FDA s potential use of adverse event data from expanded access use, but not efficacy data, would be unfair. Officials from one of these two manufacturers cited FDA s updated guidance on adverse events as contributing to their view, referring to FDA s statement that it is unlikely that FDA s program would yield data that is useful to FDA in considering an investigational drug s effectiveness. However, FDA officials told us that efficacy and safety data from the expanded access program have been used to support drug approvals in several instances. For example, in January 2018 FDA approved the drug Lutathera to treat rare tumors in the pancreas and gastrointestinal tract using efficacy and safety data the manufacturer submitted to FDA from a subset of the roughly 1,200 patients who received the drug through the expanded access program. Officials from four of the 10 manufacturers expressed interest in discussing further with FDA how the agency would evaluate efficacy and safety data from the expanded access program and use these data to help support a drug s approval and other regulatory decisions. FDA s communication about the federal RTT Act. In November 2018, FDA launched a new federal RTT web page that outlines both the eligibility requirements for patients interested in seeking access to investigational drugs and the criteria that must be met for an investigational drug to be eligible for use through this pathway. For example, the web page states that patients must be diagnosed with a life- threatening disease or condition to be eligible to access investigational drugs under the federal RTT pathway. Further, the agency plans to issue proposed regulations in September 2019 to implement the federal RTT Act requirement for manufacturers to submit an annual summary to FDA on any use of their investigational drugs under this pathway. 
The regulations will include a due date for manufacturers to submit the annual summaries as well as information on what they are to contain, according to FDA. Fourteen of the stakeholders we interviewed were familiar with FDA s new web page on the federal RTT Act, and among those, eight stated that it communicated useful and balanced information for physicians and patients. Officials from the remaining six stakeholders told us they did not find it helpful for physicians or patients. For example, officials from two stakeholders (including one drug manufacturer) commented at the time of our review that the web page could be misleading to some patients if they interpret the federal RTT Act to mean that manufacturers must provide access to their investigational drugs. Both added that FDA should more clearly communicate on the web page that there is no such requirement. In addition, officials from another stakeholder stated at the time of our review that FDA should explain on the web page the agency s role in implementing the federal RTT Act. In May 2019 FDA clarified on its web page that the federal RTT Act does not require manufacturers to provide patients access to their investigational drugs and that FDA s role includes posting a consolidated annual summary report on the use of investigational drugs through the federal RTT pathway. <4. Most Selected Manufacturers Communicated Whether They Consider Requests for Access to Investigational Drugs Outside of Clinical Trials and Conditions for Approval> Most of the 29 drug manufacturers in our review used their websites to communicate to patients and physicians whether they would consider individual requests for access to their investigational drugs outside of clinical trials. Among those that would consider requests, most also communicated the conditions under which they would review requests and grant access. Manufacturers consideration of requests for access. Our review of drug manufacturers websites between January 31, 2019, and March 12, 2019, found that 23 of the 29 manufacturers in our review used their websites to communicate whether they considered individual requests for access to investigational drugs outside of clinical trials. In communicating this information, 19 of the 23 manufacturers stated they were willing to consider requests, while the other four stated they were not considering requests. The remaining six of the 29 manufacturers did not communicate information about whether they would consider requests for access to investigational drugs outside of clinical trials at the time of our review, but officials from all six told us they were in the process of developing content on this topic that they intended to post on their websites. Information communicated by manufacturers that consider requests. Among the 19 manufacturers willing to consider requests for access to investigational drugs outside of clinical trials, all communicated on their websites that they required physicians to submit requests on behalf of their patients and provided information on how physicians should submit these requests. In addition, 18 manufacturers communicated an estimated time frame within which they would respond to requests. The manufacturers provided additional information, including the following: Eighteen communicated information about the type of patient for whom they would consider granting access. 
Eighteen stated that patients must have a serious or life- threatening disease or condition; have no comparable or satisfactory alternative therapies available; and be unable to participate in a clinical trial to be eligible to obtain access. In addition, 17 stated that the treating physician must determine for the patient seeking access that the risk of taking the investigational drug is not greater than the anticipated benefit. Fifteen communicated other factors they would take into account during their review of requests. These factors included the following: Ten stated that the supply of their investigational drugs was a consideration. That is, a manufacturer must have a sufficient supply of the investigational drug to support the drug s clinical development before granting access to patients outside of clinical trials. Five referred to specific drugs to which they would consider granting access when describing the conditions under which they would consider reviewing requests. For example, one manufacturer stated that it would consider requests to access three of its investigational drugs (intended to treat bladder cancer, influenza, and HIV). One manufacturer communicated that after its initial review of individual requests, it uses an external advisory committee to further evaluate certain requests and ensure they are evaluated in an ethical and fair manner. The committee, which includes bioethical experts, physicians and patient representatives, makes recommendations to the manufacturer about providing access to individual patients. Many of the 19 manufacturers that communicated they were willing to consider individual requests for access stated that after they have approved a request they also required external entities to review the request. These included the following: Thirteen stated they require the relevant regulatory authority to review requests. Of these, six specified that they require FDA to review requests for access in the United States. One of these six explained that it required a review by FDA to ensure all available safety data for the investigational drug were considered, and added that FDA is uniquely aware of such safety data. Five stated they require the review of a research ethics committee or an IRB. Information communicated by manufacturers that do not consider requests. Among the four manufacturers that communicated on their websites they were not considering requests for access to investigational drugs outside of clinical trials at the time of our review, two provided reasons for their decision. Both cited safety concerns; for example, one explained that it wanted to ensure its investigational drugs were administered to patients only through clinical trials where safety could be closely monitored. One also cited limited resources, stating that it chose to focus its resources solely on conducting clinical trials. Both of the manufacturers that provided reasons for not considering requests for access communicated that they will periodically re-evaluate their policies. <5. Agency Comments> We provided a draft of this report to HHS for comment and HHS provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Health and Human Services, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-7114 or dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I. Appendix I: GAO Contact and Staff Acknowledgments <6. GAO Contact and Staff Acknowledgments> John E. Dicken at (202) 512-7114 or dickenj@gao.gov. In addition to the contact named above, Gerardine Brennan, Assistant Director; Pamela Dooley, Analyst-in-Charge; Craig Gertsch; Gay Hee Lee; and Moira Lenox made key contributions to this report. Also contributing were George Bogart, Laurie Pachter, and Ethiene Salgado-Rodriguez.
Why GAO Did This Study
When investigational drugs show promise for treating serious or life-threatening diseases, patients are often interested in obtaining access to them. Congress included a provision in the FDA Reauthorization Act of 2017 for GAO to review actions taken to facilitate access to these drugs.
This report describes (1) actions FDA and drug manufacturers have taken to broaden eligibility criteria for clinical trials, (2) actions FDA has taken to facilitate access to investigational drugs outside of clinical trials, and (3) information drug manufacturers have communicated to patients and physicians about access to investigational drugs outside of clinical trials.
GAO reviewed laws, regulations, FDA documents, and manufacturer policies and interviewed FDA officials and a non-generalizable selection of 10 manufacturers and 14 other stakeholders (including patient advocacy and physician organizations). The manufacturers were developing drugs to treat serious or life-threatening diseases, and were selected for variation in company size. GAO also reviewed information that a non-generalizable selection of 29 manufacturers communicated through their websites about access to investigational drugs outside of clinical trials. GAO selected manufacturers for variation in the type of serious diseases their investigational drugs were intended to treat, company size, and other factors.
HHS provided technical comments on a draft of this report, which GAO incorporated as appropriate.
What GAO Found
Individuals may access investigational drugs—those not yet approved for marketing in the United States by the Food and Drug Administration (FDA)—by participating in clinical trials conducted by drug manufacturers to test drug effectiveness and safety. FDA has ongoing efforts to help manufacturers identify the circumstances under which they could broaden clinical trial eligibility criteria to include patients who are commonly excluded, such as pediatric patients and patients with impaired liver and kidney function, without compromising study results.
FDA issued guidance in March 2019 with recommendations on ways manufacturers could broaden eligibility criteria for cancer clinical trials, when clinically appropriate. In June 2019, FDA issued related guidance that applies to a wider range of clinical trials beyond cancer trials.
One of the 10 manufacturers GAO interviewed reported broadening its eligibility criteria to include more patients, such as those with HIV. Another manufacturer has begun reviewing its eligibility criteria and expects to include adolescents, as appropriate, in future studies—a population that has generally been excluded from trials. However, these and two other manufacturers cited challenges in these efforts. One stated that expanding participation to patients who use other medications, for example, could adversely affect a study's ability to identify the effects of the studied drug.
Outside of clinical trials, patients with certain medical conditions, who are unable to enroll in a clinical trial, and have no other comparable medical options, may request to obtain access to investigational drugs. This can occur under FDA's expanded access program, or through a 2018 federal law known as “Right to Try.” Under either pathway, a patient can only access the investigational drug if its manufacturer agrees to the request. FDA has taken steps to facilitate access to investigational drugs outside of clinical trials, and most manufacturers in GAO's review communicated information to patients and physicians through their websites about how to access their investigational drugs outside of clinical trials. For example:
Since 2017, FDA took steps to simplify its expanded access program to make it easier to participate. In addition, to address concerns raised by manufacturers, FDA clarified guidance on how it would review data resulting from the program. Seven of the 10 manufacturers GAO interviewed viewed the guidance as an improvement.
GAO's review of information communicated by 29 manufacturers on their websites found that 23 had policies about accessing investigational drugs outside of clinical trials. At the time of GAO's review, 19 of the 23 stated they would consider individual requests for access, while the other four stated they would not. More than half of the manufacturers stated that if they approve a request, they require additional steps, such as FDA review of the request.
<1. Background Definition and Purpose of an ACSA> The Secretary of Defense may enter into ACSAs with authorized countries and international organizations for the reciprocal provision of logistic support, supplies, and services with the military forces of that country or international organization. DOD describes ACSAs as bilateral agreements that allow exchanges of logistic support, supplies, and services between the United States and partners in return for reimbursement in the form of cash or the reciprocal provision of support. As of February 2020, DOD had signed 125 ACSAs, including five that had expired, which span DOD's six geographic areas of responsibility identified in table 1. For a full list of past and present ACSA partners, see appendix II. According to DOD, it uses ACSAs primarily during wartime, combined exercises, training, deployments, contingency operations, humanitarian or foreign disaster relief operations, certain peace operations under the United Nations Charter, or for unforeseen or exigent circumstances. For example, ACSAs can give a commander increased flexibility to address logistical shortfalls in a contingency environment. DOD officials noted that the agreements provide DOD with flexibility, enhanced readiness at minimal cost, and increased military effectiveness by allowing partners and allies to access U.S. logistics capabilities and practice mutual support procedures, which is particularly valuable in planning international exercises and coalition operations. For example, DOD established ACSAs with 70 new partners during Operations Enduring Freedom and Iraqi Freedom, from 2001 through 2014, and signed an additional 15 ACSAs from 2015 through February 2020. Figure 1 shows the cumulative growth in the number of ACSAs over time. <2. Process to Establish an ACSA> Under 10 U.S.C. 2342, DOD is authorized to enter into ACSAs with governments of NATO countries, subsidiary bodies of NATO, and international organizations. DOD can also enter into ACSAs with governments of non-NATO countries, but must first designate the country eligible for an ACSA by following a process that includes consulting with State, determining that the designation is in the interests of national security, and notifying Congress of its intent to make the designation. As of December 2019, the OUSD (A&S) is DOD's focal point for establishing ACSAs; officials from that office request State's authority to negotiate an ACSA and coordinate designees within DOD, typically Combatant Command staff, to negotiate and sign ACSAs. DOD officials told us that the amount of time it takes to negotiate and sign an ACSA varies because of a number of factors. For example, a lack of urgency or the complicated legal context of a potential partner can extend negotiations. As a result, negotiation and signature have taken anywhere from less than 1 year to more than 25 years. After an agreement is signed, State is required to notify Congress about international agreements that enter into force, including ACSAs. Although, according to agency documentation, most ACSAs enter into force at the time they are signed, an ACSA may enter into force on a later date, depending on the conditions outlined in each agreement. According to State officials, ACSAs, like some other international agreements, may be applied provisionally (the agreement has been signed and transactions may be executed) prior to entering into force.
Figure 2 illustrates the process by which DOD and State generally establish new ACSAs. <3. ACSA Implementation> The Secretary of Defense generally delegates the responsibilities of managing ACSA implementation to various components including the OUSD (A&S), Chairman of the Joint Chiefs of Staff (CJCS), defense agencies, military departments and service components, Combatant Commands, and subordinate unified commands. Responsibilities and procedures for implementing ACSA transactions are set forth in DOD guidance and regulations including CJCS Instruction (CJCSI) 2120.01D, DOD Directive 2010.9, and DOD s Financial Management Regulation. For example, CJCSI 2120.01D calls for military departments and defense agencies to appoint primary ACSA program managers charged with maintaining financial and program records of all ACSA transactions. In addition to the primary guidance documents, DOD policy and legislation have modified ACSA implementation over time. For example, DOD issued memorandums in 2017, 2018, and 2019 to update or clarify requirements for managing ACSAs, and in October 2018, officials noted that DOD had begun a process to update each of the three primary guidance documents listed above. In addition, the NDAA for Fiscal Year 2020 was enacted on December 20, 2019, and Section 1203 modified the authorities related to ACSAs. The law includes a number of new requirements, including a requirement for the Secretary of Defense to designate an official who will have primary responsibility for overseeing and monitoring the implementation of ACSAs in coordination with the Under Secretary of Defense for Policy. Further, the law requires that, among other things, the Secretary of Defense shall prescribe regulations to ensure that adequate processes and controls are in place to provide for the accurate accounting of logistic support, supplies, and services received or provided under ACSAs. The legislation also instituted a new congressional notification requirement that DOD may not enter into an ACSA without notifying the appropriate congressional committees of its intent to do so at least 30 days in advance. DOD uses AGATRS as its system of record to create, track, and manage transactions executed under ACSAs. CJCSI 2120.01D requires the use of AGATRS to fully document all ACSA transfers of logistic support, supplies, and services. DLA has managed AGATRS since 2013, when, according to DLA officials, an updated version of the system was launched and historical data archived. As of November 2019, AGATRS included records of more than 31,000 ACSA sales and acquisitions orders authorized from fiscal years 2014 through 2019. According to DOD officials, AGATRS is the best source of automated information on ACSA transactions. According to DOD, it authorized more than 22,000 ACSA sale orders from October 2013 through September 2019 that provided approximately $5 billion of logistic support, supplies, and services for items ranging from water and fuel to bullets and munitions. Figure 3 shows examples of the types of support provided through ACSAs. According to AGATRS, more than 70 different DOD components executed ACSA order sales or acquisitions from October 2013 through September 2019. However, the seven components shown in table 2 accounted for about 92 percent of the reported total value and about 79 percent of the reported order volume. <4. Retransfers of ACSA Logistic Support, Supplies, and Services> In addition to direct transactions, the retransfer of support may also occur under ACSAs. 
CJCSI 2120.01D describes these retransfers as transfers from the original recipient to another foreign government or international organization, or to any entity other than the officers, employees, or agents of the foreign country or international organization whose military originally received the logistic support, supplies, or services. DOD Directive 2010.9 prohibits the retransfer of ACSA support without the prior written consent of the U.S. government. DOD records indicate that it approved 11 ACSA retransfers with six different partners from 2003 through 2019. These approvals, listed in appendix III, involved at least 15 final foreign recipients. Eight of these recipients did not have an ACSA at the time of DOD s authorization for a retransfer. For example, before DOD signed an ACSA with Saudi Arabia in 2016, DOD authorized a retransfer of general purpose bombs from the United Arab Emirates to Saudi Arabia to support its activities in Yemen. In August 2018, Congress amended 10 U.S.C. 2342 to prohibit DOD from using an ACSA to facilitate the transfer of logistic support, supplies, and services to a final recipient that has not signed an ACSA with DOD. <5. DOD and State Have Generally Provided Required Information about ACSAs to Congress, but Have Recordkeeping Gaps and Timeliness Issues DOD Notified Congress of Its Intent to Designate at Least 78 of 104 Non-NATO Partners for ACSAs, but Does Not Have Documentation of Remaining Notifications> DOD is responsible for providing information to Congress regarding its intent to designate non-NATO countries for an ACSA. Specifically, under 10 U.S.C. 2342, DOD must notify Congress of its intent to designate the government of a non-NATO country for an ACSA at least 30 days before making the designation. Of the 125 ACSAs DOD had signed as of February 2020, 21 were agreements with NATO countries and international organizations, which do not require congressional notification. For the remaining 104 agreements signed with the governments of non-NATO countries, DOD should have notified Congress at least 30 days before designating each country eligible for an ACSA. DOD records indicate that DOD transmitted notifications of its intent to designate at least 78 of the 104 countries as eligible for ACSAs. For these 78 ACSAs, we confirmed that notifications to Congress were dated on time, that is, at least 30 days before DOD signed the relevant agreements. However, as shown in figure 4, DOD did not have records of 26 of the 104 agreements for which DOD should have notified Congress, so we could not confirm whether the notifications had occurred. DOD estimates that these 26 notifications would have occurred between 1993 and 2009, with 20 being before or during 1996. According to DOD officials, DOD s ACSA recordkeeping procedures are not documented and have changed over time, which contributes to gaps in DOD notification records. DOD officials told us that while they had endeavored to save notifications and signed agreements, they had not systematically tracked notifications for each partner. Neither DOD Directive 2010.9 nor CJCSI 2120.01D specifically call for DOD to track ACSA signature or congressional notification transmittal dates, but DOD officials noted that recordkeeping procedures such as scanning and maintaining documents should be part of commonly understood proper administration practices. In addition, several different DOD offices have been responsible for various aspects of ACSA management over the years. 
Each office, according to DOD officials, may have had different recordkeeping practices, including some that predated electronic records. Further, DOD officials had difficulties finding paperwork from offices not currently involved with ACSAs and those that no longer exist. Poor recordkeeping has affected DOD s ability to provide Congress with full and accurate information about ACSAs. For example, DOD s January 2019 report to Congress on ACSA activities included inaccurate and incomplete information on notification and signature dates, including some for which DOD did not have documentation. DOD included estimated Congressional notification transmittal dates for the agreements for which it could not locate supporting documentation. Moreover, DOD included incorrect ACSA signature dates in the report for 16 other agreements. DOD officials responsible for compiling the report told us that they made some of these errors because they used the inaccurate data available at the time. In November 2019, DOD officials told us that they intended to create a consolidated list of ACSA partners including the date of eligibility designations and agreement signatures for each partner to be kept updated through a joint effort by OUSD (A&S) and the Joint Staff. As of January 2020, DOD had not formalized these intentions in written guidance. Documenting and implementing recordkeeping procedures would help ensure that DOD can report accurate and complete information to Congress. <6. State Provided Late Notifications to Congress for About a Third of the ACSAs That Had Entered into Force> While DOD is required to notify Congress about non-NATO partner eligibility for ACSAs, under 1 U.S.C. 112b (commonly referred to as the Case-Zablocki Act ), State is required to notify Congress when any international agreement to which the United States is a party, other than a treaty, enters into force. Under the Case-Zablocki Act, State is required to provide this notification as soon as practicable after the agreement has entered into force, but in no event later than 60 days thereafter. In addition, the law requires any department or agency of the U.S. government that enters into any international agreement on behalf of the United States to transmit the text of such an agreement to State no later than 20 days after such agreement has been signed. Of the 125 signed ACSAs, State and DOD officials confirmed that, as of February 2020, 118 had entered into force and, as such, required State notification to Congress. State s Office of the Assistant Legal Advisor s Office of Treaty Affairs is responsible for receiving texts of signed international agreements from the agencies that signed them, for recordkeeping associated with such agreements, and for transmitting the texts of such agreements to Congress in accordance with the Case-Zablocki Act. As of February 2020, records for the 118 ACSAs that had entered into force indicate that State s notifications to Congress for 68 (or 58 percent) were dated within 60 days, as required. However, 48 (or 41 percent) of the 118 notifications were late, that is, dated more than 60 days after entry into force, as shown in figure 5. According to agency records, these 48 agreements entered into force between 1995 and 2019. For two agreements that entered into force in 1983 and 2002, State records are insufficient to determine whether or not State notified Congress. 
For most of the 48 State notifications dated after the 60-day deadline, State attributed the delays to untimely DOD delivery of required information to State. Specifically, 32 (or 74 percent) of the 43 late notifications that included a reason for delayed transmittal attributed the cause to DOD elements having provided late or incomplete agreement information to State s Treaty Office. As described above, because DOD enters into ACSAs on behalf of the United States, it must provide State the text of the agreements no later than 20 days after signing or otherwise concluding such an agreement, to facilitate State s required notifications to Congress. However, DOD officials confirmed that they provided information on some ACSAs to State more than 20 days after signature. DOD officials and our analysis identified multiple causes that contributed to DOD s providing information on newly signed ACSAs to State after the 20-day deadline: Procedural complications. Procedural complications can affect DOD s ability to provide information to State within 20 days. For example, DOD officials noted that the standard DOD process to send a memo to State sometimes takes more than 20 days to complete. Further, for some agreements, DOD provided some information to State within 20 days, but did not include one or more necessary elements such as a language certification if the agreement was signed in a language other than English to determine whether such an agreement had been concluded. DOD officials told us that a significant amount of time can pass before they compile all the information State needs from DOD, resulting in State s inability to send notifications within 60 days of entry into force, as required. Lack of experience. DOD officials told us that the relevant DOD officials had overlooked the responsibility to send information to State about newly signed ACSAs, at times because of a lack of experience. For example, they explained that DOD missed the 20-day deadline to send information to State about the 2017 ACSA signing with Mexico because it had been 10 years since officials from DOD s Northern Command had negotiated an ACSA, and the officials had overlooked the requirement. Regarding two ACSAs about which State had not notified Congress as of September 2019, State officials told us they did not know those agreements had entered into force until we asked about their status. Subsequently, State notified Congress about one of these agreements in October 2019. For the second, as of February 2020, DOD had begun providing related information to State, and State was continuing to review related documentation to confirm that the agreement had entered into force. Inconsistent guidance. Our review of DOD s guidance found inconsistent language describing when DOD should provide information to State about new ACSAs that could affect DOD s transfer of such information. Specifically, the CJCSIs on international agreements and ACSAs note that DOD should provide State with information on new ACSAs no later than 20 days after an agreement is signed. However, DOD Directives on international agreements and ACSAs indicate that the relevant deadline is no later than 20 days after an agreement enters into force, which can be days or years after an ACSA is signed. DOD officials noted that the officials who drafted the guidance may not have understood the difference between the signing and entry into force of international agreements. Limitations in training. 
As of December 2019, DOD's standard online training on ACSAs did not address responsibilities to share information about newly signed agreements with State. Specifically, while DOD's two required training courses on ACSAs include some aspects of negotiating and signing new agreements, neither mentions DOD's responsibility to report signed ACSAs to State. According to DOD officials, the requirement may be covered during in-person training conducted by personnel from DOD's Office of General Counsel for DOD's combatant command officials. Congress depends on State and DOD for information to oversee the use of ACSAs, which DOD officials have cited as important tools for furthering national security interests, particularly involving activities with broad coalitions. Without timely notification of entry into force, Congress will not have full information about the countries and international organizations to and from which DOD can, and may already be, using ACSAs to transfer logistic support, supplies, and services. DOD Lacks Quality Data to Track ACSA Orders, and Has Not Received Reimbursement for Thousands of Orders <7. DOD Lacks Quality Data to Track ACSA Orders> CJCS Instruction 2120.01D contains policy and procedural guidance concerning the use of ACSA authorities, and addresses, among other things, maintenance of ACSA transaction orders. Specifically, the instruction establishes AGATRS as DOD's system of record for the Joint Staff, Combatant Commands, and the Military Services to manage ACSA orders; describes processes to execute an ACSA order; and notes that AGATRS will be used to fully record all transfers of ACSA support, including documentation such as invoices. Additionally, federal standards for internal control state that management should use quality information to make informed decisions and achieve agency objectives. Quality information is defined as information that is accurate, complete, and provided on a timely basis, among other attributes, and should include relevant data obtained from reliable sources. However, based on our analysis of a generalizable sample of orders, we found that DOD's ACSA system of record lacked quality data to track the status of ACSA order reimbursement. First, we found that DOD incorrectly recorded the reimbursement status in AGATRS of an estimated 7.3 percent of ACSA orders authorized from October 2013 through March 2018. For example, DOD recorded three of the 227 orders in our sample as completed even though it had not received full reimbursement for them, including at least one order that it had ceased processing. DOD records included five orders recorded as incomplete despite having received full reimbursement. We also identified six orders that DOD either improperly categorized as ACSA transactions or should have cancelled because the related transaction never took place or was a duplicate. Second, DOD could not determine from AGATRS the reimbursement status of an estimated 12.2 percent of ACSA orders authorized from October 2013 through March 2018. Based on our generalizable sample, DOD would not be able to locate records to verify the reimbursement status of an estimated 1,100 ACSA orders authorized during this period. With regard to the reimbursement status of these orders, a DOD official noted that DOD could not determine the status based on available information. As a result, DOD does not know if the orders have been reimbursed, were processed for reimbursement, or even took place.
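Neither finding reflects a conceptually difficult check; both stem from the absence of a routine cross-check between AGATRS and the financial systems that actually process reimbursement, a gap the following paragraphs describe. As a purely illustrative sketch, the Python below shows one way a periodic reconciliation could flag such discrepancies. The file layouts, column names, and status values are assumptions for illustration only and do not describe the actual interface of AGATRS or any DOD financial system.

```python
"""
Illustrative sketch only. It assumes two hypothetical CSV extracts -- one from
AGATRS, one from a financial processing system -- that share an order_id
column; the column names and status values are assumptions, not a description
of either system's actual data model.
"""
import csv
from datetime import date, timedelta


def load_orders(path, key="order_id"):
    """Read a CSV extract into a dictionary keyed by order number."""
    with open(path, newline="") as f:
        return {row[key]: row for row in csv.DictReader(f)}


def reconcile(agatrs_path, finance_path, today=None):
    """Return (order_id, issue) pairs for records the two sources disagree on."""
    today = today or date.today()
    agatrs = load_orders(agatrs_path)
    finance = load_orders(finance_path)
    findings = []

    for order_id, order in agatrs.items():
        billing = finance.get(order_id)
        if billing is None:
            findings.append((order_id, "no matching financial record"))
            continue
        # Status mismatch: AGATRS says the order is complete, but the
        # financial extract shows it has not been fully reimbursed.
        if order["status"] == "completed" and billing["paid_in_full"] != "Y":
            findings.append((order_id, "recorded complete but not fully reimbursed"))
        # Timeliness check: unpaid more than 12 months after a standardized
        # (ISO-format) delivery date -- the reimbursement window discussed below.
        delivered = order.get("delivery_date", "")
        if billing["paid_in_full"] != "Y" and delivered:
            if today - date.fromisoformat(delivered) > timedelta(days=365):
                findings.append((order_id, "unreimbursed over 12 months after delivery"))
    return findings


if __name__ == "__main__":
    for order_id, issue in reconcile("agatrs_extract.csv", "finance_extract.csv"):
        print(f"{order_id}: {issue}")
```

The particular tooling is immaterial; the point is that a recurring comparison keyed on order number, together with a delivery-date field recorded in a standard format, would make both the status errors and the timeliness of reimbursement answerable from existing records.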
According to DOD officials, data quality lapses occur because DOD does not have a process in place to reconcile reimbursement information with data recorded in AGATRS. Although AGATRS is DOD's system of record for ACSA transactions, DOD officials told us that the database does not have financial processing capabilities and is not integrated with DOD's financial processing systems. As a result, ACSA personnel must manually update information in AGATRS as orders are processed in other financial systems, but do not always do so, according to DOD officials. A DOD official told us that the military services vary greatly in the extent to which they regularly populate AGATRS, and even within a service, some personnel are better than others at including complete information. DOD officials explained that personnel may delay or fail to update information in AGATRS for multiple reasons. First, personnel may be on temporary duty in an operational environment where they do not have a secure internet connection and thus cannot upload information into AGATRS. Second, short-term rotations of personnel in the field can result in delays as new personnel learn how to use AGATRS and process transactions. Third, after negotiating the transfer of support, drafting the order, and receiving a unique order number assignment in AGATRS, ACSA orders change frequently. These changes can include price adjustments that result in DOD or the partner deciding not to move forward with the transaction, or significantly revising it. In such situations, DOD officials told us that DOD should cancel orders in AGATRS, but does not always do so. Further, DOD does not have quality data to track the extent to which DOD processes ACSA transactions in accordance with statutory requirements. Under 10 U.S.C. 2345, payment-in-kind or exchange entitlements through ACSA transactions shall be satisfied within 12 months of the date of the delivery of logistic support, supplies, or services. However, DOD officials told us that they did not have the information necessary to track such compliance because AGATRS lacks a mechanism to track these data. DOD officials explained that AGATRS has a field to record the delivery time for an order, but that field does not require users to enter data in a standard format. Our review of AGATRS data found instances in which users left the field blank, entered date ranges as opposed to a single date, or entered text information about the delivery, such as how quickly it should occur. DOD officials noted that they could not use the information in this field to determine the extent to which orders were reimbursed within 12 months of delivery, as outlined in the statute. Instead of using the date of delivery, DOD officials stated, and our analysis confirmed, that DOD used an order's date of authorization as an alternate metric to indicate whether an order was reimbursed within 12 months. However, DOD has transactions in which it delivers the support weeks or months after the order is authorized, according to DOD officials. When asked about such transactions, DOD officials acknowledged that the authorization date was not an appropriate alternate date to use to determine if ACSA orders were completed within 12 months of delivery. DOD has taken some steps, including several since we began our review, to improve tracking of ACSA orders in AGATRS, such as issuing memos reiterating requirements for personnel to use AGATRS, improving the system's functionality, and updating relevant training.
For example, in October 2018, DOD introduced additional categories of order status in AGATRS to track an order s progress through the transaction process and in June 2019, DOD updated the AGATRS training course to reflect this and other updates to the system. Additionally, in October 2019, DOD updated AGATRS to help ensure that orders are assigned to appropriate DOD organizations and personnel in the system. According to DOD officials, as of October 2019, three military services were discussing processes that could improve record keeping and tracking for ACSA orders. For instance, U.S. Army officials told us that the Army had begun reconciling data from the service s financial accounting system with information recorded in AGATRS to address data quality issues. However, DOD has not finalized or fully implemented most of these steps, which, even if implemented, would not address historical inaccuracies in DOD s recorded data, according to DOD officials. According to DOD, from fiscal years 2014 through 2019, DOD used ACSAs to provide support valued at about $5 billion to foreign partners. Without a process to ensure that ACSA order data are accurate and without data to track the timeliness of transactions, DOD does not have sufficient information to oversee ACSA reimbursement. <8. DOD Has Received Reimbursement for an Estimated 64 Percent of Recorded ACSA Orders from October 2013 through March 2018, but Thousands of Orders Identified as Overdue Remain Unreimbursed> Section 2344(a) of Title 10 of the United States Code provides that the United States can use ACSAs to transfer logistic support, supplies, and services to partners in return for cash reimbursement or by replacement- in-kind or exchange of supplies or services of an equal value. DOD guidance and financial management regulations outline procedures for DOD to carry out these transactions and seek timely reimbursement. Additionally, federal standards for internal control state that management should perform ongoing monitoring as part of the normal course of operations to obtain reasonable assurance about the effectiveness of its internal controls. On the basis of a generalizable sample of ACSA orders recorded in AGATRS, we estimate that DOD received reimbursement for approximately 64 percent of ACSA orders recorded in AGATRS that it authorized from October 2013 through March 2018 (about 6,000), but did not receive full reimbursement for approximately 24 percent (about 2,300), as shown in figure 6. Some orders for which DOD did not receive full reimbursement included basic life support such as food, water, housing, and fuel, authorized in 2017. Further, DOD could not verify the accuracy of the reimbursement status for an estimated 12.2 percent of orders (about 1,100) recorded in AGATRS during this time period meaning that for these orders, DOD could not verify whether it had requested or received reimbursement, or whether the transaction had occurred. The orders in this category included, for example, helicopter transportation authorized for a partner in 2015 and valued by DOD at almost $150,000. DOD officials identified several factors that contributed to unreimbursed ACSA orders, including: Lack of invoicing. DOD officials said that DOD had not received reimbursement for 39 of the 221 ACSA orders in our sample, valued by DOD at more than $700,000, because it had not sent invoices to request reimbursement from partners. 
According to the officials, DOD had not processed these orders for invoicing in part because it had not assigned the orders to the appropriate officials who manage financial processing. Officials from two military services told us that while they aim to have strong communication between the personnel who manage logistics and finance processes for ACSA orders, factors such as staff rotations, contingency environments, and delayed training may affect the efficiency of order processing. DOD officials also noted that missing or incorrect order information, such as an incorrect billing address for a partner nation, may delay invoicing. Delays from partner countries. For some unreimbursed orders in our sample, DOD officials explained that DOD had sent invoices to partner countries but, as of August 2019, had not received reimbursement. The average time from the date of invoice to the date of reimbursement was 208 days for reimbursed cash transactions in our sample of 221 orders authorized from October 2013 through March 2018, and the longest amount of time was 751 days. Lack of a monitoring process. According to DOD officials, DOD did not appropriately monitor the reimbursement status of some orders in our sample and does not have a process to monitor delinquent debt. For example, DOD officials explained that they could not verify reimbursement for some orders recorded as overdue in our sample because personnel had not closely monitored the status of these orders. Additionally, in response to our inquiries, DOD acknowledged that it would need to reassign certain overdue orders to appropriate officials for processing. Although AGATRS produces reports that identify overdue orders, DOD does not have an agency-wide process to monitor and take action on unreimbursed orders that become delinquent. DOD officials told us that the Defense Finance and Accounting Service (DFAS), responsible for some ACSA billing, sends letters to partners for delinquent ACSA bills 30, 60, and 90 days after the end of the billing period outlined under the terms of the ACSA. However, after 90 days, DOD does not have a standardized approach to continue seeking delinquent ACSA debt according to DOD officials. In 2018, DOD updated the section of its Financial Management Regulation that addresses the collection of debt owed by foreign entities, but according to DOD officials, DOD had not implemented the updated policy as of October 2019. Officials from DFAS explained that the policy had not been implemented because they were working with officials from the military services to evaluate possible debt collection procedures. Unless it takes steps to ensure that it processes and invoices ACSA orders as required, and seeks unpaid debt, DOD may not receive reimbursement for thousands of orders for which it has provided support. As of November 2019, DOD indicated that the department had authorized more than $1 billion in ACSA sale orders for which reimbursement is now overdue. Seeking reimbursement for these ACSA orders and implementing oversight processes will help ensure that the United States receives reimbursement for current and future orders under the terms of these agreements. Conclusions In the past 5 years, DOD has exchanged billions of dollars in reimbursable ACSA support with military forces from more than 100 partner nations and international organizations through ACSA transactions. 
DOD uses ACSAs to exchange logistic support, supplies, and services with partners in a variety of circumstances, including international coalition efforts, such as those combating terrorist groups in Iraq and Syria. However, weaknesses in recordkeeping and management processes limit the extent to which agencies can (1) provide Congress with information requested for oversight and (2) monitor and secure reimbursement. First, DOD could not locate records related to required congressional notifications about designating 26 countries for an ACSA. Further, State transmitted almost half of its congressional notifications on ACSA entry into force after required deadlines, largely because DOD did not provide State with information about new agreements. Without full and timely information about new partners that DOD intends to designate for an ACSA or agreements that have entered into force, Congress will not be sufficiently informed to effectively oversee DOD s use of ACSAs as an element of security cooperation. Second, DOD lacks quality data necessary for tracking ACSA orders and has not received reimbursement for thousands of orders. Our review of 227 transactions confirmed at least $26 million of unreimbursed overdue transactions, but, as of November 2019, DOD records include additional overdue ACSA transactions for support provided to partners dating back to 2011, which DOD values at more than $1 billion. By establishing procedures to improve ACSA recordkeeping and processes to seek reimbursement, DOD can help ensure that reliable information is available for reporting and oversight of activities to secure reimbursement of hundreds of millions of dollars of support provided to our partners. Recommendations for Executive Action We are making a total of seven recommendations to DOD: The Secretary of Defense should ensure that written ACSA guidance includes recordkeeping procedures related to ACSA congressional notifications and signature dates to help enable the provision of complete information for Congress. (Recommendation 1) The Secretary of Defense should take steps, such as updating guidance, to help ensure the implementation of requirements related to providing information to State about newly signed ACSAs. (Recommendation 2) The Secretary of Defense should take steps to verify the accuracy of ACSA order statuses recorded in DOD s system of record, and make corrections as appropriate. (Recommendation 3) The Secretary of Defense should implement a process to reconcile data in financial systems with the data and associated documents collected and stored in DOD s ACSA system of record on a periodic basis. (Recommendation 4) The Secretary of Defense should develop and implement a mechanism to record and track the extent to which it is meeting required time frames to receive reimbursement for ACSA orders. (Recommendation 5). The Secretary of Defense should take steps to improve invoicing of ACSA orders, such as identifying ACSA orders recorded in DOD s system of record that have not been invoiced and sending invoices to partner countries. (Recommendation 6) The Secretary of Defense should implement a process to monitor ACSA orders recorded as overdue in DOD s system of record, and take steps to resolve outstanding reimbursements, as appropriate. (Recommendation 7) Agency Comments and Our Evaluation We provided a draft of this report to DOD and State for comment. In its comments, reproduced in appendix V, DOD concurred with the seven recommendations directed to it. 
DOD also provided information about actions it has taken to address recommendations 1 and 2. With respect to recommendation 1, DOD provided a copy of a February 2020 memorandum that outlines procedures to capture and preserve information about ACSA establishment, including the dates of DOD s congressional notifications of intent to designate countries for ACSAs and agreement signature dates. With respect to recommendation 2, DOD provided a copy of a February 2020 memorandum that issued guidance related to DOD s provision of ACSA information to State for State s congressional notifications under the Case-Zablocki Act. We plan to follow up with DOD to learn about the distribution of these memoranda. State provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretaries of Defense and State, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6881 or bairj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. Appendix I: Objectives, Scope, and Methodology Senate Report 115-262, accompanying a bill for the National Defense Authorization Act (NDAA) for Fiscal Year 2019, includes a provision for us to review several aspects of Acquisition and Cross-Servicing Agreement (ACSA) management, including information provided to Congress and Department of Defense (DOD) tracking of support and receipt of reimbursement. In this report, we examine the extent to which (1) agencies have provided information to Congress about ACSAs, and (2) DOD has tracked and received reimbursement for ACSA orders. To address these objectives, we reviewed legal authorities related to ACSAs in sections 2341-2350 of Title 10 of the United States Code, DOD policy and guidance on ACSA management and implementation, and DOD Inspector General (IG) reporting on DOD s management of ACSAs. We analyzed DOD and Department of State (State) documentation related to congressional notifications and the establishment of ACSAs, DOD ACSA transaction data, and DOD s Report to Congress Concerning Acquisition and Cross-servicing Activities for Fiscal Year 2018. We also discussed ACSA management, order tracking, and transactions, including for the Saudi-led Coalition, with DOD officials from the Air Force Central Command (AFCENT); Defense Finance and Accounting Services; Defense Logistics Agency (DLA), including DLA Energy; Office of the Chairman of the Joint Chiefs of Staff (OCJCS); Office of the Undersecretary of Defense for Acquisition and Sustainment (OUSD (A&S)); U.S. Air Force; U.S. Marine Corps; U.S. Army; and U.S. Central Command. With State officials from the Bureau of Political-Military Affairs and the Office of the Legal Adviser s Office of Treaty Affairs, we discussed the process to establish international agreements, State s notifications to Congress on ACSA entry into force, and recordkeeping for those notifications. We conducted fieldwork at AFCENT Headquarters at Shaw Air Force Base in Sumter, South Carolina, to discuss ACSA transactions related to support provided to the Saudi-led Coalition. 
To determine the extent to which agencies have provided information to Congress about ACSAs, we analyzed agency activities related to (1) DOD s requirement to notify Congress of its intent to designate a country eligible for an ACSA and (2) State s requirement to notify Congress no later than 60 days after the entry into force of international agreements, which includes ACSAs. First we reviewed DOD s congressional notification requirements under 10 U.S.C. 2342. The law authorizes the Secretary of Defense to sign ACSAs with the governments of North Atlantic Treaty Organization (NATO) countries, subsidiary bodies of NATO, and the United Nations Organization or any regional international organizations without an official designation of eligibility. However, for countries that are not members of NATO, DOD must notify Congress of its intent to designate the government of a country eligible for an ACSA at least 30 days before making the designation. Agency records indicate that DOD had signed 125 ACSAs as of February 2020. We included these 125 agreements in our analysis because, according to DOD, each agreement is considered to be an ACSA although some are named as other types of mutual logistics support agreements. To determine the extent to which DOD addressed requirements for notifying Congress of its intent to designate a non-NATO country for the purposes of entering into an ACSA, we conducted a content review of ACSA documents to identify signature and notification dates for each relevant ACSA, calculated the number of days between them, and compared our results to DOD s requirement to notify Congress of its intent to make a designation not less than 30 days before a country is designated. Although DOD is required to notify Congress at least 30 days before designating non-NATO countries for the purposes of entering into an ACSA, DOD officials told us that ACSA records do not include a precise designation date for each country. Therefore, we used ACSA signature dates as a proxy for designation dates. In addition, because some ACSAs are revised and re-signed over time, we planned to compare the date on which DOD transmitted notifications to Congress with the signature date of the first ACSA signed with each partner. However, DOD officials explained that they could not readily provide the signature dates of the first ACSA signed with each partner because they purposefully expunged electronic records related to expired or replaced agreements which would have noted signature dates to help ensure that officials planning ACSA transactions referenced the current version of the agreement. Although DOD did not systematically track the signature dates for agreements that had been revised and re-signed, we reviewed documents related to each ACSA partner, historical treaty records, and other agency documents and found the signature date for the first agreement DOD signed with each ACSA partner. We compared NATO accession dates with these first ACSA signature dates and determined that 19 ACSA partners were members or elements of NATO at the time the relevant ACSA was signed. An additional two ACSA partners were elements of other international organizations. Therefore, we determined that DOD had signed 21 of its 125 ACSAs with governments of NATO countries, subsidiary bodies of NATO, and other international organizations, which do not require an official designation of eligibility. 
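To make the timing analysis concrete, the short sketch below shows how the number of days between a notification letter and an agreement's signature date (used as a proxy for the designation date, as described above) can be computed and compared with the 30-day requirement. This is a minimal illustration only, not the tool used for the analysis; the record layout and the example partners and dates are hypothetical.

```python
from datetime import date

# Hypothetical records: signature_date serves as a proxy for the designation date;
# notification_date is None when no dated notification letter was found in DOD records.
acsa_records = [
    {"partner": "Country A", "signature_date": date(2016, 5, 12), "notification_date": date(2016, 3, 1)},
    {"partner": "Country B", "signature_date": date(2014, 9, 30), "notification_date": None},
]

for rec in acsa_records:
    if rec["notification_date"] is None:
        # No dated letter on record, so the partner is flagged for exclusion,
        # mirroring the treatment of partners without documentation described below.
        print(f'{rec["partner"]}: excluded - no dated notification letter on record')
        continue
    days_before_signature = (rec["signature_date"] - rec["notification_date"]).days
    met_requirement = days_before_signature >= 30
    print(f'{rec["partner"]}: notified {days_before_signature} days before signature; '
          f'30-day requirement met: {met_requirement}')
```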
Under the law, DOD was required to notify Congress at least 30 days prior to designating the remaining 104 countries for an ACSA. The Secretary of Defense typically submits these notifications to the Senate Committees on Armed Services and Foreign Relations and the House Committees on Armed Services and Foreign Affairs. We included in our analysis the 78 of these 104 countries for which DOD records included a copy of a dated notification letter addressed to at least one of these four committees. For these 78 countries, we compared DOD notification dates with the signature date of the initial agreement with each partner. DOD could not provide documentation of congressional notifications for the remaining 26 partners, which we excluded from our analysis. We also interviewed DOD officials from the OCJCS and the OUSD (A&S) to discuss DOD's congressional notification process. Second, we analyzed State's requirement under 1 U.S.C. 112b to notify Congress no later than 60 days after the entry into force of international agreements, which includes ACSAs. Under the law, often referred to as the Case-Zablocki Act, State is required to notify Congress of any international agreement to which the United States is a party, other than a treaty, as soon as practicable after the agreement has entered into force, but in no event later than 60 days thereafter. To determine the extent to which State had transmitted notifications about ACSA entry into force on or before the statutory 60-day deadline, we conducted a content analysis of DOD ACSA documents and State notification records to identify relevant entry into force and State notification dates. We then calculated the number of days between them and compared our results to State's reporting requirement under 1 U.S.C. 112b. Of the 125 ACSAs that DOD had signed, State officials confirmed that, as of February 2020, 118 had entered into force and, as such, required notification to Congress of entry into force under the Case-Zablocki Act. We excluded the remaining seven signed ACSAs from our analysis as follows. First, we excluded three agreements DOD signed with Benin, Iraq, and Uruguay that, according to State and DOD officials, had not entered into force as of February 2020, and therefore did not yet require notification under the Case-Zablocki Act. Second, we excluded two ACSAs signed with Canada and the United Kingdom, for which State officials explained that the legal arrangements governing acquisition and cross-servicing transactions are contained in government-to-government chapeau agreements regarding defense cooperation rather than in agency-level ACSA agreements more commonly used with other partners. According to officials, these chapeau agreements are supplemented by nonbinding, agency-level implementing procedures that are not separately subject to Case-Zablocki Act reporting to Congress. Third, we excluded two agreements for which, as of February 2020, State officials were reviewing agreement documentation to confirm potential entry into force prior to notifying Congress. For one of these two agreements, if State determines that the agreement has entered into force, the date of entry into force will be retroactively dated to the date of signature, per the terms of the agreement. The retroactive entry into force date for the agreement is more than 60 days before February 2020, so if the entry into force date is confirmed, the related notification to Congress under the Case-Zablocki Act would be late as compared to the 60-day deadline.
The second of these two agreements was signed on January 31, 2020. For the 118 ACSAs that had entered into force and thus required State s notification to Congress, we compared entry into force dates with notification dates to determine the extent to which State had provided notifications on or before its 60-day deadline. State provided documentation on entry into force notifications for all but two of the 118 relevant ACSAs. For these two agreements, signed in 1983 and 2002, State had no record of related notifications, so we were unable to conclude whether or not they had occurred. For the remaining 116 agreements, State provided copies of dated congressional notifications for 113 and notification dates from its Treaty Information Management System for three notifications for which copies of the letters were unavailable. We included in our analysis notifications that were transmitted to either the President of the Senate, the Speaker of the House, or both. We compared the date of these notifications with ACSA entry into force dates we verified using ACSA agreement documentation and State notification documents. We also analyzed information in State s notification documents to determine the causes for late transmittals. We interviewed DOD officials from the OCJCS and the OUSD (A&S), and State officials from the Bureau of Political-Military Affairs and the Office of the Legal Adviser s Office of Treaty Affairs to discuss State s congressional notification process. To determine the extent to which DOD has tracked and received reimbursement for support provided through ACSA orders, we analyzed a generalizable sample of ACSA orders that DOD had authorized from October 2013 through March 2018 in the ACSA Global Automated Tracking and Reporting System (AGATRS). AGATRS is DOD s system of record for management of ACSA transactions and designates orders as overdue if reimbursement is not completed within 12 months of the order authorization date. We selected a stratified random sample of 227 orders, which were sampled from a population of 9,761 orders within the population groups in table 3. Strata in table 3 are based on a combination of four features: order total (dollar amount); order status (completed versus incomplete); document upload requirement (required versus not required); and military service. With this probability sample, each order of the study population had a nonzero probability of being included, and that probability could be computed for any order. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample s results as a 95-percent confidence interval (e.g., the margin of error is plus or minus 7 percentage points). This interval would contain the actual population value for 95 percent of the samples we could have drawn. We calculated our sample analysis with survey software that accounts for the sample design (stratification and weights) and appropriate subpopulation reporting group statements. 
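For readers unfamiliar with stratified estimation, the sketch below illustrates how stratum weights and a finite population correction combine to produce a population estimate and an approximate 95-percent confidence interval for a single attribute (for example, fully reimbursed). The three strata shown are invented for illustration (the actual design used more strata based on order total, status, document requirement, and military service), although the totals match the population of 9,761 orders and the sample of 227; this is not the survey software used for the analysis.

```python
import math

# Invented example strata: population size N, sample size n, and the number of
# sampled orders in the stratum with the attribute of interest.
strata = [
    {"N": 5000, "n": 60, "hits": 42},
    {"N": 3500, "n": 80, "hits": 48},
    {"N": 1261, "n": 87, "hits": 55},
]

N_total = sum(s["N"] for s in strata)

# Stratified (weighted) estimate of the population proportion.
p_hat = sum(s["N"] * (s["hits"] / s["n"]) for s in strata) / N_total

# Variance of the stratified estimator, with a finite population correction per stratum.
var = 0.0
for s in strata:
    p_h = s["hits"] / s["n"]
    fpc = 1 - s["n"] / s["N"]
    var += (s["N"] / N_total) ** 2 * fpc * p_h * (1 - p_h) / (s["n"] - 1)

margin = 1.96 * math.sqrt(var)  # approximate 95-percent confidence interval
print(f"Estimated proportion: {p_hat:.1%} +/- {margin:.1%}")
```

The weighting step is what allows a sample of a few hundred orders, drawn with known selection probabilities, to support statements about all in-scope orders rather than only the orders reviewed.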
We designed stratification and sample sizes based on order status and document upload requirements to ensure that the 95-percent confidence intervals of attribute estimates (e.g., percentage of orders that have proper support) had margins of error within around +/- 10 percentage points for each of the following four reporting groups, which collapse over the following strata: complete orders, incomplete orders, document upload required, and document upload not required. We also designed stratification based on military service to ensure proportionate representation of each military service in our sample within each combination of order status and document upload requirement. All of the orders in our population had been authorized for 14 months, and thus should have been repaid according to DOD s 12-month system requirement, at the time we conducted our review of the sample from May 2019 through June 2019. For this sample, we analyzed order information and coordinated with DOD to validate the reimbursement status recorded in AGATRS. On the basis of (1) reporting from the DOD Inspector General, (2) interviews with DOD officials, (3) DOD s use of manual entry to populate the system, and (4) our review of DOD s use of ACSA orders to process reimbursement for unpaid transactions with members of the Saudi-led Coalition, we determined that DOD s data in AGATRS may not be fully reliable. DOD officials explained that although AGATRS was the single repository for global ACSA transaction data, the system was not integrated with any other DOD systems and thus relied on manual entry from personnel to populate ACSA order information. As a result, we took additional steps to determine the reliability of information reported in the system. Specifically, we requested a data report from DOD of all ACSA transactions recorded in AGATRS as of May 8, 2019. We reviewed supporting documentation and information recorded in AGATRS for each ACSA order in our sample to determine whether the data in the order status field were accurate. For the order status completed, which indicates that the ACSA order has been fully reimbursed, we reviewed available information to determine whether financial collection documentation had been recorded and compared the information in these documents to the information in AGATRS. We then took steps to verify with DOD the status of orders that (1) were recorded as completed, but for which we had not identified any financial documentation or the documentation did not contain sufficient information to verify reimbursement, and (2) were not recorded as completed as of the time of our review. Of the 227 orders in our sample, 138 fit into one of these two categories. For orders that were recorded as completed but did not have sufficient supporting documentation, we requested that DOD provide additional support. For orders that were recorded as incomplete, we requested that DOD verify whether the orders had been reimbursed, given that they had been in the system longer than 12 months and were categorized as overdue in the data report provided by DOD. DOD provided feedback on and validated the reimbursement status for 101 of the 138 orders sent for follow-up. DOD did not provide a response for the remaining 37 orders. DOD identified whether orders recorded as overdue in AGATRS had been partially reimbursed, which we incorporated into our calculation of unreimbursed dollar amounts for the orders in our sample. 
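The follow-up selection described above amounts to a simple triage rule: an order goes back to DOD for verification if it is recorded as completed but lacks sufficient financial documentation, or if it is not recorded as completed even though it is past the 12-month window. The sketch below is one way to express that rule; the field names, example dates, and review date are assumptions for illustration rather than the actual data structure in AGATRS.

```python
from datetime import date

def needs_followup(order, review_date=date(2019, 5, 15)):
    """Flag a sampled order for follow-up using the two criteria described above."""
    if order["status"] == "completed":
        return not order["has_sufficient_docs"]
    # Month-level approximation of how long the order has been open since authorization.
    months_open = (review_date.year - order["authorized"].year) * 12 + \
                  (review_date.month - order["authorized"].month)
    return months_open > 12  # overdue per the system's 12-month requirement

# Hypothetical sampled orders for illustration.
sample = [
    {"id": 1, "status": "completed", "has_sufficient_docs": True, "authorized": date(2016, 2, 1)},
    {"id": 2, "status": "completed", "has_sufficient_docs": False, "authorized": date(2017, 7, 1)},
    {"id": 3, "status": "incomplete", "has_sufficient_docs": False, "authorized": date(2018, 3, 1)},
]

followup_ids = [o["id"] for o in sample if needs_followup(o)]
print("Orders sent for verification:", followup_ids)
```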
On the basis of this validation process, we report on whether ACSA orders authorized from October 2013 through March 2018 in AGATRS had been reimbursed or not fully reimbursed as of July 10, 2019, or whether DOD did not know the reimbursement status as of October 2019. We found that approximately 7 percent of the order status information recorded in AGATRS was inaccurate. For example, three of the 227 orders in our sample that DOD had recorded as completed were not fully reimbursed. Five of the 227 orders in our sample that DOD had recorded as incomplete were actually reimbursed; DOD uploaded supporting documents and closed these orders in AGATRS in response to our inquiry. Additionally, as described above, six of the 227 orders should not have been included in our scope but were misclassified in DOD s system. We also found orders under the purview of DLA Energy that were partially or fully settled (i.e., reimbursed or reconciled by netting sales and purchases between the United States and the partner nation), but whose status had not been updated in AGATRS. DLA Energy officials told us that AGATRS does not have sufficient functions to capture DLA Energy s fuel reconciliation process, in which sales and purchases with partners may be offset through specific implementing arrangements with the partners. In some cases DLA Energy provided us with the actual amounts, including unpaid amounts, but we were unable to verify this information further. In response to our verification questions, DOD took steps to correct some of the AGATRS data inaccuracies we identified. For instance, DOD reopened (i.e., redesignated as incomplete) some orders it had recorded as completed in AGATRS but for which it had not received full reimbursement. Similarly, DOD uploaded reimbursement information for orders that it had recorded in the system as incomplete, but for which it had received reimbursement. DOD also uploaded reimbursement information in AGATRS for ACSA orders from our sample that it had recorded as completed, but for which it lacked documentation to support that it had received reimbursement. Finally, DOD settled or requested and received reimbursement for five of the ACSA orders in our sample. We found that DOD data on ACSA transactions contained weaknesses that we describe in this report. Because of these weaknesses, we only used data from our sample in developing estimates on data quality and reimbursement. We checked all of the orders in our sample, and either verified or corrected them as needed, and report any data that could not be verified. Since our probability sample with verified and corrected information is generalizable to all in-scope orders, we were able to estimate population values based on the corrected sample information. We conducted this performance audit from September 2018 to March 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Signed U.S. Department of Defense Acquisition and Cross-Servicing Agreements as of February 2020 As of February 2020, DOD had 120 signed ACSAs that span DOD s geographic areas of responsibility. (See table 4.) 
Appendix III: List of Logistics Support Retransfers under Department of Defense Acquisition and Cross-Servicing Agreements The Department of Defense (DOD) describes the retransfer of logistic support, supplies, and services provided under Acquisition and Cross- Servicing Agreements (ACSA) as a transfer from the original recipient to another foreign government or international organization, or to any entity other than the officers, employees, or agents of the foreign country or international organization whose military originally received the support. DOD Directive 2010.9 prohibits the retransfer of ACSA support without the prior written consent of the U.S. government, obtained through applicable DOD channels. As of November 2019, DOD had information related to 11 ACSA transactions made with six different ACSA partners between 2003 and 2019 for which the United States approved retransfer of ACSA support, as detailed in table 5. Appendix IV: Use of Acquisition and Cross- Servicing Agreements to Seek Reimbursement from the Saudi-led Coalition In 2019, we presented preliminary observations to Congress about the extent to which the Department of Defense (DOD) had provided support to and requested reimbursement from the Saudi-led Coalition (SLC), and DOD s use of Acquisition and Cross-Servicing Agreements (ACSA) to do so. This appendix describes those observations and provides updates as appropriate with information obtained during the course of our review. In March 2018, DOD received a congressional inquiry regarding DOD s use of ACSAs to provide support to the SLC activities in Yemen. In November 2018, DOD informed Congress about (1) the legal justification for the provision of aerial refueling assistance to the SLC, since March 2015, and (2) the status of reimbursement. DOD reported that it had failed to process and seek reimbursement for some fuel and all aerial refueling support provided to members of the SLC from March 2015 through November 2018, and that it would use the ACSA authority to request retroactive reimbursement. Additionally, as of August 2019, DOD had not received full reimbursement for general purpose bombs provided through ACSA in April 2015. According to DOD officials, a Joint Staff Execute Order signed on March 27, 2015, directed DOD to provide aerial refueling support to the SLC, if requested, and stated that the support would be provided on a reimbursable basis either through foreign military sales (FMS) or an ACSA. The order also stated that, as of March 2015, Saudi Arabia had not signed an ACSA. Further, according to DOD officials, there was no FMS case through which DOD might have provided aerial refueling to Saudi Arabia in March 2015. Aerial refueling support includes flying hours to conduct refueling and the fuel exchanged. According to DOD officials, air crews recorded aerial refueling flight hours for members of the SLC at the time they occurred, but did not record them as related to SLC activities in Yemen or process them as reimbursable FMS or ACSA transactions. For fuel provided to SLC members during aerial refueling flights at this time, DOD documented and processed some, but not all, as ACSA transactions. DOD officials identified multiple factors, including inadequate planning and insufficient understanding of guidance in the Joint Staff order, that led to a process breakdown in which DOD did not invoice and request reimbursement. 
Following the congressional inquiry, DOD began a review of air tanker flight hours, Air Force fuel purchases, and data from Saudi Arabia to determine aerial refueling reimbursement charges for flying hours and fuel. Based on this review, DOD identified reimbursable amounts of more than $261 million for flying hours and $37 million for fuel provided to coalition members. Using this information, DOD requested retroactive reimbursement through the ACSA authority from the United Arab Emirates (UAE) and Saudi Arabia for the flight hours and fuel not previously reimbursed. According to DOD officials, DOD is treating these transactions as third-party transfers. According to DOD documents and officials, because Saudi Arabia did not have a signed ACSA prior to June 2016, UAE agreed to reimburse the United States for transactions supporting the SLC before this date. Saudi Arabia agreed to reimburse the United States for transactions after this date. As of February 28, 2019, UAE had submitted $103.7 million in retroactive reimbursement for air tanker flight hours and $15 million for fuel. In May 2019, DOD signed an agreement with Saudi Arabia for repayment of $151 million for aerial refueling support provided from June 2016 through September 2018. DOD and Saudi Arabia agreed that Saudi Arabia would submit payments in increments over the course of 12 months, after receiving approval from the crown prince, Mohammad bin Salman, and additional leadership in Saudi Arabia. As of February 2020, Saudi Arabia had submitted payment of approximately $114 million, according to DOD documents. A balance of about $37 million for flight hours remains unreimbursed as well as $22 million for fuel. In addition to aerial refueling support, in 2015 DOD provided about $2 million of general purpose bombs to UAE for which UAE had received U.S. approval for an ACSA retransfer to Saudi Arabia for operations in Yemen. However, DOD did not record this order in the ACSA system of record as required until August 2019 and, as of September 2019, had received reimbursement in the form of reciprocal support for only two-thirds of the value of the bombs initially provided. DOD officials told us that UAE planned to provide the remaining in-kind reimbursement in September 2020. Appendix V: Comments from the Department of Defense Appendix VI: GAO Contact and Staff Acknowledgments <9. GAO Contact> <10. Staff Acknowledgments> In addition to the contact named above, Biza Repko (Assistant Director), Kathryn Bolduc and Jasmine Senior (Analysts-in-Charge), Joe Carney, Debbie Chung, Martin de Alteriis, Neil Doherty, Adrian Good, Sally Newman, Cary Russell, Sonya Vartivarian, and Nicole Willems made key contributions to this report. Why GAO Did This Study
According to DOD, from fiscal years 2014 through 2019, it used ACSAs to provide billions of dollars of logistic support, supplies, and services to more than 100 partner countries. For example, this support included fuel and ammunition to assist international exercises and coalition operations, among other efforts.
Senate Report 115-262 included a provision for GAO to review ACSA management. This report examines the extent to which (1) agencies have provided information to Congress about ACSAs, and (2) DOD has tracked and received reimbursement for ACSA orders. GAO conducted content analysis of DOD and State ACSA documents, and analyzed a generalizable sample of ACSA orders authorized from October 2013 through March 2018 and recorded in DOD's system of record for ACSA orders. An ACSA order, also referred to as a transaction, documents an exchange of support between the United States and a foreign partner. In addition, GAO interviewed agency officials and conducted fieldwork at Shaw Air Force Base in Sumter, South Carolina.
What GAO Found
While generally providing required information to Congress, poor recordkeeping by the Department of Defense (DOD) and late notifications by the Department of State (State) have limited the accuracy and timeliness of information provided to Congress on acquisition and cross-servicing agreements (ACSA). DOD and State have Congressional notification requirements pertaining to ACSAs—agreements through which DOD exchanges logistic support, supplies, and services with foreign partners in return for cash or in-kind reimbursement. Documents indicate that DOD provided notice to Congress before designating 78 of 104 countries eligible for an ACSA. However, DOD did not have records for the remaining 26, in part because it lacks documented recordkeeping procedures. While State generally notified Congress about ACSAs' entry into force, it transmitted 41 percent of them after the statutory deadline, largely because DOD did not provide required information to State. These gaps and issues have reduced the accuracy and timeliness of information provided to Congress about ACSAs.
DOD has not maintained quality data to track ACSA orders and has not received reimbursement for thousands of orders. First, DOD does not have complete and accurate ACSA data. For example, for an estimated 12 percent of ACSA orders authorized from October 2013 through March 2018 in DOD's system of record, DOD could not determine whether it had received reimbursement for support provided to partners. According to DOD officials, such inaccuracies occur in part because DOD does not have a process to validate data in its system. Second, GAO estimates that DOD received full reimbursement for 64 percent of ACSA orders authorized from October 2013 through March 2018 (about 6,000 orders), but did not receive full reimbursement for 24 percent. Orders remain unpaid in part because DOD has not requested timely repayment or monitored reimbursement. These management weaknesses limit DOD's ability to obtain reimbursement for overdue ACSA orders, which, according to DOD, were valued at more than $1 billion as of November 2019.
Note: These estimates are based on a generalizable sample of orders in which the United States provided support to foreign partners; have a margin of error of up to plus or minus 5.1 percentage points at the 95-percent confidence level; and represent the percentage of the number of orders, not the dollar value of orders.
What GAO Recommends
GAO is making seven recommendations to DOD to improve ACSA recordkeeping and reimbursement, through steps such as better monitoring, periodic data reconciliation, and timely invoicing. DOD agreed with all seven recommendations. |
<1. Background> <1.1. OMB Guidelines> In January 2018, OMB released M-18-04, Monitoring and Evaluation Guidelines for Federal Departments and Agencies that Administer United States Foreign Assistance (the Guidelines), in response to the 2016 FATAA legislation. (See appendix III for additional information on the requirements in the legislation.) The Guidelines provide direction to federal departments and agencies that administer foreign assistance on monitoring the use of resources, evaluating the outcomes and impacts of the foreign assistance projects and programs, and applying the findings and conclusions of such evaluations to proposed project and program design. The goals of the Guidelines are to set forth key principles to guide each agency and to specify requirements, where appropriate, that agencies must cover in their own policies on M&E of foreign assistance. The Guidelines define monitoring and evaluation as follows: Monitoring is the ongoing and systematic tracking of data and information relevant to policies, strategies, programs, projects, and/or activities and is used to determine whether desired results are occurring as expected during program, project, or activity implementation. Monitoring often relies on indicators, quantifiable measures of a characteristic or condition of people, institutions, systems, or processes that may change over time. Evaluation is the systematic collection and analysis of information about the characteristics and outcomes of the program, including projects conducted under such program, as a basis for making judgments and evaluations regarding the program; improving program effectiveness; and informing decisions about current and future programming. Table 1 lists OMB's M&E requirements and key excerpts of the descriptions as noted in OMB M-18-04. <1.2. GAO Leading Practices> In 2016, we reported on leading practices for foreign assistance program M&E. We identified 28 leading practices: 14 for monitoring and 14 for evaluation. Table 2 lists and defines these monitoring practices. Table 3 lists the evaluation practices and corresponding definitions. <2. OMB's Foreign Assistance Monitoring and Evaluation Guidelines Incorporate Most but Not All of GAO's Leading Practices> Based on our review, the Guidelines incorporate most of GAO's leading practices for monitoring and evaluation. However, they do not incorporate practices on developing monitoring plans that are based on risks, ensuring that staff are appropriately qualified to conduct monitoring, establishing procedures to close out programs, developing staff skills for evaluation, and following up on evaluation recommendations. OMB indicated that it intended the Guidelines to focus on elements required by the FATAA legislation. Nevertheless, incorporating these leading practices in the Guidelines can help ensure that all agencies address impediments, effectively manage foreign assistance, and meet their assistance goals. <2.1. The Guidelines Incorporate Most of GAO's Leading Practices for Monitoring, but Do Not Include Risk Assessments, Staff Qualifications, or Close-Out Procedures> Based on our review, the Guidelines incorporate 11 of GAO's 14 leading practices for monitoring. Figure 1 shows our assessment of the Guidelines with regard to monitoring foreign assistance.
The OMB Guidelines do not incorporate practices on developing monitoring plans that are based on risks, ensuring that staff are appropriately qualified to conduct monitoring, and establishing close-out procedures for projects and programs. Developing monitoring plans based on an assessment of risk. The Guidelines do not incorporate GAO's leading practice of developing monitoring plans based on an assessment of risks related to achieving the defined objectives. Identifying and assessing risks can help agencies determine if impediments exist that they might need to mitigate in order to manage their foreign assistance more effectively. Additionally, determining which activities warrant greater oversight and which require less can also help agencies ensure the appropriate allocation of foreign assistance. Ensuring staff qualifications for monitoring. The Guidelines do not incorporate GAO's leading practice for agencies to ensure that staff members responsible for monitoring programs or projects have the relevant knowledge, skills, and training. By having qualified staff for monitoring programs or projects, agencies can help ensure they meet their foreign assistance goals. By hiring qualified staff and providing them the right training, tools, structure, incentives, and responsibilities, agencies can make operational success possible. Establishing close-out procedures for projects and programs. The Guidelines do not incorporate GAO's leading practice for agencies to establish program closeout procedures for all required work and administrative actions completed by the implementing partner. By establishing such procedures, agencies can help ensure that their foreign assistance is less susceptible to fraud, waste, and mismanagement; address potential increases in costs and fees for maintaining foreign assistance; and increase their ability to redirect foreign assistance to other projects. <2.2. The Guidelines Incorporate Most of GAO's Leading Practices for Evaluation, but Do Not Include Developing Staff Skills and Following Up on Recommendations> Based on our review, the Guidelines incorporate 12 of GAO's 14 leading practices for evaluation. Figure 2 shows our assessment of the Guidelines with regard to evaluating foreign assistance. The OMB Guidelines do not incorporate some practices, such as developing staff skills for evaluation and following up on evaluation recommendations. Developing staff skills regarding evaluation. The Guidelines do not incorporate GAO's leading practice for agencies to establish requirements that the staff responsible for overseeing and using evaluations should continually undertake the relevant education, training, or supervised practice needed to learn new concepts, techniques, and skills. By having their staff continually undertake such education, training, or supervised practice, agencies can benefit more fully from program evaluations. Following up on recommendations. The Guidelines do not incorporate GAO's leading practice for agencies to determine whether management or programs have accepted the recommendations made in evaluation reports and taken the actions needed to address them. By developing mechanisms to track recommendations, agencies can better address inefficient, mismanaged, or costly programs or projects. <2.3. OMB Notes the Guidelines for Monitoring and Evaluation Include Elements Required in the FATAA Legislation> The FATAA requires the President to set forth guidelines according to best practices of monitoring and evaluation but does not define these best practices.
Specifically, FATAA states, the President shall set forth guidelines, according to best practices of monitoring and evaluation studies and analyses, for the establishment of measurable goals, performance metrics, and monitoring and evaluation plans that agencies can apply with reasonable consistency to covered United States foreign assistance. OMB staff told us that the Guidelines were intended to focus on elements required by the FATAA legislation but noted that agencies are free to add additional requirements to their own M&E policies. However, we have previously reported that while some of these agencies have incorporated these leading practices, others have not. Furthermore, agencies that have incorporated these practices would not necessarily continue to include them if they are not required in the Guidelines. Regarding leading practices, officials noted that while these practices are important, there is no singular established standard for best monitoring practices. Nevertheless, both OMB s circulars and recent legislation note the importance of leading practices for M&E. For example, Circular A-123 notes that management should identify internal and external risks that may prevent the organization from meeting its objectives. Additionally, the Foundations for Evidence-Based Policymaking Act of 2018 requires OPM, in consultation with the OMB, to identify skills and competencies needed for program evaluation, establish a new occupational series or update an existing one for program evaluation, and establish a new career path for program evaluation. <3. Most Agencies Have Incorporated OMB s Guideline Requirements in Their Policies, and All Have Taken Initial Steps to Implement Them> Based on our review, most agencies incorporated all of OMB s Guidelines for monitoring in their policies. However, DOD did not include the requirement to establish roles and responsibilities among agencies that participate in funding transfers or ensure that verifiable, reliable, and timely information is collected and available to monitoring personnel. We also found that agencies incorporate most of OMB s Guideline requirements for evaluation in their policies, but some did not include the requirement to conduct impact evaluation on all pilot programs. Without incorporating these Guideline requirements, agencies risk losing accountability over their funding and monitoring and evaluating activities. They also risk replicating programs without fully understanding their effectiveness. We also found that all of the agencies we reviewed have taken initial steps to implement their M&E policies. <3.1. Most Agencies Have Incorporated OMB s Guideline Requirements for Monitoring> Based on our review of agency monitoring policies, all the agencies except DOD incorporated relevant Guideline requirements. All six agencies we reviewed incorporated the requirement to establish monitoring policies that apply to their major foreign assistance programs. For example, State, USAID, and MCC have agency-wide policies for foreign assistance M&E. USDA and HHS have policies relevant to their major foreign assistance programs for USDA, the Foreign Agriculture Service s food aid programs, and for HHS, the President s Emergency Plan for AIDS Relief (PEPFAR). All of the agencies with relevant monitoring policies DOD, HHS, MCC, State, USAID, and USDA incorporate the requirement to develop, collect, analyze, and report data on performance indicators. 
These policies help ensure the measurement of project implementation and progress, and promote the timely analysis and reporting of results that could identify any needed corrections. DOD did not incorporate Guideline requirements to establish agencies roles and responsibilities and ensure verifiable data for monitoring activities. Establishing agencies roles and responsibilities when funds are transferred. DOD did not include the Guideline requirement for agencies to establish roles and responsibilities in funding transfers. Without defined roles and responsibilities, agencies risk losing accountability over funding and monitoring activities. In addition, agencies could miss opportunities to collaborate and leverage interagency efforts to facilitate decision-making and address barriers across agency boundaries. Ensuring verifiable, reliable, and timely data. DOD did not include the Guideline requirement for agencies to ensure they collect and provide verifiable, reliable, and timely data to monitoring personnel. Without ensuring that such data are available to monitoring personnel, agencies risk employing inappropriate methods, continuing ineffective programs or projects, and making uninformed decisions. DOD officials told us these practices are currently not required because they are still in the process of fully aligning their policy with the Guidelines. Officials explained that working on prioritizing and directing resources towards M&E efforts has been a challenge. Officials noted they expect to update the policy to include these requirements in the future, but they have no specific timelines in place. <3.2. Agencies Incorporate Most but Not All of OMB s Guideline Requirements on Evaluation> The agencies we reviewed incorporated nearly all relevant Guideline requirements on evaluation. Three of the six agencies DOD, HHS and USDA did not include a requirement to conduct impact evaluations on all pilot programs or projects. Figure 4 shows our assessment of agencies evaluation policies against the Guidelines. All the agencies we reviewed have established project-specific evaluation plans. For example, HHS implements PEPFAR s evaluation plan which indicates specific requirements for describing the evaluation component, strategy, or intervention, the reason for the evaluation, the type of evaluation, the key evaluation questions, the data sources, the methods by question, and the dissemination and utilization plan. All the agencies we reviewed also had policies on distributing their evaluation reports internally and publicly reporting them. For example, State and USAID have a web-based, customized Evaluation Registry system that they jointly maintain for bureaus and independent offices to record and track planned, ongoing, and completed evaluations. Conduct impact evaluations for pilot programs or projects. DOD, HHS, and USDA did not include the Guideline requirement for agencies to conduct impact evaluations for pilot programs or to conduct only a performance evaluation and to provide a justification for not conducting an impact evaluation. Without a requirement to conduct impact evaluations of pilot programs, agencies risk duplicating or scaling up programs without fully understanding the factors that could lead to their success or failure. DOD. DOD officials told us they do not require this practice because they are still in the process of fully aligning their policy with the Guidelines. 
According to DOD, it has determined that impact evaluations are impractical and inappropriate for the planned evaluations; instead, it plans to conduct only performance evaluations and provide justifications for not conducting impact evaluations, as required by the OMB Guidelines. DOD plans to address the evaluation methodology of pilot programs in future updates, according to officials. However, DOD has no specific timelines in place for these updates. HHS. PEPFAR's M&E documents indicate that PEPFAR teams are encouraged but not required to evaluate all current pilot programs to see which should be taken to scale for specific populations. Officials from HHS and the Office of the U.S. Global AIDS Coordinator noted that they conduct their own evaluation of pilot programs and use routine program data to inform scaling of programs. However, PEPFAR policies do not specifically require that such evaluations be like the impact evaluations described in the Guidelines. USDA. FAS's M&E documents indicate that when selecting projects to undergo impact evaluation the agency will consider pilot projects. USDA officials told us they have no requirement to conduct impact evaluations on all pilot projects because impact evaluations may be cost prohibitive and project lifecycles are short (i.e., 3 to 5 years). Officials further noted that implementing partners can conduct an impact evaluation on pilot programs, but are not required to do so. Although the Guideline requirement indicates that agencies can forgo impact evaluations, they must provide a justification in their M&E policy. USDA officials have not provided such a justification in their M&E policy. Establishing agencies' roles and responsibilities for evaluation activities when funds are transferred. DOD did not include the Guideline requirement for agencies to define roles and responsibilities when there are funding transfers between or among U.S. government agencies to ensure accountability for evaluation activities. Without defined roles and responsibilities, agencies risk losing accountability over funding and evaluation activities. In addition, they could miss opportunities to collaborate and leverage interagency efforts to facilitate decision-making and address barriers across agency boundaries. Evaluating all programs at least once whose dollar value equals or exceeds that of a median-sized program within the agency. DOD did not include the Guideline requirement for agencies to evaluate all programs, at least once during their existence, whose dollar value equals or exceeds that of a median-sized program in the agency. Without a mechanism to evaluate all these types of programs, agencies risk continuing inefficient, mismanaged, or costly projects. DOD officials told us they do not currently require these practices because they are still in the process of fully aligning their policy with the Guidelines. They noted that they expect to update the policy to include these requirements, but they have no specific timelines in place. <3.3. Agencies Have Taken Initial Steps to Implement Their M&E Policies> Since the six agencies we reviewed recently updated their M&E policies to align with the OMB Guidelines, many existing assistance projects and programs may not be governed by these requirements. Nonetheless, the agencies we reviewed have taken initial steps to help ensure implementation of agency M&E policies. In interviews, agencies provided us with the following examples of such steps. State.
State developed a guidance document and tool-kit to operationalize and oversee its M&E policy to ensure it implements the Guidelines. According to State officials, they provide classroom training on the M&E policy and are piloting a revised online and classroom evaluation courses for staff. Officials also noted that they have dedicated staff to assist bureaus in implementing the Guidelines, among other agency policies. USAID. USAID has an approval process to ensure key deliverables include Activity plans that meet Guideline requirements. Additionally, USAID s policy requirements indicate that each mission program office must identify a point of contact for monitoring and evaluation to ensure that USAID and its partners are complying with the agencies policies and foreign assistance M&E guidelines. MCC. MCC also has an approval process through their Department of Policy and Evaluation to ensure implementation of the Guidelines. As part of the process, the MCC Board of Directors or the appropriate partner country must approve initial M&E plans. HHS. Within HHS, the Centers for Disease Control and Prevention (CDC) are responsible for implementing the monitoring and evaluation guidance for their PEPFAR programs. CDC officials told us that they have existing mechanisms and supervisory structures in place to ensure that the Guidelines requirements are met in PEPFAR programs. USDA. USDA officials told us that the current M&E policy applies only to food assistance programs within FAS and not for other USDA programs. Officials explained they are trying to develop a structure that allows FAS to ensure all USDA components are implementing the OMB Guidelines. DOD. DOD developed guidance for fiscal year 2020 on implementing its M&E policy. DOD officials we spoke to noted they are working on identifying resources, skills, and capabilities to fully implement DOD s M&E policy. <4. Conclusions> OMB s Guidelines set forth key principles to guide agencies and to specify requirements, where appropriate, which they must cover in their own policies on M&E of foreign assistance. However, they do not include key leading practices for M&E that GAO identified for ensuring agencies meet their foreign assistance goals and objectives. While OMB allows agencies discretion to include these or other best practices, it is unknown if the agencies will do so. By ensuring that OMB s government-wide Guidelines include these best practices, agencies can help address impediments, effectively manage foreign assistance, and meet their goals. Although all agencies we reviewed developed or updated their M&E policies to align with the Guidelines, not all of them include important requirements. DOD, HHS, and USDA did not include the requirement for agencies to conduct impact evaluations for pilot programs or to conduct performance evaluations and provide a justification for not doing an impact evaluation. Without a requirement to conduct impact evaluations of pilot programs, agencies risk duplicating or scaling up programs without fully understanding the causes that could lead to their success or failure. <5. Recommendations for Executive Action> We are making the following seven recommendations, including one to OMB, four to DOD, one to State, and one to USDA. 
The Director of the Office of Management of Budget should update the Guidelines to include GAO s leading practices of developing monitoring plans that are based on risks, ensuring that monitoring staff have appropriate qualifications, establishing procedures to close-out programs, developing staff skills regarding evaluations, and establishing mechanisms for following up on evaluation recommendations. (Recommendation 1) The Secretary of Defense should update the Department s monitoring and evaluation policies to define roles and responsibilities among agencies that participate in interagency funding transfers. (Recommendation 2) The Secretary of Defense should update the Department s monitoring and evaluation policies to ensure verifiable, reliable, and timely data are available to monitoring personnel. (Recommendation 3) The Secretary of Defense should update the Department s monitoring and evaluation policies to ensure that it evaluates all programs, at least once in their lifetimes, whose dollar value equals or exceeds that of the median program in the agency. (Recommendation 4) The Secretary of Defense should update the Department s monitoring and evaluation policies to require the agency to conduct impact evaluations on all pilot programs before replicating or expanding, or conduct performance evaluations for those programs and provide a justification for not conducting an impact evaluation. (Recommendation 5) The Department of State s U.S. Global AIDS Coordinator, in collaboration with HHS and other implementing agencies, should update the PEPFAR monitoring and evaluation policies to require these agencies to conduct impact evaluations on all pilot programs before replicating or expanding, or conduct performance evaluations for those programs and provide a justification for not conducting an impact evaluation. (Recommendation 6) The Secretary of Agriculture, in collaboration with the Foreign Agriculture Service, should update their monitoring and evaluation policies to require USDA to conduct impact evaluations on all pilot programs before replicating or expanding, or conduct performance evaluations for those programs and provide a justification for not conducting an impact evaluation. (Recommendation 7) <6. Agency Comments and Our Evaluation> We provided a draft of this product to the DOD, HHS, MCC, OMB, State, USDA, and USAID for comment. OMB commented on the draft report in an email from the staff responsible for economic policy, federal financial management, and international affairs. In the email, OMB disagreed with the recommendation to revise the Guidelines. It emphasized that an interagency group had developed the Guidelines and had consulted a number of expert sources on monitoring and evaluation policies and practices, including GAO s leading practices. OMB also developed the guidelines to achieve the objectives contained in the Foreign Aid Transparency and Accountability Act of 2016 within the context of other existing OMB guidance. OMB suggested that it would be more effective to remind agencies that, in addition to the Guidelines specified in M-18-04, they should follow all guidance OMB had issued affecting monitoring and evaluation activities. This guidance includes policies for closeout procedures in the Uniform Guidance, for the Enterprise Risk Management and Internal Control in A-123, and for the Foundations for Evidence- Based Policymaking Act on using evaluation information and monitoring and evaluation staff skills and qualifications. 
We acknowledge that relevant monitoring and evaluation guidance is available to agencies in other forms beyond the Guidelines. However, we believe it is important for OMB to incorporate this guidance into its Guidelines, if only by reference, to emphasize the importance of these practices in the context of monitoring and evaluation of foreign assistance. This step would help ensure that OMB had integrated this guidance into the management of foreign assistance programs as appropriate. DOD concurred with our recommendations and indicated that it would address many of them in the next iteration of its M&E policy for security assistance (see appendix IV for written comments). DOD noted that two of our recommendations had limited applicability to DOD for security assistance, but described how it would implement them. First, DOD stated that it has not used its authority to transfer funds for security cooperation assistance to other departments and agencies. However, DOD indicated it would implement our recommendation to define roles and responsibilities among agencies that participate in interagency funding transfers, should such transfers become necessary. Second, DOD stated that conducting impact evaluations was not feasible in the context of security assistance. Instead, DOD plans to conduct only performance evaluations, but it would provide justifications for not conducting impact evaluations, as required by the Guidelines. By documenting these approaches in its M&E policies, DOD would help ensure that those departments conducting M&E for DOD security assistance initiatives implement them as required. State agreed with the intent of the recommendation (see appendix V for written comments). State explained that impact evaluations are often not feasible in the context of assistance provided under PEPFAR and described its alternative approach to evaluating new initiatives. State indicated it would update appropriate PEPFAR policies to clarify when agencies should conduct impact and/or performance evaluations. These clarifications will reflect how State evaluates PEPFAR programs in practice in accordance with OMB guidance and legislation, according to State. USAID provided written comments (see appendix VI). HHS and USDA provided technical comments, which we incorporated as appropriate. MCC did not provide comments. We are sending copies of this report to the appropriate congressional committees; the Director of the Office of Management and Budget; the Secretaries of Agriculture, Defense, Health and Human Services, and State; the Administrator of the U.S. Agency for International Development; the Executive Officer of the Millennium Challenge Corporation; and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149 or GootnickD@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. Appendix I: Objectives, Scope, and Methodology This report examines the extent to which (1) the Office of Management and Budget's (OMB) monitoring and evaluation (M&E) Guidelines incorporate GAO leading practices and (2) agencies incorporate the OMB Guidelines in their M&E policies and plans.
To address objective one, we examined the OMB Guidelines against GAO's 28 leading practices (14 for monitoring and 14 for evaluation) identified in GAO-16-861R. GAO developed the 28 leading practices in 2016 and, in 2019, provided specific definitions for each of the practices noted. We made slight modifications to the language to align with the definitions provided. For monitoring, we developed this list of leading practices based on our review of the GPRA Modernization Act of 2010; OMB's Uniform Administrative Requirements, Cost Principles, and Audit Requirements for Federal Awards; GAO's Standards for Internal Control in the Federal Government (Green Book); and others. The list of leading practices for monitoring includes developing monitoring plans; collecting, reviewing, and analyzing monitoring data; and establishing roles and responsibilities of personnel responsible for monitoring. For evaluation, we developed a list of leading practices based on the American Evaluation Association's (AEA) 2016 An Evaluation Roadmap for a More Effective Government (AEA Roadmap) and Preface to Evaluators' Ethical Guiding Principles. The list of leading practices for evaluation includes developing evaluation plans; ensuring evaluator independence; developing staff skills regarding evaluation and use of evidence; and establishing roles and responsibilities of personnel responsible for evaluation. To perform these analyses, two analysts assessed whether the Guidelines incorporated specific GAO leading practices. The analysts worked iteratively, comparing notes and reconciling differences at each stage of the analysis. In addition, GAO staff independent of the two analysts reviewed the final analysis and made modifications as appropriate. We also interviewed relevant OMB officials in Washington, D.C., involved in developing the memorandum and inquired about specific requirements and plans to ensure the implementation of these Guidelines. To address our second objective, we examined U.S. agency M&E policies against the requirements noted in the OMB Guidelines. We identified the six major agencies administering the most foreign assistance funds. The six agencies are the U.S. Agency for International Development (USAID), the Department of State (State), the Millennium Challenge Corporation (MCC), the Department of Health and Human Services (HHS), the U.S. Department of Agriculture (USDA), and the Department of Defense (DOD). We asked these agencies to identify or provide all relevant policies and guidance relating to foreign assistance M&E, including, where appropriate, standard operating procedures or other guidance. For USDA, we reviewed the Foreign Agricultural Service's food assistance programs; for HHS, the President's Emergency Plan for AIDS Relief; and for DOD, security cooperation programs. To perform these analyses, two analysts assessed agency M&E policy documents against the requirements in the OMB Guidelines. We identified requirements as phrases that included the following language: “required,” “must,” “mandatory,” or “should.” The analysts worked iteratively, comparing notes and reconciling differences at each stage of the analysis. In addition, other GAO staff independent of the two analysts reviewed the final analysis and made modifications as appropriate. We also interviewed relevant OMB staff and agency officials in Washington, D.C., involved in developing and implementing the M&E policies and inquired about specific requirements and plans to ensure their M&E policies are implemented.
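The keyword-based step for identifying requirements in policy documents can be illustrated with a short script. The following is a minimal sketch in Python of a first-pass screen that flags phrases containing requirement language for analyst review; the sample text and function name are hypothetical and are not drawn from the documents we assessed.

```python
import re

# Terms used to identify requirement language in policy text.
REQUIREMENT_TERMS = ("required", "must", "mandatory", "should")

def flag_requirement_phrases(policy_text):
    """Return phrases that contain requirement language, for analyst review."""
    # A simple split on sentence-ending punctuation; real policy documents would
    # need more careful parsing, so this serves only as a first-pass screen.
    phrases = re.split(r"(?<=[.;])\s+", policy_text)
    pattern = re.compile(r"\b(" + "|".join(REQUIREMENT_TERMS) + r")\b", re.IGNORECASE)
    return [p.strip() for p in phrases if pattern.search(p)]

# Hypothetical example text, not taken from any agency policy.
sample = ("Each program office must identify a monitoring point of contact. "
          "Impact evaluations should be conducted for pilot programs; "
          "other evaluations are encouraged.")
for phrase in flag_requirement_phrases(sample):
    print(phrase)
```

In practice, each flagged phrase would still be assessed by two analysts working iteratively, as described above; a script like this only narrows the text to be reviewed.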
We conducted this performance audit from July 2018 to July 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: The Office of Management and Budget Monitoring and Evaluation Guidelines (OMB Memorandum M-18-04) In January 2018, the Office of Management and Budget (OMB) released M-18-04, Monitoring and Evaluation Guidelines for Federal Departments and Agencies that Administer United States Foreign Assistance (the Guidelines), in response to the Foreign Aid Transparency and Accountability Act of 2016 (FATAA). Table 4 shows the complete description of the requirements noted in the Guidelines. Appendix III: Assessment of the Foreign Aid Transparency and Accountability Act of 2016 and the Office of Management and Budget Monitoring and Evaluation Guidelines The Foreign Aid Transparency and Accountability Act of 2016 (FATAA) sets forth required objectives on monitoring and evaluation for the Office of Management and Budget (OMB) to include in the Guidelines. We compared the 13 required objectives for the Guidelines set forth in the FATAA legislation with those in the OMB Guidelines. We found that all of the monitoring and evaluation requirements set forth in the legislation are included in the OMB Guidelines. Table 5 shows the FATAA legislation requirements, the OMB Guidelines, and our assessment of the alignment between the legislation and OMB's Guidelines. Appendix IV: Comments from the Department of Defense Appendix V: Comments from the Department of State Appendix VI: Comments from the United States Agency for International Development Appendix VII: GAO Contact and Staff Acknowledgments <7. GAO Contact> <8. Staff Acknowledgments> In addition to the individual named above, James B. Michels (Assistant Director), Farahnaaz Khakoo-Mausel (Analyst-in-Charge), Paulina Maqueda-Escamilla, Mark Dowling, Martin De Alteriis, Benjamin Licht, John Hussey, Neil Doherty, Aldo Salerno, Carolina Morgan, and Michael Simon made key contributions to this report. Related GAO Products Government Auditing Standards 2018 Revision (Supersedes GAO-12-331G). GAO-18-568G. Washington, D.C.: July 17, 2018. Foreign Assistance: Agencies Can Improve the Quality and Dissemination of Program Evaluations. GAO-17-316. Washington, D.C.: March 3, 2017. Foreign Assistance: Selected Agencies' Monitoring and Evaluation Policies Generally Address Leading Practices. GAO-16-861R. Washington, D.C.: September 27, 2016. Program Evaluation: Some Agencies Reported that Networking, Hiring, and Involving Program Staff Help Build Capacity. GAO-15-25. Washington, D.C.: November 13, 2014. Government Efficiency and Effectiveness: Inconsistent Definitions and Information Limit the Usefulness of Federal Program Inventories. GAO-15-83. Washington, D.C.: October 31, 2014. Standards for Internal Control in the Federal Government. GAO-14-704G. Washington, D.C.: September 10, 2014. State Department: Implementation of Grants Policies Needs Better Oversight. GAO-14-635. Washington, D.C.: July 21, 2014.
Program Evaluation: Strategies to Facilitate Agencies' Use of Evaluation in Program Management and Policy Making. GAO-13-570. Washington, D.C.: June 26, 2013. President's Emergency Plan for AIDS Relief: Agencies Can Enhance Evaluation Quality, Planning, and Dissemination. GAO-12-673. Washington, D.C.: May 31, 2012. Grants Management: Action Needed to Improve the Timeliness of Grant Closeouts by Federal Agencies. GAO-12-360. Washington, D.C.: April 16, 2012. Designing Evaluations: 2012 Revision. GAO-12-208G. Washington, D.C.: January 31, 2012. International School Feeding: USDA's Oversight of the McGovern-Dole Food for Education Program Needs Improvement. GAO-11-544. Washington, D.C.: May 19, 2011. Program Evaluation: Experienced Agencies Follow a Similar Model for Prioritizing Research. GAO-11-176. Washington, D.C.: January 14, 2011. Managing for Results: Enhancing Agency Use of Performance Information for Management Decision Making. GAO-05-927. Washington, D.C.: September 9, 2005. Program Evaluation: An Evaluation Culture and Collaborative Partnerships Help Build Agency Capacity. GAO-03-454. Washington, D.C.: May 2, 2003. Managing for Results: Federal Managers' Views Show Need for Ensuring Top Leadership Skills. GAO-01-127. Washington, D.C.: October 20, 2000. Performance Plans: Selected Approaches for Verification and Validation of Agency Performance Information. GAO/GGD-99-139. Washington, D.C.: July 30, 1999. Agency Performance Plans: Examples of Practices That Can Improve Usefulness to Decisionmakers. GAO/GGD/AIMD-99-69. Washington, D.C.: February 26, 1999. Executive Guide: Effectively Implementing the Government Performance and Results Act. GAO/GGD-96-118. Washington, D.C.: June 1, 1996. Why GAO Did This Study
The Trump Administration requested $28.5 billion in foreign assistance in fiscal year 2019, to be administered by at least 22 federal agencies. Almost 95 percent of this assistance is administered by six agencies—the Departments of Agriculture (USDA), Defense (DOD), State (State), Health and Human Services (HHS), the Millennium Challenge Corporation (MCC), and the U.S. Agency for International Development (USAID). FATAA required the President to set forth guidelines for M&E of U.S. foreign assistance. In January 2018, OMB issued the required guidelines for federal agencies. FATAA also contained a provision for GAO to analyze the guidelines established by OMB; and assess the implementation of the guidelines by the agencies.
In this report, GAO examined the extent to which (1) OMB's M&E Guidelines incorporate GAO leading practices, and (2) agencies incorporate the OMB Guidelines in their M&E policies and plans. GAO assessed the OMB Guidelines against GAO's 28 leading practices identified in GAO-16-861R. GAO also assessed the six agencies' foreign assistance M&E policies against the Guidelines and interviewed OMB and relevant agency officials in Washington, D.C.
What GAO Found
The Office of Management and Budget's (OMB) foreign assistance Guidelines incorporate most of GAO's leading practices for monitoring and evaluation (M&E), but gaps exist (see figure).
Monitoring: The Guidelines define monitoring as the continuous tracking of program or project data to determine whether desired results are as expected during implementation. The Guidelines do not require GAO's leading practices on risk assessments, staff qualifications, and program close-out procedures.
Evaluation: The Guidelines define evaluation as the systematic collection and analysis of program or project outcomes for making judgments and informing decisions. They do not require GAO's leading practices on developing staff skills and following up on recommendations.
OMB officials indicated the Guidelines are focused on elements required in the Foreign Aid Transparency and Accountability Act of 2016 (FATAA), but noted that agencies can include additional requirements in their own M&E policies. FATAA requires the President to set forth guidelines “according to best practices of monitoring and evaluation.” OMB staff acknowledged that GAO's leading practices are important, but stated that there is no single established standard for best monitoring practices. Nevertheless, all of GAO's leading practices can help agencies address impediments, effectively manage foreign assistance, and meet their goals.
When assessing agencies' M&E policies against the OMB Guidelines, GAO found that agencies incorporated most of the requirements. However, for monitoring, one of the six agencies GAO reviewed—DOD—did not include the requirements to establish agencies' roles and responsibilities and ensure verifiable data for monitoring activities. For evaluation, agencies' policies included most Guideline requirements, but not all. For example, DOD, HHS, and USDA did not require conducting impact evaluations for pilot programs or projects. Without a clear requirement to do such evaluations, agencies risk duplicating or scaling up programs without fully understanding the factors that could lead to their success or failure. Agencies GAO reviewed have plans or mechanisms in place to oversee the implementation of their M&E policies. For example, State developed a guidance document to operationalize and oversee its M&E policy to ensure the implementation of the Guidelines.
What GAO Recommends
GAO is making recommendations to OMB, DOD, State, and USDA. OMB did not agree with the recommendation to update the Guidelines, but GAO maintains that doing so can help to emphasize the importance of the M&E practices GAO identified. DOD, State, and USDA agreed with GAO's recommendations.
<1. Background> <1.1. Purposes and Scope of Import Alerts> According to FDA documents and officials, import alerts serve several purposes, including the following: Prevent products that appear to violate FFDCA from being distributed in the United States. Free up agency resources to examine other shipments by automatically detaining shipments on import alerts on a case-by-case basis without examining them. Place the responsibility on the importer to ensure that the products being imported into the United States comply with federal laws and FDA regulations. Import alerts may apply to (1) one or more products produced by all firms in a specific geographic area, (2) one or more products produced or shipped by a specific firm, or (3) a specific product because of concerns about the product regardless of what firm produces it or where. Import alerts covering a specific geographical area may apply to an area within a country, to one or more entire countries, or worldwide. For example, FDA established an import alert covering all firms processing shrimp in India because of the presence of filth, decomposition, and Salmonella. For import alerts that apply to geographical areas, all firms in the area that produce the products specified in an import alert are initially placed on that alert, and the specified products are subject to detention without physical examination. If a firm presents evidence establishing that the conditions that gave rise to the appearance of the violation associated with the alert have been resolved and the agency has confidence that future entries will comply with FFDCA, FDA indicates that the firm may be removed from the alert by placing it on a green list that FDA creates for the alert. For import alerts that apply to products that a specific firm produces, FDA individually determines (for example, through testing or examination) whether a firm and its products are potentially violative and may be identified for potential detention without physical examination. If so, FDA places them on a red list that it creates for the alert. Import alerts that apply to a specific product of concern generally have neither a red list nor a green list because such products cannot be removed from the alerts. Products detained via import alerts may be (1) refused entry, in which case they must be exported to another country or destroyed, or (2) allowed to enter U.S. commerce if they can be shown to not violate FFDCA or can be reconditioned to be brought into compliance with the act. <1.2. Federal Agency Roles in Overseeing Seafood Imports> DHS, through CBP, is charged with facilitating international trade at the ports of entry for seafood and other imports, while FDA examines or inspects certain seafood imports. CBP is responsible for, among other things, collecting the duties, taxes, and fees assessed on products, including seafood, and managing the import process. CBP collects import entry data through its Automated Commercial Environment/International Trade Data System. These entry data are submitted by a filer (typically, the product importer or a broker) and include a description of the product, manufacturer information, and the country of origin. Generally, FDA electronically receives notification from CBP of all entries of products under FDA jurisdiction at ports of entry through the CBP system described above, which links to FDA's OASIS.
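Before turning to how entries are screened, the alert structure described above (geographic alerts with green lists, firm-specific alerts with red lists, and product-specific alerts) can be summarized in a short sketch. The following Python example is an illustration only, assuming simplified alert records; it is not FDA's screening logic, and the field names and example data are hypothetical.

```python
def subject_to_detention(alert, firm, product, area):
    """Return True if the firm's product may be detained without physical examination."""
    if product not in alert["products"]:
        return False
    if alert["type"] == "geographic":
        # All firms in the covered area are initially on the alert unless FDA has
        # placed the firm on the alert's green list.
        return area in alert["areas"] and firm not in alert["green_list"]
    if alert["type"] == "firm_specific":
        # Only firms FDA has individually placed on the red list are covered.
        return firm in alert["red_list"]
    # Product-specific alerts cover the product regardless of firm or area.
    return True

# Hypothetical geographic alert patterned on the shrimp example in the text.
shrimp_alert = {
    "type": "geographic",
    "products": {"shrimp"},
    "areas": {"India"},
    "green_list": {"Firm B"},
    "red_list": set(),
}

print(subject_to_detention(shrimp_alert, "Firm A", "shrimp", "India"))  # True
print(subject_to_detention(shrimp_alert, "Firm B", "shrimp", "India"))  # False: on the green list
```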
Once entry information is received in OASIS, FDA uses its Predictive Risk-based Evaluation for Dynamic Import Compliance Targeting (PREDICT) screening tool to evaluate each entry line. PREDICT is a computerized tool designed to estimate the risk of imports using information such as the history of the importer or processing facility, inspection history, and country of origin. FDA staff use these risk estimates to target for examination shipments with high levels of risk. FDA cannot physically examine every shipment of such products, owing in part to the volume of imported products; we previously reported that the agency examines about 1 percent of entry lines annually. FDA uses PREDICT to screen all imported food shipment information filed electronically to determine which imports to physically examine at the border. PREDICT analyzes a variety of data by applying rules (conditional statements that tell PREDICT how to react when encountering particular information) to generate risk scores for imported food. The electronic screening process consists of two phases: Prior notice screening is intended to protect against potential terrorist acts and other public health emergencies. Prior notice screening requires that an importer, broker, or other entity submit information to FDA on food being imported or offered for import into the United States before that food arrives at the port of entry. FDA targets, screens, and reviews the information to ensure that the information meets the prior notice requirements and to determine whether the food potentially poses a terrorism threat or other significant health risk. Admissibility screening is intended to ensure that the food is admissible under FFDCA. As part of admissibility screening, FDA electronically screens entry lines using PREDICT to determine, among other things, whether the product on the entry line is on an import alert. If the product on an entry line is on an import alert, then the entry line may be detained without physical examination. If the product is not on an import alert, then the entry line goes through the typical admissibility screening process, through which FDA uses PREDICT to calculate a risk score and determine whether the entry line is identified for potential examination or sampling. <2. Under Its Import Alert Process for Seafood Products, FDA Detains Affected Products and Removes Firms and Products from Alerts When Violations Are Resolved> Our review of FDA's Regulatory Procedures Manual found that FDA's import alert process for seafood products includes three key components: (1) establishing new import alerts to respond to human health risks, (2) placing firms and products on new or existing import alerts (placement decisions), and (3) removing firms and products from existing import alerts when violations are resolved (removal decisions). <2.1. FDA Establishes New Seafood Import Alerts to Respond to Human Health Hazards> According to FDA's Regulatory Procedures Manual, FDA establishes new seafood import alerts to respond to human health hazards.
FDA officials may recommend new import alerts for a variety of reasons, including the following: FDA officials detain one or more products for a violation of FFDCA that poses a significant health hazard (e.g., the presence of Salmonella); FDA officials notice a large number of violations affecting firms or products from a specific country or area (e.g., the presence of filth in canned crabmeat from Thailand); FDA enforces regulatory requirements affecting importers that the agency decides could be implemented, in part, through the use of an import alert (e.g., HACCP requirements); or FDA addresses concerns about the safety of specific products, including puffer fish, which contain a deadly neurotoxin, or products produced in geographic areas with known contamination, such as those from areas surrounding Fukushima, Japan, which are at risk of radionuclide contamination. FDA officials in the field or at headquarters may recommend new import alerts. FDA s Division of Import Operations reviews the recommendations and decides whether to approve them (called the clearance process). After approval, according to FDA officials, FDA revises its screening process at the ports of entry via PREDICT to screen for products, firms, or countries on the new alert. According to FDA s import alert data, as of July 3, 2018, FDA had 52 active import alerts affecting imported seafood that addressed a wide range of seafood products and violations of FFDCA. The range of violations that these alerts address included the presence of foodborne pathogens, such as Salmonella and E. coli; the presence of unapproved animal drug residues, such as chloramphenicol and nitrofurans; the presence of pesticide chemical residues that are not allowed or do not meet tolerance levels, such as diuron; the presence of decomposition or insect, rodent, or other filth; the presence of illegal or undeclared colors, undeclared food additives, such as high fructose corn syrup, or undeclared food allergens, such as milk; the failure of the firm to meet HACCP requirements; and the failure of the firm to operate in conformity with current good manufacturing practices. According to FDA s import alert data, overall, from October 1, 2011, through July 3, 2018, the 52 import alerts for imported seafood affected a total of 3,765 unique firms in 111 countries. (See app. I for information describing these 52 alerts.) <2.2. FDA Places Certain Seafood Firms and Products on Existing Import Alerts and Detains Affected Products> According to FDA s Regulatory Procedures Manual, after an import alert has been established, FDA places certain seafood firms or products on the alert and may detain affected products at the port of entry to prevent them from entering U.S. commerce pending the importer of record s response. The manual specifies that FDA may place firms or products on a new or existing import alert for the following violations of FFDCA: (1) products are manufactured, processed, or packed under insanitary conditions; (2) products are forbidden or restricted for sale in the country in which they were produced or from which they were exported; or (3) products appear to be adulterated or misbranded based on information such as the product s history of violations, among other things. Examples of adulteration may include pathogens, such as Salmonella, and residues of drugs or pesticides above accepted levels. 
FDA s Regulatory Procedures Manual also specifies the following types of evidence that FDA generally may rely on to show that violative conditions exist: one violative sample from FDA s examination of the product, if the product may have adverse health consequences; information and historical data, such as a firm showing a pattern of exporting violative products, if evidence indicates the product could pose a health hazard; multiple violative samples, for violations (such as decomposition, filth, or labeling) that do not pose a significant public health hazard; and violations identified during inspections of importers or foreign processing facilities. According to FDA officials, about 90 percent of the recommendations to place firms or products on an import alert result from FDA analysis of imported seafood samples that identified product violations, such as drug residues above acceptable levels. Officials stated that the remaining 10 percent of the recommendations arise from FDA inspections of importers or processing facilities that identify firm violations, such as violations of FFDCA related to HACCP requirements. According to FDA s Regulatory Procedures Manual, once a firm or product has been placed on an import alert, future shipments may be detained without physical examination, and the importer of record must decide how to respond. The importer of record receives a notice stating that the associated entry line is being detained and subject to refusal. The importer of record may request that FDA immediately refuse entry of the product, in which case the product must either be exported or destroyed. Alternatively, the importer of record may (1) submit evidence showing that the product does not appear to be violative or (2) request to recondition the product for example, relabel the product or convert the product into a type of product FDA does not regulate. According to FDA s Regulatory Procedures Manual, FDA will hold a hearing to determine whether the detained product should be released. If FDA determines that the importer of record has provided sufficient information to overcome the appearance of a violation, the importer of record receives a notice stating that the product is released. If FDA determines that the importer of record s actions did not bring the product into compliance, the product would be refused and must be exported elsewhere or destroyed. <2.3. FDA May Remove Firms and Products from Existing Import Alerts When Violations Are Resolved> FDA may decide to remove a firm or product from an import alert if there is evidence that the conditions that led to placement on the alert have been resolved, according to FDA s Regulatory Procedures Manual. Our review of the manual and interviews with FDA officials indicate that FDA sampling and inspections are key activities that support the agency s removal decisions. Generally, firms petition FDA to remove one or more products or the firms themselves from seafood import alerts, and FDA s Division of Import Operations reviews the petitions. FDA s procedures specify the evidence that firms are to submit, which varies depending on the nature of the import alert and the violation of FFDCA. FDA may require one or a combination of the following: a minimum of five consecutive nonviolative commercial shipments as determined by a private laboratory hired by the firm, an on-site inspection of the importer or foreign processing facility, or documentation showing that the cause of the violation has been fully corrected. 
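One of the evidence options above, the minimum of five consecutive nonviolative commercial shipments, can be expressed as a simple streak check over a firm's recent entry history. The following is a minimal sketch in Python under the assumption that each entry carries a private laboratory result recorded as violative or nonviolative; the data layout is hypothetical, and this is not FDA's review logic.

```python
from datetime import date

def meets_consecutive_criterion(history, required=5):
    """Return True if the most recent `required` shipments were all nonviolative."""
    ordered = sorted(history, key=lambda rec: rec[0])   # sort by entry date
    recent = ordered[-required:]
    return len(recent) == required and all(not violative for _, violative in recent)

# Hypothetical entry history for one firm: (entry date, private lab found a violation?)
entries = [
    (date(2018, 1, 5), True),
    (date(2018, 2, 9), False),
    (date(2018, 3, 1), False),
    (date(2018, 4, 2), False),
    (date(2018, 5, 6), False),
    (date(2018, 6, 3), False),
]

print(meets_consecutive_criterion(entries))  # True: the last five shipments were nonviolative
```

A check like this addresses only one evidence option; an on-site inspection or corrective-action documentation, where required, would still be reviewed separately.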
For example, according to FDA's procedures, firms or products placed on an import alert based on a violative facility inspection may generally be removed from the alert following a reinspection that shows that corrective actions to resolve the violation have been taken. Private laboratories usually collect and analyze the samples used as evidence to indicate that a commercial shipment does not violate FFDCA and provide support for FDA's decisions to remove firms and products from import alerts. The procedures also call for the agency to have confidence that future shipments will comply with FFDCA, but they do not specify how FDA should ensure continued compliance. According to FDA officials, when the agency relies on documentation to support a removal decision, FDA generally relies on subsequent inspections of the importers or foreign processing facilities and sampling of their products to have confidence that the firms and their products continue to comply. FDA's Regulatory Procedures Manual, as supplemented by the ORA Laboratory Manual, specifies that the agency should conduct checks to review whether the work performed by such laboratories can be used as an appropriate basis for FDA's removal decisions. These checks include the following: Audit samples. FDA's manuals specify the following two audit goals to ensure that the private laboratories' analyses that FDA uses to support its removal decisions are valid: (1) to audit samples from at least one of the five nonviolative entries, as determined by a private laboratory hired by the firm, that support a removal decision, to ensure the validity of the laboratory's analysis, and (2) to audit at least 10 percent of the work that a private laboratory performed, to ensure that the laboratory submits scientifically sound data. In the course of its audits, FDA is to collect samples, called audit samples, to verify a private laboratory's analytical results demonstrating that a product complies with FFDCA. According to FDA, private laboratory analyses are a critical element in public health protection because they support FDA decisions to release detained goods. FDA's collection of audit samples is intended to provide confidence in the laboratories' analytical results. On-site assessments. FDA's ORA Laboratory Manual states that, at times, FDA visits a private laboratory to ascertain that it has the capability or capacity to perform analyses that FDA often relies on to support removal decisions. The manual also states that on-site assessments provide the opportunity to observe that equipment and standards, among other things, needed to conduct the proposed analyses are present and in good order; to review the adequacy of the laboratory's quality assurance and record-keeping programs; and to observe the techniques and practices of the analysts. Furthermore, the manual states that the on-site assessments are voluntary and that a private laboratory may decline to participate. <3. FDA's Oversight of Key Activities to Support Import Alert Removal Decisions Is Limited> FDA has established audit goals, requirements, and expectations related to sampling and inspections, key activities to support import alert removal decisions, but does not monitor the extent to which it is meeting them.
In our review of FDA's CMS data for 274 removal decisions from a nongeneralizable selection of seven import alerts from October 1, 2011, through July 3, 2018, we found that FDA conducted audit sampling and inspections to support removal decisions, and subsequent sampling and inspections to ensure continued compliance, for a small percentage of the decisions. Specifically: Audit samples prior to removal decisions. For almost all of the 274 removal decisions we reviewed, FDA did not meet its first audit goal: to audit samples from at least one of the nonviolative shipments used to support a removal decision, to ensure the validity of the analysis performed by the private laboratory hired by the firm. All seven of the import alerts we reviewed were established for violations of FFDCA for which FDA's Regulatory Procedures Manual specifies that firms should enter into U.S. commerce at least five consecutive nonviolative commercial shipments, as determined by a private laboratory hired by the firm, before FDA may consider a removal. Therefore, FDA should have audited samples from at least one nonviolative shipment for all 274 removal decisions related to these seven import alerts. As described earlier, FDA collects audit samples from shipments of imported seafood to conduct such audits. However, we found that FDA did not conduct any sampling, including audit sampling, within 1 year prior to removal for 260 (or 95 percent) of the 274 removal decisions we reviewed. FDA officials told us that they do not monitor the extent to which the agency is meeting its audit goal, such as through analyzing CMS sampling data across all firms and products affected by the alerts, and therefore were not aware that the agency had not met the audit goal. Conversely, FDA officials told us that they were aware that the agency historically had not met its second audit goal specified in its procedures: to audit at least 10 percent of each private laboratory's work supporting removal decisions, to ensure that each laboratory submits scientifically sound data. While FDA does not regularly monitor whether it is meeting its 10 percent audit goal, in 2014, the agency analyzed data on the audit samples it collected during its audits of shipments covering fiscal years 2003 through 2013. FDA conducted this analysis in response to concerns that district staff raised about the quality of the analyses performed by private laboratories for one of its districts. These concerns included the following: Failure to obtain representative samples from throughout a shipment. Failure to obtain samples randomly from throughout the shipment. Failure to ensure an unbroken chain of custody from the site of collection of a sample to the private laboratory as necessary to ensure the integrity of the sample. Use of untrained temporary employees to collect samples and representing these individuals as employees of the private laboratory. FDA's 2014 analysis showed that the agency did not achieve its 10 percent audit goal during the 11-year period. According to the analysis, FDA audited about 1 to 2 percent of work performed by private laboratories to support removal decisions. In response to our request, FDA updated its analysis through fiscal year 2018. The updated analysis shows that this percentage has improved in recent years, with FDA auditing about 3 percent of the work that private laboratories performed for fiscal year 2018. However, this level of auditing remains far below the goal of at least 10 percent, as shown in figure 1.
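A check against the 10 percent audit goal could be run routinely rather than only on request. The following is a minimal sketch in Python, assuming summary records that pair each private laboratory with counts of analyses supporting removal decisions and FDA audit samples collected; the laboratory names and counts are hypothetical and are not drawn from FDA's data.

```python
AUDIT_GOAL = 0.10  # FDA goal: audit at least 10 percent of each private lab's work

# Hypothetical counts drawn from sampling records for one fiscal year.
lab_records = {
    "Lab A": {"analyses_supporting_removals": 200, "fda_audit_samples": 6},
    "Lab B": {"analyses_supporting_removals": 50,  "fda_audit_samples": 7},
}

def audit_shortfalls(records, goal=AUDIT_GOAL):
    """Return labs whose FDA audit coverage falls below the goal."""
    shortfalls = {}
    for lab, counts in records.items():
        total = counts["analyses_supporting_removals"]
        if total == 0:
            continue
        rate = counts["fda_audit_samples"] / total
        if rate < goal:
            shortfalls[lab] = round(rate, 3)
    return shortfalls

print(audit_shortfalls(lab_records))  # {'Lab A': 0.03}
```

Run annually against actual sampling records, a report like this would show whether the agency is on track to meet its goal rather than revealing shortfalls years after the fact.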
According to FDA officials, the agency has not met this audit goal largely because it has limited resources. Inspections prior to removal decisions. For the 274 removal decisions we reviewed, FDA conducted inspections of importers or foreign processing facilities for 28 (about 10 percent) of the removal decisions in the 6 months prior. According to FDA s procedures, firms or products placed on an import alert based on a violative facility inspection may generally be removed from the alert following a reinspection of the importer or foreign processing facility. In some instances, a firm may present information or documentation sufficient to demonstrate that appropriate corrections are in place to overcome the appearance of a violation and, with appropriate concurrence, may be removed from the import alert. FDA officials added that, regardless of the basis for placement on an import alert, FDA could require an on-site inspection prior to removal, depending on the hazard the violation posed. For example, certain violations may result in a finding of official action indicated (OAI), which indicates that an establishment failed to meet regulatory or administrative requirements and may pose a hazard to public health. FDA s Field Management Directive 86 establishes a goal for FDA staff to conduct a follow-up inspection within 6 months after an OAI finding to verify that the facility has corrected violations. In our review of the 274 removal decisions, we found that for 32 firms that received an OAI inspection finding after FDA issued the directive in December 2011, FDA did not conduct a follow-up inspection for 31 of these firms before removing them from an import alert. According to FDA officials, the agency did not monitor whether its staff decided that inspections would be expected for the 274 removal decisions or whether the facilities that received an OAI inspection finding were reinspected. FDA officials told us that the agency relied on reviewing data on removal decisions individually to ensure that expected inspections had been conducted. Consequently, FDA was not aware of the extent to which the facilities associated with the removal decisions were actually inspected. Sampling or inspections following removal decisions. As shown in figure 2, for the 274 removal decisions we reviewed, FDA subsequently conducted sampling for 6 percent of the products at ports of entry and inspections for 13 percent of the importers or foreign processing facilities within 1 year after removal. FDA does not have a goal for the amount of sampling or inspections that should be conducted following removal decisions; however, as described above, FDA s procedures call for the agency to base removal decisions on evidence establishing that the conditions that gave rise to the appearance of a violation have been resolved and that the agency has confidence that future shipments will comply with FFDCA. FDA officials said that when the agency does not inspect a facility and relies on documentation describing the actions the firm has taken to address the appearance of a violation to support a removal decision, the agency relies on subsequent sampling and inspections to have confidence in continued compliance. According to FDA, the past violative history of a firm is reflected in the PREDICT screening rules for the examination of future shipments and in the process of prioritizing inspections of foreign facilities. 
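Checks like those GAO performed for the 274 decisions (whether an audit sample preceded removal, whether an OAI finding received a follow-up inspection, and whether any sampling or inspection occurred within a year after removal) could also be automated over removal records. The following is a minimal sketch in Python; the field names, the 6-month and 1-year windows as coded, and the example record are illustrative assumptions, not FDA data structures from CMS, FACTS, or OASIS.

```python
from datetime import date, timedelta

def removal_monitoring_flags(record):
    """Flag expected supporting activities that are missing for one removal decision."""
    flags = []
    removal = record["removal_date"]

    # Expectation: an audit sample from at least one supporting shipment
    # within the year before removal.
    if not any(removal - timedelta(days=365) <= d <= removal
               for d in record["audit_sample_dates"]):
        flags.append("no audit sample in year before removal")

    # Goal in Field Management Directive 86: follow-up inspection within
    # 6 months (about 180 days) of an OAI finding.
    oai = record.get("oai_finding_date")
    if oai and not any(oai < d <= oai + timedelta(days=180)
                       for d in record["inspection_dates"]):
        flags.append("no follow-up inspection within 6 months of OAI finding")

    # Expectation: some sampling or inspection within the year after removal.
    if not any(removal < d <= removal + timedelta(days=365)
               for d in record["audit_sample_dates"] + record["inspection_dates"]):
        flags.append("no sampling or inspection in year after removal")
    return flags

# Hypothetical removal record.
example = {
    "removal_date": date(2017, 6, 1),
    "oai_finding_date": date(2016, 11, 15),
    "audit_sample_dates": [],
    "inspection_dates": [date(2016, 11, 10)],
}
print(removal_monitoring_flags(example))
```

Flags produced this way would not replace case-by-case judgment about which activities a given removal requires, but they would give managers a running view of whether expected sampling and inspections are occurring.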
It was unclear from the CMS data that FDA provided to what extent the agency relied on documentation to support the remainder of its removal decisions. However, based on FDA officials' statements about subsequent sampling or inspections, we would expect to see a larger percentage of products sampled and firms inspected after their removal from import alerts for FDA to have confidence in continued compliance, given the low percentage of inspections we found before removal decisions. FDA officials said they were not monitoring whether staff decided that subsequent sampling and inspections would be expected for these removals, and staff do not continuously monitor post-removal activities. Consequently, FDA officials were not aware of the extent to which the products and foreign processing facilities associated with removal decisions were subsequently sampled and inspected. FDA officials told us that they were generally aware that FDA had conducted limited sampling and inspections to support removal decisions and to have confidence in continued compliance. They attributed this limited sampling and inspection activity to their belief that many import alert removal decisions can be supported by reviewing documentary evidence, requested by FDA and provided by the firms, that describes the actions the firms have taken to address the appearance of a violation. According to FDA officials, such reliance on firm-provided documentation to support removal decisions is, in part, how FDA prioritizes its use of limited laboratory and inspection resources. FDA officials stated that the agency can check the basis of its removal decisions by looking up individual import alert cases in CMS and the agency's sampling and inspection data in FACTS and OASIS to determine whether the agency would conclude that sampling and inspections to support these decisions would be appropriate, and if so, whether they were done. These officials said that they believed that checking the data on the basis of removal decisions individually and when questions arise from sources internal or external to FDA, instead of regularly analyzing sampling and inspection data, was sufficient to ensure the appropriate level of oversight. However, as discussed above, this approach has not informed them of the extent to which the agency is meeting its audit goals and expectations. Standards for Internal Control in the Federal Government states that management should design control activities to achieve objectives and respond to risks. An example of such a control activity is management comparing actual performance with planned or expected results. Such a comparison could include FDA comparing the audits conducted with its audit goal (e.g., auditing at least 10 percent of a private laboratory's work) to ensure that its goal was met. Monitoring the extent to which the agency is meeting its audit goals and expectations for conducting sampling and inspections to support its import alert decisions would enhance its oversight of these activities to better protect U.S. consumers from imported seafood that is not safe and wholesome. <4. FDA Generally Has Not Coordinated with DHS to Help Ensure Firms Comply with Seafood Import Alerts> FDA and DHS have established a mechanism for coordinating the use of certain resources, but they generally have not coordinated to help ensure that firms comply with seafood import alerts by identifying potential instances of evasion of alerts, according to agency officials.
FDA officials stated that the agency can coordinate with CBP in situations that could involve evasion of import alerts, but the agency does not have a formal mechanism for regularly and proactively coordinating to identify evasion. FDA officials said that such coordination could include CBP sharing information that could help FDA identify instances of evasion. As previously noted, CBP is responsible for collecting customs duties on imports, including seafood, and seeks to prevent the evasion of customs duties. As we reported in 2012, CBP personnel are to analyze trends in import data, among other things, to look for anomalies that may indicate evasion and also follow up on allegations from external sources. Once CBP identifies a potential instance of evasion, it can use a variety of techniques at different points in the import process to determine whether evasion is actually occurring. These techniques include collecting samples from shipments of products at U.S. ports of entry and conducting laboratory analyses of these samples to identify their true country of origin. Through its efforts, CBP has identified illegal transshipments, a scheme to conceal the country of origin and thereby evade applicable duties or FDA's import alerts. For example, CBP reported that in 2016, customs officers seized about 42 tons of Chinese honey that had been transshipped through Taiwan to evade U.S. duties applicable to Chinese honey. According to FDA documents, at the same time, FDA had an import alert for honey because of unsafe drug residues. This alert included Chinese firms, but did not include any firms from Taiwan. In February 2009, we reported on CBP's expertise in detecting illegal transshipment that could enhance FDA's ability to detect import alert evasion. We stated that FDA and CBP could work together to help ensure that importers were not attempting to evade duties or import alerts. However, we found that the agencies had not identified ways to maximize and leverage their resources or established processes and policies for working together systematically across agency lines. We recommended, among other things, that FDA and CBP develop mechanisms to share information related to the evasion of import alerts. FDA and CBP agreed with our recommendation, but as of July 2019, the agencies had not fully implemented it. Specifically, FDA and CBP signed a memorandum of understanding (MOU), effective May 2013, to set forth terms for CBP to coordinate with FDA on staffing, space, and equipment requirements for the National Targeting Center. However, the MOU does not address CBP sharing information on potential evasion of import alerts with FDA regularly or the agencies working proactively to identify such evasion. According to CBP officials, FDA and CBP do not coordinate specifically on targeting to detect evasion, but CBP would be willing to coordinate with FDA and provide any applicable expertise in this area. While a collaborative mechanism such as an MOU is not needed to share information, we continue to believe that FDA and CBP should develop a mechanism to help the agencies formally coordinate to identify potential evasion of seafood import alerts. Until these agencies develop such a mechanism, they may be missing opportunities to share information regularly that could benefit each agency's efforts to detect illegal transshipment and help FDA proactively identify and prevent evasion of seafood import alerts. <5.
FDA Has Not Assessed the Effectiveness of Its Seafood Import Alerts in Achieving Its Food Safety Mission> FDA has not assessed the effectiveness of its seafood import alerts in helping to achieve its food safety mission. Specifically, FDA has not established performance goals and measures for seafood import alerts, key elements of assessing the effectiveness of programs. Performance goals explain the purpose of agency programs and the results, including outcomes, that they intend to achieve. Performance measures provide organizations with the ability to track the progress they are making toward their mission and goals and provide managers with information on which to base their organizational and management decisions. Under GPRAMA, agencies are required to develop long-term strategic plans and establish results-oriented goals in alignment with their missions and identify objectives and strategies needed to achieve those goals. GPRAMA also requires agencies to use performance information to assess their progress toward achieving their goals. According to FDA officials, the agency is implementing a program, which it refers to as an import alert effectiveness program, to review its import alerts. FDA documents note that the focus of this program includes (1) determining whether FDA identified the firms on import alerts during its admissibility screening and took the appropriate action, (2) ensuring the accuracy of data FDA maintains in CMS on firms on import alerts, and (3) determining whether the reasons for the alerts are still relevant and ensuring that the import alerts are accurately posted for clear communication to industry and FDA field staff. We commend FDA for these efforts. However, according to our review of FDA documents describing the activities planned for this program, the program does not include performance goals and measures for import alerts. FDA officials stated that this is because the program is new. Additionally, in February 2019, FDA published a broad plan for the safety of imported food that includes a goal, objective, and strategy related to import alerts. Under its goal to detect and refuse entry of unsafe foods at the border, FDA has an objective to strategically use import alerts and import certifications by using data and information from oversight activities, regulatory cooperation, and other reliable sources to enhance the effectiveness and efficiency of import alerts. However, FDA's strategy for achieving this objective does not include performance goals or measures that would allow the agency to assess the effectiveness of its seafood import alerts in helping to achieve FDA's food safety mission. In its 2019 plan for the safety of imported food, FDA states that it intends to develop performance goals and measures for imported food safety. However, FDA has not established a time frame for doing so. Once FDA has developed goals and measures for imported food safety, FDA would be able to establish corresponding performance goals and measures specific to seafood import alerts. By developing such goals and measures, FDA would be better positioned to assess how well its seafood import alert activities are supporting the agency in achieving its food safety mission. <6. Conclusions> Import alerts play an important role in keeping the U.S. food supply as well as other FDA-regulated products safe, and FDA has numerous active import alerts affecting imported seafood that address a wide range of seafood products and violations of FFDCA.
However, FDA does not have a process to monitor the extent to which it is conducting key activities to support its removal decisions: sampling and inspections. Establishing such a process would provide greater assurance that FDA is conducting its expected level of sampling and inspections to support its removal decisions and has confidence in continued compliance. Additionally, FDA and CBP have yet to develop mechanisms to share information regularly and proactively that can help detect noncompliance with import alerts through evasion. We continue to believe that doing so, as we previously recommended, would enhance the agencies' efforts to identify potential evasion of seafood import alerts. Further, by establishing a time frame for developing goals and measures for assessing the effectiveness of its imported food safety efforts and also developing such goals and measures specific to seafood import alerts, FDA would be better positioned to assess how well its import alert activities are supporting the agency in achieving its food safety mission. <7. Recommendations for Executive Action> We are making the following three recommendations to FDA: The Commissioner of FDA should establish a process to monitor whether the agency is meeting its audit goals and expectations for sampling and inspections to support its removal decisions for seafood import alerts. This could be done through regularly analyzing data that FDA collects, such as those in CMS, FACTS, and OASIS. (Recommendation 1) The Commissioner of FDA should establish a time frame for developing performance goals and measures for its imported food safety program. (Recommendation 2) The Commissioner of FDA should, as the agency develops goals and measures for its imported food safety program, develop performance goals and corresponding performance measures specific to seafood import alerts. (Recommendation 3) <8. Agency Comments and Our Evaluation> We provided a draft of this report to HHS and DHS for comment. In its comments, reproduced in appendix II, HHS's FDA agreed with all three of our recommendations. FDA also provided technical comments, which we incorporated as appropriate. DHS provided technical comments, which we incorporated as appropriate. More specifically, FDA agreed with our recommendation that it establish a process to monitor whether the agency is meeting its audit goals and expectations for sampling and inspections to support its removal decisions for seafood import alerts. FDA stated that it agrees that developing metrics and monitoring the import alert removal process is necessary and that these efforts should be guided by the analysis of available data. FDA also stated that it plans to develop goals for its auditing process to ensure audit sampling targets products of higher public health concern and provides the agency support to guide decisions to release individual shipments that have been detained as a result of an import alert. FDA further stated that it intends to enhance its case management system to include checklists for FDA reviewers who process petitions for removal from import alerts to better document that all necessary information is present and has been evaluated to support the removal decision. FDA agreed with our recommendation that it should establish a time frame for developing performance goals and measures for its imported food safety program.
FDA stated that the agency is developing performance measures and outcome indicators for imported food safety to help support the agency's overall goal of reducing the incidence of illness and death attributable to preventable contamination of FDA-regulated foods. Finally, FDA agreed with our recommendation that it should, as it develops goals and measures for its imported food safety program, develop performance goals and corresponding performance measures specific to seafood import alerts. FDA stated that the agency will use the results of its import alert effectiveness program to develop metrics to demonstrate the effectiveness of the program and its use of import alerts. The extent to which FDA's planned actions will satisfy our recommendations will depend on how FDA implements those actions. As agreed with your offices, unless you publicly announce the contents earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Health and Human Services, the Secretary of Homeland Security, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions regarding this report, please contact me at (202) 512-3841 or morriss@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Food and Drug Administration Import Alerts Affecting Seafood Products Table 1 includes information posted on the Food and Drug Administration's website describing the 52 import alerts affecting seafood that were active as of July 3, 2018. Appendix II: Comments from the Department of Health and Human Services Appendix III: GAO Contact and Staff Acknowledgments <9. GAO Contact> <10. Staff Acknowledgments> In addition to the contact named above, Anne K. Johnson (Assistant Director), David Moreno (Analyst in Charge), Kevin Bray, Steven Campbell, Stephen Cleary, Michele Fejfar, Ellen Fried, Juan Garay, Caitlyn Leiter-Mason, Ying Long, Cynthia Norris, Dan Royer, and Kiki Theodoropoulos made key contributions to this report. Related GAO Products Food Safety and Nutrition: FDA Can Build on Existing Efforts to Measure Progress and Implement Key Activities. GAO-18-174. Washington, D.C.: January 31, 2018. Imported Seafood Safety: FDA and USDA Could Strengthen Efforts to Prevent Unsafe Drug Residues. GAO-17-443. Washington, D.C.: September 15, 2017. Seafood Safety: Status of Issues Related to Catfish Inspection. GAO-17-289T. Washington, D.C.: December 7, 2016. Imported Food Safety: FDA's Targeting Tool Has Enhanced Screening, but Further Improvements Are Possible. GAO-16-399. Washington, D.C.: May 26, 2016. 2015 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-15-404SP. Washington, D.C.: April 14, 2015. Food Safety: FDA Can Better Oversee Food Imports by Assessing and Leveraging Other Countries' Oversight Resources. GAO-12-933. Washington, D.C.: September 28, 2012. Managing for Results: Key Considerations for Implementing Interagency Collaborative Mechanisms. GAO-12-1022. Washington, D.C.: September 27, 2012. Seafood Safety: Responsibility for Inspecting Catfish Should Not Be Assigned to USDA. GAO-12-411. Washington, D.C.: May 10, 2012.
Seafood Safety: FDA Needs to Improve Oversight of Imported Seafood and Better Leverage Limited Resources. GAO-11-286. Washington, D.C.: April 14, 2011. Seafood Fraud: FDA Program Changes and Better Collaboration among Key Federal Agencies Could Improve Detection and Prevention. GAO-09-258. Washington, D.C.: February 19, 2009. Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: October 21, 2005. Why GAO Did This Study
Imports account for over 90 percent of U.S. seafood consumption. FDA and the Department of Homeland Security (DHS) both play a role in overseeing imported seafood. FDA is responsible for ensuring the safety of most imported seafood. DHS provides FDA with import data on FDA-regulated products, including seafood. If FDA finds that imported seafood products appear to violate U.S. laws, FDA may place the products, firms, or countries on an import alert.
GAO was asked to review FDA's efforts to use import alerts to ensure the safety of imported seafood. This report, among other things, (1) describes FDA's import alert process for seafood products, (2) examines FDA oversight of key activities to support import alert removal decisions, and (3) examines the extent to which FDA has assessed the effectiveness of its seafood import alerts. GAO reviewed FDA procedures and data, including data on 274 removal decisions, for a non-generalizable sample of seven import alerts selected for a range of violations of federal law. GAO also interviewed FDA officials.
What GAO Found
The Food and Drug Administration's (FDA) import alert process for seafood products includes three key components: (1) establishing new import alerts, which inform FDA field staff and the public that the agency has enough evidence that products appear to violate a federal food safety law to detain those products at U.S. ports of entry without physically examining them; (2) placing firms and products on existing import alerts; and (3) removing firms and products from those import alerts when violations are resolved. As of July 3, 2018—the most recent data at the time of GAO's analysis—FDA had 52 active import alerts affecting imported seafood that addressed a wide range of violations of federal law, including the presence of foodborne pathogens, such as Salmonella, or unapproved animal drug residues.
FDA has established audit goals, requirements, and expectations related to sampling and inspections—key activities to support import alert removal decisions—but does not monitor the extent to which it is meeting them. GAO's review of 274 removal decisions from October 1, 2011, through July 3, 2018, found that FDA had supported only a small percentage of its removal decisions by conducting sampling and inspections. For example, FDA has a goal to audit samples from at least one of the shipments used to support each removal decision to ensure the validity of the analysis that a private laboratory performed. However, GAO found that within a year prior to the 274 removal decisions, FDA did not conduct any audits for 260 (95 percent) of the 274 removal decisions. FDA officials said they conducted limited sampling because many import alert removal decisions can be supported by documentary evidence provided by firms. Additionally, for certain violations that indicate a firm failed to meet regulatory or administrative requirements and may pose a public health hazard, an FDA directive establishes a goal for FDA staff to conduct a follow-up inspection within 6 months. However, GAO's review of removal decisions found that for 31 of the 32 firms that received such a finding, FDA did not conduct a follow-up inspection before removing them from an import alert. FDA officials said they did not know whether they were meeting their audit goals because the agency does not have a process to monitor the extent to which it is conducting its sampling and inspections. Establishing such a process would provide greater assurance that FDA is conducting its expected level of sampling and inspections to support its removal decisions and has confidence in continued compliance.
FDA has not established performance goals and measures for seafood import alerts—key elements for assessing the effectiveness of programs. Goals explain the outcomes a program seeks to achieve, and measures track progress towards those goals. In February 2019, FDA published a broad plan for the safety of imported food. The plan states that FDA intends to develop performance goals and measures related to imported food safety, but FDA has not established a time frame for doing so. By establishing a time frame and developing such goals and measures, FDA would be better positioned to assess how well its seafood import alert activities are supporting the agency in achieving its food safety mission.
What GAO Recommends
GAO recommends that FDA (1) establish a process to monitor whether the agency is meeting its audit goals and expectations for sampling and inspections, (2) establish a time frame for developing goals and measures for its imported food safety program, and (3) develop goals and measures for seafood import alerts. FDA agreed with GAO's recommendations. |
<1. Background> <1.1. Coast Guard Shore Infrastructure> The Coast Guard owns or leases more than 20,000 shore facilities consisting of various types of buildings and structures. According to Coast Guard guidance, a building is generally defined as a fully enclosed structure that is affixed to the ground, in which personnel work or live, or where equipment is stored. A structure is generally defined as any other construction affixed to the ground that does not meet the definition of a building. The Coast Guard's shore infrastructure is organized into 13 asset types, known as asset lines. Table 1 provides information on 11 of these asset lines, including examples, numbers of assets, and their replacement value as of 2018. We reported in February 2019 that the Coast Guard faced recapitalization, new construction, and deferred maintenance backlogs for its shore infrastructure totaling at least $2.6 billion as of 2018 and that its backlogs increased by $300 million since fiscal year 2012. Moreover, according to the Coast Guard Civil Engineering program's 2018 annual report, about 46 percent of the Coast Guard's shore infrastructure was beyond its overall service life. In 2018, the Coast Guard rated its overall shore infrastructure condition as a C- based on criteria it derived from standards developed by the American Society of Civil Engineers. In addition, some asset lines, such as the industrial asset line, whose assets are generally mission-critical, were rated lower. Table 2 shows information about Coast Guard asset lines, including the rate at which the Coast Guard reported that these assets were functioning past their service life, and the condition grades assigned by the Coast Guard for fiscal year 2018. <1.2. Coast Guard Roles and Processes for Managing Shore Infrastructure> According to Coast Guard guidance, the Office of Civil Engineering and the Shore Infrastructure Logistics Center each play a role in managing the Coast Guard's infrastructure by assessing risks and helping to mitigate damage from natural disasters or other events. The Office of Civil Engineering is responsible for setting Coast Guard-wide civil engineering policy, which includes facility planning, design, construction, maintenance, and disposal. The Shore Infrastructure Logistics Center is to establish project priorities for the acquisition, programmed depot maintenance, major repair, and modification of shore facilities. This center is also responsible for implementing the Coast Guard's shore infrastructure policies. According to its guidance, the Coast Guard makes procurement, construction, and improvements funding decisions for its shore infrastructure through enterprise-level planning boards that meet twice a year. These planning boards are to prioritize Coast Guard shore infrastructure needs based on expected appropriations and other prioritization factors or considerations, such as damage caused by natural disasters or the Coast Guard's need to construct new shore infrastructure or recapitalize existing facilities. The boards are responsible for evaluating potential shore infrastructure projects that have been assessed, ranked, and recommended by Coast Guard managers of various asset lines. For example, aviation asset line managers may recommend to the planning boards aviation-related shore infrastructure projects, such as the recapitalization of runways, landing areas, and hangars. <1.3.
Climate Change Effects and Extreme Weather> According to the National Academies, climate change poses serious risks to many of the physical and ecological systems on which society depends, although the exact details cannot be predicted with certainty. Moreover, the effects and costs of extreme weather events, such as floods and droughts, are expected to increase in significance as they become more common and intense because of climate change. For example, the National Oceanic and Atmospheric Administration (NOAA) has reported that eight of the 10 costliest tropical cyclones in U.S. history occurred in recent years from 2005 to 2017. DOD documented seven effects commonly associated with climate change and their potential effects on its infrastructure and operations (see table 3). Although the Coast Guard operates on a smaller scale, it maintains many of the same types of infrastructure as DOD, and this infrastructure is also situated in coastal and riverine locations, and thus subject to the same potential effects from extreme weather events. For example, Coast Guard facilities along the East and Gulf coasts of the United States are vulnerable to hurricanes, which NOAA projects will increase in frequency and severity because of climate change and which may cause flooding or wind damage to Coast Guard infrastructure. Coast Guard infrastructure is also vulnerable to natural disasters that are not associated with climate change. For example, Coast Guard facilities situated on the West Coast and in Hawaii and Alaska are located on or near historic earthquake fault lines. As a result, this infrastructure is more likely to be damaged by earthquakes than infrastructure located elsewhere in the country, according to the Coast Guard. According to Coast Guard officials, it can take months and sometimes years to repair or replace Coast Guard facilities damaged by severe natural disasters. For example, as shown in Figure 1, Coast Guard facilities at Station Port Aransas in Texas suffered significant damage during Hurricane Harvey in 2017. As of April 2019, the Coast Guard was working to demolish these damaged facilities so they could be replaced by one facility that is resilient to hurricanes. <1.4. DHS Critical Infrastructure Risk Management Framework> DHS established its Critical Infrastructure Risk Management Framework to guide critical infrastructure owners and operators, from both the public and private sector, in investing limited resources to protect critical infrastructure. As shown in Figure 2, the framework consists of five steps that involve (1) setting goals and objectives, (2) identifying infrastructure, (3) assessing and analyzing risk, (4) implementing risk management activities, and (5) measuring the effectiveness of actions taken to address identified risks. According to DHS, agency decision makers can use this framework to prioritize investments, develop plans, and allocate resources for critical infrastructure in a risk-informed way. The framework is based on risk management activities, which call for cost-effective use of resources by taking protective actions that offer the greatest mitigation of risk for any given expenditure. According to the NIPP, a risk management approach that aligns with the five key steps can help guide organizational decision making and prioritize actions to more effectively achieve desired outcomes. <2.
Coast Guard Has Rebuilt Some Damaged Facilities and Is Conducting a Vulnerability Assessment of Selected Buildings> Since 2005, the Coast Guard has taken actions to improve the resilience of at least 15 storm-damaged shore facilities and has rebuilt them to new standards largely by using supplemental appropriations provided for this purpose. The Coast Guard has also developed new guidance to increase the likelihood that new or recapitalized buildings will withstand natural disasters and follows updated state and local building codes, which a senior Coast Guard official told us led to more resilient buildings, thus limiting risks to Coast Guard personnel and operations. In 2015, the Coast Guard s Civil Engineering program initiated a formal assessment of owned and occupied Coast Guard buildings to determine which were vulnerable to 10 natural disasters, which, according to agency officials, it aims to complete in 2025. <2.1. Coast Guard Has Received Supplemental Appropriations to Rebuild Some Damaged Facilities> Since 2005, the Coast Guard has taken actions to improve the resilience of its shore infrastructure, largely by using supplemental appropriations for rebuilding facilities damaged by major storms. Specifically, from December 2005 through June 2019, the Coast Guard was appropriated about $2 billion in supplemental funds to, among other things, rebuild or relocate 15 facilities damaged by hurricanes. During this time, the Coast Guard has relocated facilities further inland or to higher ground, upgraded facilities to be more resilient, and designed new facilities with features to protect them from natural disasters. The 2016 and 2017 hurricane seasons were particularly destructive, and the Coast Guard received $719 million in supplemental funding to restore facilities damaged by Hurricanes Matthew, Harvey, Irma, and Maria. Figure 3 below shows Coast Guard shore infrastructure, and associated replacement values, located along the East and Southeast coasts of the United States and the general paths of selected hurricanes in those regions since 2005. The Coast Guard has used supplemental funding to rebuild or relocate at least 15 damaged facilities to enhance their resilience. To improve the resilience of its facilities when rebuilding after hurricanes, Coast Guard officials reported that they generally either relocated the facility inland for better protection from extreme weather or modified the facility to be more resilient by elevating it to protect it from storm surge and flooding. For example: Station Houston, Texas. After this station was damaged by Hurricane Ike in 2008, the Coast Guard determined that this station s boathouse could not be built above the local floodplain and still meet mission requirements. As a result, the Coast Guard took steps to protect the boathouse from future water damage by using water resistant materials in its construction, elevating its electrical and telecommunications systems above the flood plain, and placing the heating, ventilation, and air conditioning systems on the roof of the building. Sector Houston-Galveston, Texas. After being damaged by Hurricane Ike in 2008, this regional command facility was relocated further inland to provide the new facility with greater protection from extreme weather. It was also designed to withstand wind speeds of up to 115 miles per hour. Station Sandy Hook, New Jersey. 
After this station was damaged by Hurricane Sandy in 2012, the old building was demolished and replaced on the same site with a facility that was designed to be more resilient. The station s first floor was constructed with openings to allow flood waters to pass beneath the station. Station Sabine Pass, Texas. Following damage by Hurricane Ike in 2008, the Coast Guard rebuilt this station in 2013 to better withstand floods and high winds (see fig. 4). The new station s first floor was elevated to a height that exceeds the projected depth of a 100-year flood to protect station equipment. The station was also designed to resist wind speeds up to 130 miles per hour sufficient to withstand a Category III hurricane. <2.2. The Coast Guard Has Updated Its Guidance to Reflect Higher Building Standards> The Coast Guard has also developed new guidance reflecting higher building standards, and follows updated state and local building codes which a senior Coast Guard official told us led to more resilient buildings. In February 2017, the Coast Guard s Civil Engineering program issued engineering planning guidance intended to increase the likelihood that new or recapitalized buildings would withstand natural disasters and that the design of these buildings would minimize risks to Coast Guard operations and personnel, among other things. This new guidance contains the following requirements: All new permanent, regularly occupied buildings will either be located at least 2 feet above the Federal Emergency Management Agency s (FEMA) 100-year base flood elevation or meet the level of the 500- year base flood elevation for the proposed site location. To account for storm surge, sea level rise, or periodic flooding, buildings may also be constructed above this elevation as necessary. To ensure operational continuity and safety after a flood event, critical building systems such as utility and communications systems are to be located at least 3 feet above the 100-year base flood elevation. Each site will be evaluated for vulnerability to natural hazards, such as earthquakes, tornadoes, and wildfires. This evaluation will identify risk to Coast Guard operations and personnel. A senior Coast Guard official testified to Congress in November 2017 that Coast Guard buildings rebuilt after being damaged by Hurricane Ike in 2008 suffered minimal damage from Hurricanes Harvey and Irma. The official also said that the resilience of these buildings resulted from the recapitalization efforts that made them more storm-resilient and allowed them to align the buildings with modern building codes and standards. Further, according to Coast Guard civil engineering officials, units impacted by Hurricanes Harvey, Irma and Maria which had been recapitalized to resilient standards returned to full mission capability within 2 to 3 days and, in some instances, avoided damage or a loss of mission capability as a result of more resilient construction. For example, operations at Sector Houston-Galveston, which supports a wide range of Coast Guard missions, were not interrupted during Hurricane Harvey, allowing it to serve as the primary federal response hub during this disaster. A senior official from the Coast Guard Facilities Design and Construction Center told us that state and local building codes, which have been updated as a result of lessons learned from natural disasters, have also led to more resilient Coast Guard buildings because the Coast Guard is required to align its facilities standards with these codes. 
For example, according to this official, Florida updated its building codes after Hurricane Andrew in 1992, which resulted in more resilient buildings in this state. In December 2018, the Coast Guard Civil Engineering program issued updated planning guidance for reconstructing facilities damaged by Hurricanes Matthew, Harvey, Irma, and Maria in 2016 and 2017. According to this guidance, new and renovated facilities are to incorporate resilient construction techniques including, but not limited to, hurricane resistant construction and design, and infrastructure resiliency. These facilities are to have the ability to return to full operations after an event, minimizing any major reconstruction and long-term mission impact. In addition, when the Coast Guard builds a new facility or renovates an existing one that directly supports Coast Guard natural disaster response efforts, that facility is to be built to a higher resiliency level to increase the likelihood that it will remain operational during a natural disaster. <2.3. Coast Guard Began Assessing Certain Buildings for Vulnerabilities to Natural Disasters in 2015 and Aims to Complete the Assessment in 2025> In 2015, the Coast Guard s Civil Engineering program initiated a formal vulnerability assessment of owned and occupied Coast Guard buildings, and according to Coast Guard officials they aim to complete this assessment in 2025. The Coast Guard calls this assessment the Shore Infrastructure Vulnerability Assessment. According to Coast Guard documentation, its focus was to determine the vulnerability of these buildings and Coast Guard personnel to natural disasters. Further, the assessment results are intended to assist with contingency planning by identifying which Coast Guard facilities are likely to remain operational after a natural disaster. According to its documentation, this vulnerability assessment is to be completed in two phases. During Phase I, completed in 2018, the Coast Guard analyzed 3,214 buildings, or approximately 16 percent of its infrastructure, for vulnerabilities to disasters such as floods, earthquakes, and hurricanes. To conduct its analysis, Coast Guard officials analyzed the vulnerability of these buildings to 10 natural disasters using information from other government agencies and professional organizations. For example, the Coast Guard assessed its vulnerability to flooding using FEMA, National Weather Service information, state sources and websites. This analysis identified Coast Guard-wide infrastructure vulnerabilities to coastal risks such as shoreline loss, coastal erosion and earthquakes, as well as tsunami risks on the West Coast of the United States, Alaska, Guam, and Hawaii, and immediate and serious flood risks in Puerto Rico and the Gulf and East Coasts. The Phase I report recommended that Coast Guard units and contingency planners consider these vulnerabilities when preparing contingency plans or making capital investments in Coast Guard facilities. Although the Shore Infrastructure Vulnerability Assessment Phase I report identified multiple vulnerabilities to sixty-eight percent of the assessed infrastructure, Coast Guard Civil Engineering program officials told us they were unable to conclusively determine whether approximately 1,500 assessed buildings were vulnerable to hurricane winds, earthquakes, or tornadoes leading officials to conclude that they needed to conduct further structural analysis. 
Accordingly, Coast Guard Civil Engineering program officials initiated plans for Phase II of the assessment, which involves more detailed structural analyses of 1,500 buildings to determine whether they can withstand either earthquakes or tornado and hurricane winds, depending on the building. Since earthquakes strike with essentially no warning, unlike hurricanes and tornadoes, Coast Guard Civil Engineering program officials told us that the Coast Guard considered them to be a greater threat to its personnel and infrastructure. Accordingly, the Coast Guard decided that Phase II of the assessment would prioritize structural analyses for buildings it believes to be more susceptible to damage from earthquakes. Further, it would prioritize the order in which it assesses these buildings based on how critical the building is to Coast Guard operations, building occupant density, and the overall age and condition of the building. The Shore Infrastructure Vulnerability Assessment Phase II analysis began in September 2018 with a contract for about $700,000 to determine if 15 buildings at multiple Coast Guard sites are vulnerable to earthquakes. According to the contract, these assessments are to be completed in October 2021. <3. Coast Guard Processes to Improve Shore Infrastructure Resilience Do Not Fully Align with Key Steps of DHS s Critical Infrastructure Risk Management Framework> While the Coast Guard has taken steps to improve the resilience of its shore infrastructure by rebuilding storm damaged facilities and initiating a vulnerability assessment, its overarching processes to improve shore infrastructure resilience are not fully aligned with the five steps of the DHS Critical Infrastructure Risk Management Framework. As previously mentioned, DHS established this framework to guide both public and private resource investment decisions for protecting critical infrastructure. Its five steps include (1) setting goals and objectives, (2) identifying infrastructure, (3) assessing and analyzing risk, (4) implementing risk management activities, and (5) measuring the effectiveness of actions taken to address identified risks. <3.1. Set Goals and Objectives> According to the first step of the DHS Critical Infrastructure Risk Management Framework, organizations should define specific goals for what they intend to accomplish and establish objectives to help them achieve the goals (see text box). Organizations that establish broad strategic goals for risk management can also benefit from translating these goals into specific, measurable objectives to assess the extent to which its actions actually reduce risk (see text box). DHS Critical Infrastructure Risk Management Framework Step 1 Organizations should define specific outcomes, conditions, end points, or performance targets that collectively describe an effective and desired risk management posture. By defining risk management goals and expressing them in terms of the objectives and outcomes the organization intends to accomplish, stakeholders, including those at all levels of government and the private sector, would be better able to tailor their risk management programs and activities to address infrastructure resilience needs. Our review of four key Coast Guard documents related to managing its shore infrastructure showed that some of these documents refer to resilience and identify it as an important factor to its operational success. 
However, none of the documents we reviewed identified a measurable goal or objective for improving shore infrastructure resilience. Instead, the documents either include goals related to management of the shore infrastructure program, or include no goals at all. Specifically: The Coast Guard Shore Infrastructure Strategic Plan for 2017-2021 includes what it describes as performance and foundational goals, including a foundational goal for improving resilience, contingency preparedness, and response to natural hazards. However, the plan does not link this foundational goal to a specific objective and performance target that could guide Coast Guard actions to improve shore infrastructure resilience. For example, an objective could be to increase the percentage of mission critical buildings that are within or above base flood elevations by a certain date, and annual targets could be established to assess progress toward this goal. The Coast Guard issued its agency-wide strategic plan in November 2018 which states that resilient shore infrastructure is directly connected to Coast Guard operational readiness and successful mission execution. The plan further stated that to meet its operational needs, the Coast Guard will prioritize the repair or replacement of degraded shore infrastructure that negatively affects operations or hinders workforce readiness. However, this plan does not identify the shore infrastructure resilience goals the Coast Guard hopes to achieve or any objectives to measure progress toward these goals. Moreover, this plan does not include goals or measures to guide such prioritization. In February 2019, we reported that Coast Guard Engineering program officials were not able to provide documents showing how they had directed field units to prioritize the repair or replacement of degraded shore infrastructure. In July 2019, the Coast Guard was able to provide one planning document that was specifically created to help manage its response to Hurricanes Harvey, Irma, Maria, and Matthew that included guidance on improving infrastructure resilience. Based on our interviews with Coast Guard engineering program and Shore Infrastructure Logistics Center officials, the Coast Guard is still in the initial stages of incorporating resilience plans and objectives into the shore infrastructure program. In July 2019, Civil Engineering program officials told us that the Coast Guard had updated its Civil Engineering Strategic Plan to direct its personnel to develop a communication plan and resource strategy for infrastructure resiliency projects based on the Shore Infrastructure Vulnerability Assessment s Phase II results. The Coast Guard provided us with a copy of this plan in August 2019, and while this document includes two measures that can be useful to account for actions taken, it did not include goals or performance targets to guide the prioritization of resiliency projects, and Civil Engineering program officials were not able to provide documents showing how they had made decisions to incorporate resilience into the repair and replacement of degraded shore infrastructure. Coast Guard officials also reported that they had initiated a separate resilience effort in 2018 at the direction of DHS, which required all operational components to participate in the development of the 2018 DHS Resilience Framework, and to develop individual component resilience plans to guide its approach to resilience planning. According to the Coast Guard, their plan was submitted to DHS in August 2019. 
When we discussed this effort with Coast Guard officials, they were able to provide few details about their efforts and no documentation about their progress to date. We also discussed this effort with DHS officials managing the process, but they were not able to tell us whether this new endeavor will align with or compete for resources with ongoing Coast Guard assessment processes. <3.2. Identify Infrastructure> According to the second step of the DHS Critical Infrastructure Risk Management Framework, organizations should identify infrastructure assets that are critical for security and national preparedness (see text box). DHS Critical Infrastructure Risk Management Framework Step 2 Organizations should identify assets, systems, and networks that contribute to critical functionality, and collect information pertinent to risk management, including analysis of dependencies and interdependencies. Through this step, it is important to identify assets that are both nationally significant and those that may not be significant on a national level but are, nonetheless, important to state, local, or regional critical infrastructure security and resilience and national preparedness efforts. We found that the Coast Guard identified many occupied buildings that may be important to operations and assessed their vulnerability through its Shore Infrastructure Vulnerability Assessment process, but this process did not identify all shore infrastructure assets that are critical to its missions or screen them for all vulnerabilities. Specifically, through the Shore Infrastructure Vulnerability Assessment Phase I, the Coast Guard identified and screened all occupied Coast Guard buildings over 1,000 gross square feet about 16 percent of all Coast Guard infrastructure for vulnerabilities to 10 natural disasters. The analysis found that approximately 68 percent (2,200) of the 3,214 buildings it assessed are vulnerable to certain natural disasters. However, the initial screening did not include other mission critical infrastructure, as the framework recommends, even though the loss of such structures could also impact its ability to carry out its missions. For example, the Coast Guard did not include structures in Phase I of the Shore Infrastructure Vulnerability Assessment, such as aircraft runways, and therefore has not determined whether such structures are vulnerable to flooding following a severe storm, or which ones are at greatest risk for such flooding. Phase II is also not expected to include these assets, as Civil Engineering program officials stated it is not intended to identify any additional infrastructure. Rather, in Phase II for example, Civil Engineering program officials will determine whether roughly 45 percent of the buildings on the West Coast that were screened in Phase I, are vulnerable to earthquakes, as the results of Phase I were inconclusive for these buildings. This DHS framework step recommends that stakeholders identify assets and networks that contribute to critical functionality and analyze their dependencies and interdependencies. The Coast Guard has two such measures to help identify the criticality of its shore infrastructure for conducting its missions. The Mission Essentiality Index measure classifies shore infrastructure assets into one of four tiers based on the degree to which they are mission critical. 
Similarly, the Mission Dependency Index scores building criticality based on how quickly the loss of utilities would impact operations, and how difficult it would be to relocate operations in advance of a natural disaster. Coast Guard officials told us they used Mission Dependency Index scores to help identify which buildings to include first during Phase II of the Shore Infrastructure Vulnerability Assessment. However, they did not consider either of these measures when they conducted the initial screening for Phase I, which prevented operational risks from being fully considered. Using this information at the beginning of its Shore Infrastructure Vulnerability Assessment process could have provided the Coast Guard with useful information to help it assess its critical infrastructure, as the DHS framework recommends. Coast Guard officials stated in July 2019 that they believe that the mission critical assets collocated with the assessed buildings would have the same vulnerabilities given their geographic proximity. While this may be the case for structures that are collocated with assessed buildings, unoccupied structures (such as piers and runways) may be built with different requirements and building codes; consequently, they may differ in the extent of their vulnerabilities to the same natural hazard threats. Furthermore, the Shore Infrastructure Vulnerability Assessment Phase I report did not demonstrate the extent to which Coast Guard structure are collocated with the occupied buildings the Coast Guard analyzed. They also told us that the Coast Guard has not tracked the performance of its infrastructure, particularly piers and runways, because it has always been able to find alternative means to continue operations. However, by identifying all of its mission critical infrastructure that may be vulnerable to natural disasters, the Coast Guard would be more fully informed of the possible scenarios that could affect their capabilities in the event of a natural disaster, and which infrastructure facilities are most likely to be affected. Such information could also better position the Coast Guard to plan for and execute mission operations from alternative locations if needed. <3.3. Assess and Analyze Risks> According to the third step of the DHS Critical Infrastructure Risk Management Framework, organizations should assess and analyze risks to understand infrastructure vulnerabilities and threats, as well as the potential consequences of an incident or known vulnerabilities (see text box). DHS Critical Infrastructure Risk Management Framework Step 3 Organizations should assess and analyze risks, taking into consideration the potential direct and indirect consequences of an incident, known vulnerabilities to various potential threats or hazards, and general or specific threat information. Risks can be assessed in terms of their likelihood and potential consequences. This step supports an assessment strategy that results in sound, scenario-based consequence and vulnerability estimates, as well as an assessment of the likelihood that the given threat or hazard will occur. Organizations should consider potential harm to operations and impacts on mission in executing a critical infrastructure risk management approach. The Shore Infrastructure Vulnerability Assessment process is the Coast Guard s main action to formally assess and analyze its shore infrastructure, according to Civil Engineering program officials. 
This process was intended to help contingency planners anticipate which infrastructure is likely to remain operational following a natural disaster, and assist with operational and future capital investment decisions, according to a senior Coast Guard official. We found that through this process, the Coast Guard assessed and analyzed certain elements of risk for its shore infrastructure, such as potential vulnerabilities of certain infrastructure to multiple natural disasters, information which could help inform its processes to improve resilience. However, the Coast Guard has not identified the potential direct and indirect consequences posed by natural disasters on its infrastructure, or the consequences associated with its operational risks, that is, risks affecting its ability to carry out its missions if shore infrastructure is damaged. Specifically: Through Phase I of the Shore Infrastructure Vulnerability Assessment process, the Coast Guard determined that its personnel and operations are generally more vulnerable to certain threats. For example, Phase I determined that about 880 assessed buildings may be vulnerable to earthquakes, which, according to the Coast Guard, represent approximately 45 percent of its assessed buildings on the West Coast. Similarly, it also identified about 800 buildings that may be vulnerable to tornadoes and approximately 1,000 buildings vulnerable to hurricanes. However, the Coast Guard has not analyzed the potential consequences of damage to the infrastructure that it identified as vulnerable. For example, it has not assessed the economic losses associated with potential catastrophic disasters, such as costs for rebuilding assets or taking other actions to respond to and recover from natural disasters. Additionally, the Coast Guard has not assessed long-term costs that could result from environmental damage to its property caused by these events. Without also determining consequence information, the Coast Guard is not positioned to provide decision makers with the type of information the DHS Critical Infrastructure Risk Management Framework recommends for making cost-effective risk management decisions. As the Coast Guard begins to conduct Phase II, it is unclear whether it will include information on potential consequences in its assessment. The Coast Guard initiated Phase II in September 2018 and intends to assess about 1,500 buildings for vulnerabilities to natural disasters by 2025. Coast Guard officials stated that Phase II would entail following civil engineering standards for conducting the assessments. These assessments are expected to entail on-site contractor assessments of about 1,500 buildings. In 2018, the first year of Phase II, the Coast Guard contracted for an assessment of 15 buildings, and Shore Infrastructure Logistics Center officials said they expect this assessment to be completed in 2021. According to Civil Engineering program officials, the purpose of Phase II is to understand whether 1,500 buildings identified in Phase I as inconclusive are indeed vulnerable to certain natural hazards. This information can help Coast Guard officials better understand the likelihood that vulnerabilities exist, but the plan for Phase II does not support an assessment strategy that results in sound, scenario-based consequence and vulnerability estimates, as well as an assessment of the likelihood that the given threat or hazard will occur or the operational risks that may be affected, as this step recommends. <3.4.
Implement Risk Management Activities> According to the fourth step of the DHS Critical Infrastructure Risk Management Framework, organizations should implement risk management activities by evaluating risk reduction methods that consider countermeasures that result in controlling, accepting, transferring, or avoiding risks (see text box). DHS Critical Infrastructure Risk Management Framework Step 4 Organizations should evaluate risk reduction methods by considering countermeasures that result in controlling, accepting, transferring, or avoiding risks. Approaches can include prevention, protection, mitigation, response, and recovery activities. Ideally, the selection and implementation of appropriate risk management activities helps to focus planning, increase coordination, and support effective resource allocation and incident management decisions. We found that the Coast Guard identified thousands of infrastructure vulnerabilities to natural disasters through its Shore Infrastructure Vulnerability Assessment process, and has contracted for more detailed structural analyses of the buildings with vulnerabilities that were deemed inconclusive with respect to seismic and windstorm threats. However, the Coast Guard has not taken action to mitigate risks for those buildings with confirmed vulnerabilities. Our analysis of Phase I results showed that of the 3,214 buildings the Coast Guard analyzed, 32 percent had two or more identified vulnerabilities and an average Mission Dependency Index of 34, and 10 percent had three or more identified vulnerabilities with an average Mission Dependency Index of 38. The average Mission Dependency Index score for all 3,214 buildings was 30. These results indicate that the Coast Guard has data on buildings that may be more vulnerable than others and have relatively greater mission value. Despite the availability of this information, the Coast Guard has not taken steps to develop a mitigation strategy for these buildings, as the DHS Critical Infrastructure Risk Management Framework recommends. Coast Guard officials stated that they had sufficient information from Phase I about how their facilities would perform against eight of the ten disasters, so they elected to further study those buildings with inconclusive results on earthquakes and wind. According to the DHS Critical Infrastructure Risk Management Framework, risk assessments are to inform the selection and implementation of mitigation activities and the establishment of risk management priorities for organizations. Effective risk management activities are comprehensive, coordinated, and cost-effective. The framework further states that risk management decisions should be made based on an analysis of the costs and other impacts, as well as the projected benefits of identified courses of action including the no-action alternative if a risk is considered to be effectively managed already. However, it is unclear whether and to what extent the civil engineering staff and other decision makers consider the Shore Infrastructure Vulnerability Assessment results as part of the planning board processes where decisions are made about which infrastructure projects will be prioritized for funding. Civil Engineering program officials told us that hazard mitigation strategies will be employed for buildings determined to be vulnerable, as the Coast Guard plans and executes major construction and recapitalization projects through its existing planning board processes. 
They also provided us with updated planning board guidance, issued in March 2019, which directs Coast Guard officials to consider improving shore infrastructure resilience as a significant factor in the decision-making process. They also noted that the Coast Guard s updated policy described earlier requires compliance with higher building standards, which helps ensure that newly constructed facilities will be more resilient than the ones they replace. Shore Infrastructure Logistics Center officials, however, were unable to provide us with documentation showing whether and to what extent risk reduction methods were considered during past planning board processes. Furthermore, since they are not required to incorporate Shore Infrastructure Vulnerability Assessment results into future planning board decisions, it is unclear whether future Coast Guard planning boards will be focused on addressing the most critical risks, or will consider resilience as a factor when choosing projects to fund. This is of particular concern since in at least 5 cases, the Coast Guard s backlog list for Procurement, Construction and Improvement projects includes boat stations that the Coast Guard had previously identified as suitable for closure. <3.5. Measure Effectiveness> According to step five of the DHS Critical Infrastructure Risk Management Framework, organizations should use metrics and other evaluation procedures to measure progress and assess the effectiveness of efforts to secure and strengthen the resilience of critical infrastructure (see text box). DHS Critical Infrastructure Risk Management Framework Step 5 Organizations should use metrics and other evaluation procedures to measure progress and assess the effectiveness of efforts to secure and strengthen the resilience of critical infrastructure. They are an important step in the critical infrastructure risk management process to enable assessment of improvements in critical infrastructure security and resilience. They provide a basis for accountability, document actual performance, promote effective management and provide a feedback mechanism for informed decision making. We found that the Coast Guard has identified some specific measures, but they are too narrow to measure the agency s progress or assess the effectiveness of its efforts to improve its shore infrastructure resilience. For example, the Coast Guard established metrics to count the number and dollar value of certain projects to improve resilience, such as seismic improvement or floodplain adaptation projects, that the Civil Engineering program plans and accomplishes each year. While these measures can be useful to account for actions taken and funds invested in these particular actions, they do not indicate whether the resilience of Coast Guard shore infrastructure has improved or is improving as a result of the actions being measured. Coast Guard officials told us that they have not used the DHS Critical Infrastructure Risk Management Framework to guide actions to improve the resilience of its critical infrastructure because they have instead focused on implementing the Shore Infrastructure Vulnerability Assessment to provide them information they intend to use to influence resource investment decisions in the future. 
However, without a complete understanding of the vulnerabilities of its infrastructure and the consequences to Coast Guard operations if it is damaged, the Coast Guard risks questionable recapitalization investments for its resilience when selecting projects to fund from its $2.6 billion maintenance backlogs. Given that the five steps of the DHS Critical Infrastructure Risk Management Framework are intended to guide decision making and prioritize actions to more effectively achieve desired outcomes, having processes that fully align with the five key steps of the framework would provide greater assurance that the Coast Guard is investing its shore infrastructure resources to manage potential damage and expenses from extreme weather events in the future. <4. Conclusions> The Coast Guard's shore infrastructure program includes a range of facilities and structures that are vital to the agency's ability to fulfill its missions, and it constitutes a significant fiscal commitment that requires ongoing investment to maintain. By nature of their mission and location, many facilities and structures are vulnerable to potentially catastrophic natural disasters that are projected to occur more frequently and have required over $2 billion in supplemental funding over recent years to replace or repair. The Coast Guard faces the difficult decision of determining how best to invest its limited resources in improving the resilience of its shore infrastructure to better manage the costs of repairing or replacing such infrastructure after natural hazards occur. DHS's Critical Infrastructure Risk Management Framework provides a decision-making approach that can help ensure risk-informed resource investments, but the Coast Guard has not fully aligned its processes for improving shore infrastructure resilience with any of the five steps outlined in this framework. The Coast Guard's Shore Infrastructure Vulnerability Assessment process is the agency's main approach to understanding shore infrastructure vulnerabilities, but this process is limited in scope and not expected to be completed until at least 2025. For the Coast Guard's planning board processes, officials were unable to verify that they have consistently considered resilience as a significant factor when selecting projects or that they plan to do so in the future. This is of particular concern given the current condition of Coast Guard shore infrastructure and the existing $2.6 billion backlogs of infrastructure maintenance and recapitalization projects that compete for finite funding. By fully aligning its processes with DHS's recommended risk management framework for critical infrastructure, the Coast Guard would be better positioned to reduce its future fiscal exposure to the effects of catastrophic natural disasters. <5. Recommendation for Executive Action> The Commandant of the Coast Guard should ensure that the Deputy Commandant for Mission Support implements risk management processes that more fully align with the five key steps outlined in DHS's Critical Infrastructure Risk Management Framework to better guide agency shore infrastructure investment decisions. This should include (1) setting goals and objectives, (2) identifying critical infrastructure, (3) assessing and analyzing risks and costs, (4) implementing risk management activities, and (5) measuring the effectiveness of actions taken. (Recommendation 1) <6. Agency Comments> We provided a draft of this report to DHS for review and comment.
In its comments, reproduced in appendix I, DHS concurred with our recommendation. DHS, through the Coast Guard, also provided technical comments, which we incorporated as appropriate. DHS concurred with the intent of our recommendation to formalize its shore infrastructure risk management processes, and stated that it plans to make progress towards implementing GAO's recommendation concurrently with the development and implementation of its Component Resilience Plan, in accordance with the recently mandated DHS Resilience Framework. It intends to complete these efforts by the end of 2021. The Coast Guard also intends to develop, by July 2020, goals and objectives for measuring the effectiveness of actions taken to identify resilience readiness gaps and resource needs. We will continue to monitor these efforts. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or AndersonN@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Comments from the Department of Homeland Security Appendix II: GAO Contact and Staff Acknowledgements <7. GAO Contact> Nathan J. Anderson, (202) 512-3841 or andersonn@gao.gov. <8. Staff Acknowledgements> In addition to the contact above, Dawn Hoff (Assistant Director), Landis Lindsey (Analyst-in-Charge), Michael Armes, John Bauckman, Jason Berman, Chuck Bausell, Rick Cederholm, Kendall Childers, John Crawford, Billy Commons, Andrew Curry, Dominick Dale, Elizabeth Dretsch, Shannon Finnegan, Michele Fejfar, Peter Haderlein, Eric Hauswirth, Susan Hsu, Michael Pinkham, John Mingus, and Jan Montgomery, made key contributions to this report. Why GAO Did This Study
The Coast Guard, within DHS, owns or leases more than 20,000 shore facilities such as piers, boat and air stations, and housing units at over 2,700 locations. This infrastructure is often positioned on coastlines where it is vulnerable to damage from extreme weather. Noting the importance of protecting critical infrastructure from such risks, in 2013 DHS updated its risk management guidance for enhancing infrastructure resilience—which is the ability to prepare and plan for, absorb and recover from, or successfully adapt to adverse events.
GAO was asked to review Coast Guard efforts to improve the resilience of its shore infrastructure. This report (1) describes Coast Guard actions to improve shore infrastructure resilience since 2005, and (2) examines the extent to which its processes to improve shore infrastructure resilience follow DHS's key steps for critical infrastructure risk management. GAO reviewed and analyzed Coast Guard guidance and data on assessed infrastructure and interviewed Coast Guard officials. GAO also compared Coast Guard policies, procedures, and actions to manage shore infrastructure against DHS's framework for managing risks to critical infrastructure.
What GAO Found
Since 2005, the U.S. Coast Guard's main actions to improve resilience have been to repair or rebuild shore infrastructure to higher building standards after it has been damaged by extreme weather events. The Coast Guard has received more than $2 billion in supplemental appropriations since 2005 to improve resilience after severe storms (see figure). The Coast Guard has also developed new guidance requiring that repairs and new construction meet higher building standards to make it more resilient. Further, in 2015, the Coast Guard began an assessment of certain occupied buildings to identify their vulnerabilities to ten natural hazards, such as hurricanes and earthquakes. As of 2018, this assessment covered approximately 16 percent of the Coast Guard's shore infrastructure. The Coast Guard aims to complete the assessment in 2025.
Coast Guard processes to improve shore infrastructure resilience do not fully align with the Department of Homeland Security's (DHS) key steps for critical infrastructure risk management. These steps are described in DHS's Critical Infrastructure Risk Management Framework, which recommends that DHS components, among other things, identify critical infrastructure, assess risks, and implement risk management activities. While the Coast Guard has identified some vulnerable shore infrastructure through its ongoing assessment, it has not identified all shore assets that may be vulnerable, such as piers and runways; or assessed operational risks affecting its ability to complete missions with these assets. In addition, the Coast Guard has not taken steps to develop mitigation strategies for buildings already identified as vulnerable. Moreover, Coast Guard data show a growing backlog of at least $2.6 billion in recapitalization, new construction, and deferred maintenance projects that compete for finite funding. However, Coast Guard officials were unable to verify that they have consistently selected projects to also enhance resilience. Coast Guard officials stated that they have not used the DHS framework and have instead focused on implementing their ongoing vulnerability assessment. Fully aligning its processes with the DHS framework would better position the Coast Guard to reduce its future fiscal exposure to the effects of extreme weather events.
What GAO Recommends
GAO recommends that the Coast Guard revise its processes for improving shore infrastructure resilience to more fully align with key steps of the DHS critical infrastructure risk management framework. This should include, for example, identifying critical infrastructure, assessing risks, and implementing risk management activities. DHS concurred with our recommendation.
<1. Background> <1.1. FEMA's Role in Providing Assistance During and After Wildfires> <1.1.1. Fire Management Assistance Grants> The activities and resources required to suppress wildfires generally belong to the states and federal agencies with land management missions, such as the U.S. Forest Service and four bureaus (Bureau of Land Management, Bureau of Indian Affairs, National Park Service, and U.S. Fish and Wildlife Service) within the U.S. Department of the Interior. FEMA can provide reimbursement to help support wildfire suppression (e.g., labor costs for overtime or seasonal personnel involved in fire suppression activities). When a wildfire burns on nonfederal lands and threatens to become a major disaster, a state governor or governor's representative may request federal assistance via a Fire Management Assistance Grant (FMAG) administered by FEMA. While the fire is burning, a governor's office can submit a verbal request for an FMAG to the designated FEMA regional office, followed within 14 days by a formal written request. The regional administrator then either approves or denies the request after consulting with relevant officials from the U.S. Forest Service or bureaus within the U.S. Department of the Interior about technical aspects of the fire. Eligible FMAG costs include, among other things, equipment and supplies, labor costs, travel and per diem, temporary repairs of damage caused by firefighting activities, mobilization and demobilization of resources, and limited costs of pre-positioning fire prevention or suppression resources. From fiscal years 2009 through 2018, FEMA awarded 374 FMAGs totaling $952,318,049. The average FMAG during this timeframe was about $2.5 million. The state of California received the majority of those grant funds, over $543 million. Figure 1 below illustrates the states that received FMAGs during this 10-year period, figure 2 provides annual FMAG totals, and figure 3 provides a breakout of the dollars distributed by state for this same 10-year period. If a wildfire increases in size and intensity in a manner that overwhelms the ability of state, tribal, territorial or local governments to respond and recover effectively, a state or tribal government can request and the President can approve a major disaster declaration, as with other types of disasters (e.g., a hurricane or flood). A disaster declaration is the primary mechanism by which the federal government gets involved in funding and coordinating response and recovery activities. Under the National Response Framework, the Department of Homeland Security (DHS) is the federal department with primary responsibility for coordinating disaster response, and within DHS, FEMA has lead responsibility. From fiscal years 2009 through 2018, a total of 19 major disasters were declared as a result of wildfires. Figure 4 shows the number and locations of these major disaster declarations. Once a major disaster is declared, FEMA can provide funds for response and recovery efforts through the Disaster Relief Fund and coordinate other federal support through the National Response Framework's 14 Emergency Support Functions. Federal assistance following a major disaster declaration includes the following: Individual Assistance: FEMA's Individual Assistance programs provide assistance directly to individuals and households, as well as state, local, tribal, and territorial governments to support individual survivors.
This assistance covers necessary expenditures and serious needs that cannot be met through insurance or low-interest loans, such as temporary housing assistance, counseling, unemployment compensation, or medical expenses. See appendix I for a further description of FEMA s Individual Assistance program. Public Assistance: FEMA s Public Assistance program provides supplemental federal disaster grant assistance to state, local, tribal, and territorial governments and certain types of private nonprofit organizations for debris removal, emergency protective measures, and the restoration of disaster-damaged, publicly-owned facilities and the facilities of certain private nonprofit organizations. The eligibility rules outline the types of damage that can be reimbursed by the federal government and steps that federal, state, and local governments must take in order to document eligibility. If the debris on private property is determined to be so widespread that it threatens the health, safety, or economic recovery of the community, FEMA may determine that debris removal from private property, including contaminated soil, is eligible for reimbursement under the Public Assistance program. An applicant (a state, territorial, or tribal government) may contract for debris removal. Alternatively, if an applicant lacks the capability to perform or contract for debris removal, the applicant may request that the federal government perform the work. In such cases, FEMA may task another federal agency, typically the U.S. Army Corps of Engineers (USACE), to perform or contract the work by issuing a mission assignment (see description below). See appendix I for a further description of FEMA s Public Assistance program. Mission Assignment to Other Agencies: FEMA can fulfill disaster response needs through mission assignments work orders it issues to another federal agency to provide a service or other response need. For example, FEMA may request medical teams from the Department of Health and Human Services and logistical support from the Department of Defense. Hazard Mitigation Grant Program: This program is designed to improve disaster resilience the ability to prepare and plan for, absorb, recover from, and more successfully adapt to disasters during recovery. The program funds a wide range of projects, such as use of non-combustible materials on new and existing homes to mitigate risk from future wildfires, adding shutters to windows to prevent future damage from hurricane winds and rains, and rebuilding culverts in drainage ditches to prevent future flooding damage. Table 2 below shows money obligated for Individual Assistance, Public Assistance, mitigation efforts, operations (including mission assignments), and administrative costs for the 19 major disaster declarations resulting from wildfires from fiscal years 2009 through 2018. <1.2. Other Federal Roles and Responsibilities for Wildfires> The U.S. Forest Service within the Department of Agriculture and the Bureau of Indian Affairs, Bureau of Land Management, U.S. Fish and Wildlife Service, and National Park Service within the Department of the Interior, are responsible for managing wildfires on federal lands. Wildfire management consists of three primary components: 1. Preparedness involves acquiring and positioning firefighting assets. 2. Suppression involves selecting among strategies to extinguish or contain a fire, with the aim of protecting firefighters and public safety and using the minimum resources necessary. 3. 
Fuels Reduction involves acting in advance of wildfires to manage vegetation with the aim of reducing the intensity, severity, or negative effects of a wildfire. We are currently reviewing federal fuel reduction efforts, and how those efforts consider community protection, and plan to issue a report on the subject later this year. <1.3. State Efforts and Assistance Available for Fighting Wildfires> State forestry agencies and other nonfederal entities including tribal, county, city, and rural fire departments have primary responsibility for managing wildfires on nonfederal lands, and share responsibility for protecting homes and other private structures. When a wildfire occurs on nonfederal lands and begins to exceed the state or local entity s ability to effectively respond to the wildfire, the state or local entity may seek assistance from neighboring jurisdictions, typically through prescribed mutual aid agreements. For example, during wildfires in California in October and December of 2017, the California Governor s Office of Emergency Services used the California fire and rescue and law enforcement mutual aid systems, along with the national Emergency Management Assistance Compact to mobilize and organize a large number of emergency services. In total, according to California Governor s Office of Emergency Services, over 400 state and local government and 200 out-of-state fire departments sent engines, crews, and other assets to assist the local firefighting efforts. When a state or local jurisdiction needs further firefighting assistance, it may request additional support through Geographic Area Coordination Centers overseen by the National Interagency Fire Center. Once a Geographic Area Coordination Center has exhausted the resources it can provide, it can turn to the National Interagency Coordination Center within the National Interagency Fire Center for further assistance. <2. FEMA Provided Assistance to Help Wildfire-Affected State and Local Jurisdictions Consistent with Its Role in the National Response and Recovery Frameworks> For wildfire disaster declarations from 2015 to 2018, FEMA provided a variety of assistance to state and local emergency management officials consistent with roles and responsibilities in the National Response Framework and National Disaster Recovery Framework. Specifically, FEMA helped these jurisdictions by reimbursing some fire suppression costs, supporting state-led efforts to coordinate the response and provide for the immediate needs of displaced survivors, and helping localities plan and execute recovery. FEMA has obligated over $2.4 billion to assist in response to and recovery from these disasters to date. As previously discussed, although states and other federal agencies have primary responsibility for fire suppression, some state and local fire suppression costs are eligible for reimbursement through FMAGs. Most wildfire-affected states and localities in our scope received this kind of fire suppression support from FEMA initially in the form of the FMAGs. As the fires ultimately led to major disaster declarations, any funding that FEMA would have provided through the FMAGs were ultimately provided under Public Assistance as part of the declaration. To support state-led response and provide for the immediate needs of displaced survivors, FEMA deployed staff to assist in state Emergency Operations Centers and secured needed resources for mass care such as cots to help with temporary sheltering, according to state officials. 
In addition, FEMA assigned federal agencies to perform various missions to help with disaster response. For example, the Environmental Protection Agency provided hazardous material cleanup of damaged properties, and USACE provided public works services, such as contracting for debris removal. As response activities continued and recovery began, FEMA and the state emergency management agencies established Joint Field Offices, which are temporary field offices established to coordinate federal and state efforts in disaster response and recovery, and provided resources to help individual disaster survivors with community services and housing needs. For example, following wildfires in November 2018 including the Camp Fire in Butte County FEMA provided over $55 million to survivors to reimburse them for the cost of temporary lodging and rentals after their homes were destroyed. In addition, FEMA provided funding and support to local jurisdictions to help address community infrastructure needs. For example, FEMA obligated money to pay for wildfire debris removal from public property as well as from private property, given the widespread effect on the community of toxic fire debris. Also to support recovery, in coordination with state and local entities, FEMA established and staffed Disaster Recovery Centers, which are facilities or mobile offices where survivors can go for information about FEMA programs or other disaster assistance programs. Representatives from the relevant state agencies, FEMA, U.S. Small Business Administration, volunteer agencies, and other agencies were at the centers to answer questions about and help survivors apply for disaster assistance and low-interest disaster loans for homeowners, renters, and businesses. Finally, to assist local jurisdictions with longer-term recovery, FEMA provided assistance to some locally-led long-term recovery activities designed to address housing and other survivor needs in the community. Table 3 shows the amount of assistance FEMA provided for each of the six major disasters that we reviewed, and Appendix II provides a more detailed breakdown of each major disaster, including a map of each disaster, the number of structures that were destroyed, and mission assignment data. <3. Multiple Jurisdictions Reported FEMA Practices that Aided in Wildfire Response and Recovery, But also Experienced Challenges> State and local officials we spoke with reported practices that aided in wildfire response and recovery and also experienced challenges that arose in multiple jurisdictions across different disasters. <3.1. Jurisdictions Noted Specific Actions that Aided Response and Recovery Efforts> <3.1.1. FEMA and State Collaboration> When asked what worked well, officials from three out of the six California counties told us that FEMA and the California Governor s Office of Emergency Services collaborated effectively during response and recovery efforts. For example, one of the three counties reported that when posing questions or concerns to the California Governor s Office of Emergency Services, they were able to quickly obtain answers or further information and get help navigating complex issues. As we reported in 2018, according to officials in the California Governor s Office of Emergency Services and FEMA, they have developed a strong relationship with each other over time, which helps both agencies deliver consistent, unified information to stakeholders and disaster survivors. <3.1.2. 
Services Provided to Disaster Survivors> Local officials also praised FEMA s role in helping to set up and operate Disaster Recovery Centers. Officials in four of the six California counties that we interviewed noted that FEMA was quick to send staff to assist local jurisdiction staff and disaster survivors at the facilities established to provide assistance, such as Local Assistance Centers (generally activated by the county in the immediate wake of a disaster to provide government services to survivors) and Disaster Recovery Centers established by FEMA. For example, one of these counties noted that FEMA had staff available at their Local Assistance Center to support requests for Individual Assistance and other items shortly after the disaster was declared, and the county received positive feedback from the public about the varied types of support provided by experienced staff at their Local Assistance Center. Officials in one of the counties mentioned above, as well as FEMA officials, cited as good practices efforts to bring together local and state providers of governmental services to provide a variety of assistance in one place. For example, FEMA credited one county for their efforts in partnering with a local mental health service provider to offer mental health counseling on site at a Disaster Recovery Center, as opposed to referring individuals to such services off site. Similarly, one Disaster Recovery Center we visited in California included representation from a number of different state agencies, such as the state s contractors licensing board, insurance regulators, department of employment opportunities, and franchise tax board. Officials explained that being able to access a variety of state services in a Disaster Recovery Center can be particularly helpful for fire survivors, as they may have evacuated their homes with very little notice and lost all their identifying documentation to the fire. <3.2. Jurisdictions Experienced a Number of Response and Recovery Challenges> State and county officials described challenges that were present in several of the wildfire disaster declarations that we reviewed. Some of these challenges such as a complex Public Assistance application process or FEMA staff turnover are not specific to wildfires and could also affect recovery efforts after a hurricane, flood, or other natural disaster. Some challenges were more specific to and further complicated by the nature of wildfire disasters. These challenges include the complexity and scale of fire debris removal, shortage of temporary housing for wildfire survivors, and lack of local experience dealing with the magnitude of the wildfires encountered in 2017 and 2018. <3.2.1. Complexity and Timeframes for FEMA Public Assistance Applications> Officials in three of the seven counties we met with said that the onerous and confusing documentation required when applying for Public Assistance grants was a challenge. For example, an official from one county told us that the Public Assistance guidance in effect at the time his county was recovering from disaster contained conflicting information, though he believed this issue has since been resolved. Officials in two counties also described difficulty meeting the deadlines for application submission, especially while managing the other demands of disaster response and recovery. 
We have previously reported on challenges with FEMA s administration of the Public Assistance program, including effectively overseeing and staffing the program, among other things. Officials from FEMA s Public Assistance Division acknowledged that the complexity of the program has been a challenge for local officials in recent years. The officials pointed to the development of a new Public Assistance delivery model as the key initiative to address these challenges. This new delivery model, which includes a new information portal designed to improve local officials ability to upload and submit information, was intended to clarify program requirements, improve operations, and respond to previously-identified challenges, according to FEMA officials. FEMA introduced the new model in California during the recovery phase of the 2017 wildfires. Officials from two of the selected counties stated that the new information portal eased the process of submitting documentation for FEMA review. In 2017, we reported on the historical challenges with FEMA s Public Assistance program and identified additional challenges with the roll-out of the new delivery model, including the need to determine its staffing needs for supporting rollout of the system and strengthen controls over the information system being used. California officials we spoke with also noted that in order for the new delivery model to be used efficiently, it would be helpful for FEMA to provide additional training to stakeholders who use the system. According to FEMA officials, FEMA provided a number of training sessions on the new model to California stakeholders between August 2017 and April 2019. <3.2.2. Frequent Turnover of FEMA Staff> Officials in three of the seven selected counties told us that frequent rotations of FEMA staff during disaster response and recovery was disruptive. For example, after working with state and local officials following a disaster, the rotations of FEMA staff resulted in having to re- share information that was already provided to FEMA, as well as inconsistent advice or interpretation of FEMA guidelines. FEMA officials acknowledged that ensuring continuity following staff turnover has long been an issue in multiple complex disaster environments. They noted a number of reasons why a staff member in a position might turn over. For example, according to FEMA officials, the disasters in 2017, including Hurricanes Harvey, Irma, and Maria, as well as the California wildfires, required FEMA management to redeploy response personnel from one disaster to the next. We have reported on multiple FEMA workforce challenges in prior work and continue to observe workforce challenges in our ongoing work. We are currently reviewing how FEMA deploys and trains staff to meet disaster mission needs and plan to report early in 2020. <3.2.3. Complexity and Scale of Wildfire Debris Removal> Debris removal is an important first step in the disaster recovery process, allowing communities to expedite the recovery process by restoring accessibility to public services and space, while ensuring public health and safety in the aftermath of a disaster. Debris removal posed several challenges for state and local jurisdictions affected by the wildfires. Wildfires typically leave no remaining structure, and the resulting ash contains contaminants that must be carefully removed, wrapped, and disposed of before survivors can move back to their properties. 
This can make the wildfire debris removal process costlier and more complicated than for other types of disaster debris. California s Department of Resources, Recycling, and Recovery typically handles debris removal after local disasters in the state, but it did not have the capacity to handle the high volume of debris caused by the 2017 Northern California wildfires. As a result, the state asked FEMA to assign USACE with the debris removal mission. According to local officials, there was some confusion over how much contaminated soil should be removed from some properties. Specifically, in some cases, USACE removed more soil than necessary at home sites in an attempt to scrape the soil deeply enough to remove all possible contaminants at the site; however, this did not take into account that some contaminants, such as arsenic, occur naturally in the soil. As a result, some property owners were left with large over-excavated pits on their property that needed to be filled in before rebuilding could occur. Figure 5 shows a property site that, according to local officials, had been excavated below the foundation of the home and thus needed to be refilled with soil, complicating the rebuilding effort. In addition, officials from one county stated that USACE staff rotations made it difficult for state and local officials to communicate debris removal options clearly both internally and to the public, leading to confusion among some survivors about their best options for debris removal. In 2018 and 2019, we reported on issues with contracting for wildfire debris removal. We found that USACE s debris removal contracts, while broad enough to cover any type of debris, had been used primarily to manage hurricane debris removal and did not address issues posed by wildfire debris removal. We also found that miscommunication at the federal level resulted in differing expectations between USACE and state and local officials about debris removal work to be performed, such as the types of structures to be removed from private property and acceptable soil contamination levels. According to USACE officials, they relied on FEMA to manage communication with states and localities and to identify and manage expectations about the scope of work to be performed. We recommended, among other things, that FEMA take the lead to work together with USACE to revise the mission assignment policy and related guidance to better incorporate consideration of contracting needs and to ensure clarity of contracting-related coordination responsibilities. DHS concurred with this recommendation and reported that it will take steps such as development of mission assignment project management tools and training for mission assignment work to implement it. <3.2.4. Shortage of Temporary Housing for Wildfire Survivors> According to DHS s 2017 National Preparedness Report, providing effective and affordable temporary housing for disaster survivors has been a longstanding and continuing challenge. Wildfires pose an additional challenge because in contrast to disasters such as hurricanes or floods where there may be a substantial portion of a home left standing, and property may be habitable after the most dangerous debris is removed, wildfires generally destroy entire structures and leave a pile of contaminated debris and soil. This kind of damage requires a lengthier clean-up and necessarily precludes survivors from occupying the property until state and local officials declare the lot safe for habitation. 
In the meantime, one of FEMA s responsibilities under the Mass Care, Emergency Assistance, Temporary Housing and Human Services Emergency Support Function is to help displaced disaster survivors with access to temporary housing. This has posed challenges for some of the counties we spoke with, most notably in select Northern California communities. In particular, officials in two California counties noted that vacancy rates are very low in these areas, and there were few places to house survivors who were either waiting to rebuild on their property or had been living in rental properties that were destroyed. In addition, in one California county there have been a limited number of potential sites available (such as commercial parks or group sites) to place transportable temporary housing units. According to FEMA, several factors limited the number of commercial or group sites available for such housing units, including limited space for the housing units, contaminated utilities, and challenges with local jurisdictions responsible for deciding whether and where to place group sites. According to FEMA officials, the nature of fire debris affects the array of post-disaster housing options that FEMA can offer through its Individual Assistance program. For example, although FEMA can provide replacement assistance for destroyed homes and repair assistance for homes with damage that can be repaired, the complete destruction of homes due to fires significantly lengthens the recovery processes. Rental assistance and lodging reimbursement are limited by lack of access to rental properties, and the use of manufactured housing units is limited by lack of group sites that meet requirements, including adequate space for such units and access to utilities (e.g., potable water not contaminated by fire damage). See Appendix I for more information on this program. FEMA officials acknowledged that providing housing for survivors has long been a challenge for the agency. They also acknowledged that several of FEMA s housing tools are less relevant to wildfires versus other disasters (as discussed above). According to FEMA, the agency is currently reviewing various aspects of its housing mission to better identify ways to address some of these challenges. <3.2.5. Lack of Experience with Large- Scale Wildfires> Officials from two of the counties we spoke with said that their lack of experience in response to and recovery from wildfires of the magnitudes encountered was very challenging. Officials from one of those counties stated that they did not have the knowledge or skill-set needed at the local level to best identify response and recovery needs and relied heavily on FEMA and California s Governor s Office of Emergency Services for resources and training in these areas. Officials from another county stated that neither they nor FEMA were accustomed to the level of destruction in a rural area, which created challenges identifying resources and processes to remove damaged trees from private property, storing the volume of downed trees, and maintaining the few roads available for hauling debris. Officials from another county in California described being unprepared when they were tasked with collecting duplicate payments for private property debris removal after survivors received their insurance benefits. 
Residents who participated in the private property debris removal program who were paid out of FEMA s Public Assistance program, and subsequently also received an insurance benefit for debris removal, were required to repay the federal government for the duplicate benefit. According to these county officials, they were not aware that collection would be their responsibility until about 2 years after the initial debris removal took place. The officials noted that the administrative burden for identifying the affected homeowners and the amount owed and then collecting the payments was significant, and taxed their administrative capacity. They said they wished they had been aware sooner that they would have to absorb this duty, so they could put systems in place. According to FEMA and state officials, however, these requirements were included in FEMA s Public Assistance Program and Policy Guide, which states that local governments are responsible for implementing private property debris removal, including the requirement to collect and reimburse FEMA for any duplicate benefits. Nevertheless, the confusion described by the county government illustrates the difficulty jurisdictional officials with little previous wildfire experience can have navigating complex program rules while simultaneously confronting the disaster aftermath. <4. FEMA Has Identified Lessons Learned from 2017 Wildfires but Could Further Benefit from a Comprehensive Assessment of Its Operations, Policies, and Procedures> <4.1. FEMA Has Prepared an After-Action Report for 2017 California Wildfires> In June 2019, FEMA Region IX which provides disaster assistance in California finalized the after-action report for the October and December 2017 wildfire disasters in Northern and Southern California. FEMA s 2017 wildfire after-action report offered response and recovery lessons learned from both the challenges identified and successful practices. Some, but not all, of these were mirrored in our interviews with California jurisdictions that were affected by recent wildfires. Among its findings, the 2017 wildfire after-action report identified several areas for improvement. For example, FEMA s immediate activation of the Transitional Sheltering Assistance program and lack of a unified information system to track applicants eligibility for all Individual Assistance programs at the time of the wildfires resulted, in some instances, in applicants receiving sheltering benefits inappropriately (i.e., receiving Transitional Sheltering Assistance benefits despite their residence being undamaged). One potential action to address this challenge identified in the report was to add information on Transitional Sheltering Assistance program applicants into the database that FEMA uses to track disaster information to ensure those applicants have access to all benefits and reduce the potential for duplication. FEMA officials have stated that since the 2017 wildfires, policy changes have been made to address this issue, including adding Transitional Sheltering Assistance program applicant data to the information system used to track eligibility for all Individual Assistance programs. In addition, FEMA reported that the typical contracts USACE had in place for debris removal were not designed to address the nature (i.e., fire- related debris) and scope of work required, particularly with respect to private property debris removal. 
The agencies worked together to rapidly scope the statements of work for the debris removal contracts to provide services to survivors, but FEMA ultimately found that the contract requirements lacked detail and clarity, resulting in additional costs. USACE prepared its own after-action report after the 2017 wildfires, which also identified challenges with the scope of its debris removal contracts and the mission assignment task orders, and planned to incorporate lessons learned in future debris removal contracts. According to FEMA Region IX officials, many of the issues regarding debris removal stemmed from not having documented processes in place to govern wildfire debris removal specifically. In its after-action report, FEMA identified potential actions to address these challenges such as developing standard operating procedures in coordination with USACE for fire debris removal to correct these and other identified areas for improvement. According to USACE officials, FEMA subsequently provided funds through a 2018 wildfire disaster declaration to USACE to develop such standard operating procedures. USACE officials told us they had shared these procedures with FEMA and stated that the procedures will help guide future wildfire private property debris removal operations. The 2017 after-action report also identified a number of strengths and best practices during 2017 wildfire response and recovery efforts in California. For example, the report noted that collaboration and pre- existing relationships between federal and state personnel helped to overcome knowledge gaps about certain programs and improved survivor outcomes (such as the placement of temporary housing units based on work done by an interagency task force). In addition, Facebook provided FEMA with pre- and post-disaster survivor locations (provided voluntarily by the survivor) that helped identify where survivors were located after the wildfires. Using this information, FEMA then worked with the state and private sector in order to help plan for short- and long-term housing solutions. <4.2. FEMA Could Improve Its Preparation for Potential Effects of Heightened Wildfire Risks by Comprehensively Assessing Its Operations, Policies, and Procedures> Standards for Internal Control in the Federal Government state that management should identify, analyze, and respond to significant changes that could impact its internal control system, which would include actions established through policies and procedures. Agency management, therefore, should analyze the effect of identified change on policies and procedures and revise such policies and procedures and other elements of its internal control system on a timely basis to maintain effectiveness. The combination of back-to-back devastating wildfire seasons in California, overall upward trends in wildfire disaster declarations, and several factors that point to increased likelihood of severe wildfire activity in the future suggest a change that may have significant impacts on FEMA s operating environment. As shown in figure 6, from 1953 to the present, the number of major disaster declarations from wildfires has increased in nearly every decade since 1950 and most dramatically in the last two decades. During Congressional testimony from March 2018, FEMA s Region IX Administrator stated that fire season has changed from covering spring through early fall to a now year-round event, and that the unprecedented impacts from the 2017 wildfire season would linger for years to come. 
Land use practices and climate trends increase the likelihood that severe and intense wildfires will affect people and communities. As we have described in previous reports, land use practices over the past century have reduced forest and rangeland ecosystems resilience to fire. Land use practices like fire suppression and timber harvesting have contributed to abnormally dense accumulations of vegetation. These accumulations can fuel uncharacteristically large or severe wildfires. At the same time, development occurring in and around wildlands an area often called the wildland-urban interface has increased, placing more people, businesses, and infrastructure at risk. The wildland-urban interface contains 46 million single-family homes, representing about 40 percent of single-family homes in the United States. According to the 2014 Quadrennial Fire Review, 60 percent of new homes built in the United States since 1990 were built in the wildland-urban interface. As the footprint of human activity and settlement into the wildland-urban interface expands, the risk of fire exposure to people and property is expected to increase further. In addition, changing climate conditions, including drier conditions in certain parts of the country, have increased the length and severity of wildfire seasons, according to many scientists and researchers. For example, in the western United States, the average number of days in the fire season increased from approximately 200 in 1980 to approximately 300 in 2013, according to the 2014 Quadrennial Fire Review. According to the U.S. Global Change Research Program s 2018 National Climate Assessment, warmer and drier conditions have led to a greater incidence of large forest fires (fires with an area greater than 386 square miles) in the western United States and Alaska, a trend expected to continue as climate warms and the fire season gets longer. Despite these trends and projections, FEMA does not plan to comprehensively assess operations to determine whether and how policies and procedures might need to change to better respond to changing operational conditions. According to FEMA officials, they had not considered conducting this kind of review, because they believe their existing mechanisms specifically after-action reporting, the continuous improvement process, and program specific mechanisms such as the Public Assistance Change Control Tool will allow them to incorporate relevant lessons into policies and procedures. According to FEMA officials, after a major disaster, FEMA s standard practice is to identify areas for improvement and develop lessons learned that can improve FEMA planning and policy and support national preparedness by preparing an after-action report which is required by FEMA policy. FEMA has a continuous improvement program which serves as the overarching process by which it identifies and responds to operational lessons learned identified in after-action reporting. According to FEMA officials, FEMA headquarters reviews all completed after-action reports to identify any areas for improvement that may need to be addressed through changes in policies and procedures. Although the continuous improvement process and its reliance on after- action reporting offers the opportunity to incorporate discrete lessons learned into select policies and procedures, there are some limitations in its ability to offer a comprehensive assessment of its internal controls in light of the strong potential that wildfire disasters will continue to increase. 
By its nature, after-action reporting captures select issues at a specific time and in a specific place, but it is not a dedicated effort to assess how various policies and procedures may need to be changed to better respond to changing operational conditions. For example, in our discussions with fire-affected jurisdictions, we noted that some programmatic or policy challenges were specific to or made more difficult by the nature of wildfires, such as the complexities of debris removal and difficult housing missions. A comprehensive review of internal controls, such as policies, procedures, and training, may shed light on aspects of FEMA s operations well tested over the years in hurricane and flood situations that could be adapted for greater responsiveness to the wildfire environment, helping to ensure attention to a broad range of issues in addition to those that might be noticed in a specific time and place through after-action reporting. In light of the potential for high-impact wildfires to become more frequent, a dedicated effort to comprehensively assess operations could help FEMA better ensure that its management controls such as policies, procedures, and training are as well designed as possible to respond to the unique challenges. <5. Conclusions> Devastating wildfires have exacted a large human and financial toll in recent years, with 159 lives lost and $2 billion obligated by FEMA in response during the major disasters of 2017 and 2018. FEMA has provided support personnel and resources to affected state and local jurisdictions to aid in wildfire response and recovery efforts. Given some reports of projected increase in risk from wildfires as well as the challenges we have noted in providing housing, conducting debris removal operations, and other areas comprehensively assessing agency operations in response to and recovery from wildfires to determine if any actions or changes to agency policies and procedures are needed could provide guidance or insight for communities that may be affected in the future. Comprehensively identifying, analyzing, and responding to the significant operating changes posed by wildfires, as recommended in internal control standards, could provide FEMA with an opportunity to better ensure the nation is ready to address the unique challenges posed by increased large-scale wildfires. <6. Recommendation for Executive Action> We recommend that the FEMA Administrator comprehensively assess operations to identify any additional updates to its management controls such as policies, procedures, or training that could enhance future response and recovery from large-scale and severe wildfires. (Recommendation 1) <7. Agency Comments and Our Evaluation> In August 2019, we requested comments on a draft of this report from the Departments of Agriculture, Defense, Interior, and Homeland Security. The Departments of Agriculture and Defense had no formal or technical comments. In September 2019, FEMA and the Department of the Interior provided technical comments, which we have incorporated as appropriate. In addition, DHS provided an official letter for inclusion in the report, which can be seen in appendix III, stating that it concurred with our recommendation. DHS s letter describes a number of ongoing and planned actions that it plans to leverage in addressing our recommendation. 
These actions include, among other things, the use of sheltering and housing field teams to support states efforts to house disaster survivors; continued updates to direct housing guidance; developing guidance for the use of FEMA-issued, state-administered direct housing grants authorized by the Disaster Recovery Reform Act of 2018; and development of a project to analyze and improve capabilities and identify areas of innovation in response to wildfire disasters. DHS anticipates that these efforts will be put into effect by December 2020. We will continue to monitor DHS and FEMA s efforts in addressing our recommendation. We will send copies of the final report to the Secretaries of the departments mentioned above, the FEMA Administrator, and appropriate congressional committees. If you or your staff have any questions about this report, please contact me at (404) 679-1875 or curriec@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other key contributors to this report are listed in appendix IV. Appendix I: Federal Emergency Management Agency (FEMA) Individual Assistance and Public Assistance Programs FEMA s Individual Assistance programs provide assistance directly to individuals and households, as well as state, local, tribal, and territorial governments to support individual survivors. This assistance covers necessary expenditures and serious needs that cannot be met through insurance or low-interest loans, such as temporary housing assistance, counseling, unemployment compensation, or medical expenses. FEMA provides this assistance through seven different programmatic areas, with a substantial amount of the assistance coming from the Individuals and Households Program. The Individuals and Households Program provides financial assistance and direct services to eligible individuals and households who have uninsured or underinsured necessary expenses and serious needs. Individuals and Households Program assistance is intended to meet basic needs and supplement recovery efforts and is not a substitute for insurance. The Individuals and Households Program consists of two forms of assistance: Housing Assistance and Other Needs Assistance. Housing Assistance: Housing assistance may be provided in the form of financial assistance, direct assistance, or a combination of the two. Financial assistance may include lodging expense reimbursement for time spent at hotels or other temporary lodging, rental assistance, and home repair or replacement assistance. Direct housing assistance may be provided when applicants are unable to use rental assistance due to a lack of available housing resources. This type of assistance may include the repair and lease of multi- family housing units such as apartments for temporary use by applicants, direct lease assistance, or the provision of transportable temporary housing units, such as recreation vehicles or manufactured housing units. Transportable temporary housing units can be placed on private sites, commercial sites or on group sites. Commercial sites are existing manufactured home sites with available pads that FEMA may lease. Group sites require additional approval when housing needs cannot be met by other direct temporary housing options. They may include publicly-owned land with adequate available utilities. Other Needs Assistance: This consists of financial assistance for other expenses and serious needs caused by the disaster. 
Some Other Needs Assistance is only provided if an applicant does not qualify for a Small Business Administration disaster loan; this assistance would include personal property, moving and storage, and transportation assistance. Other types of Other Needs Assistance can be provided regardless of SBA loan qualification, including funeral, medical, dental, and child care assistance, and other miscellaneous items. Mass Care and Emergency Assistance This program provides life-sustaining services to disaster survivors immediately before a potential incident, during the response phase, and during the beginning of post-disaster recovery. Services provided include sheltering, feeding, distribution of emergency supplies, support for individuals with disabilities and others with access and functional needs, reunification services for adults and children, support for household pets and service/assistance animals, and mass evacuee support. This program provides supplemental federal financial assistance to states, territories, tribal governments, or private nonprofit entities in order to provide the services of a case manager to a disaster survivor. Through this service, a case manager assists a survivor with developing a disaster recovery plan for meeting his or her unmet needs. Crisis Counseling Assistance and Training Program This program provides supplemental funding to eligible state, territorial, tribal, or local governments, and non-governmental organizations to assist disaster-impacted individuals and communities in recovering from the major disasters through the provision of community-based outreach and psycho-educational services. This program provides legal aid to survivors who qualify as low-income through an agreement with the American Bar Association. The service is limited to cases that would not normally incur legal fees, such as assistance with insurance claims or recovery or reproduction of legal documents lost in the disaster. This program provides unemployment benefits and re-employment assistance services to survivors under the responsibility of the U.S. Department of Labor. This assistance is only available to survivors who are not eligible for regular state unemployment insurance. FEMA employs Voluntary Agency Liaisons who establish and maintain relationships with voluntary agencies active in response and recovery, coordinate with the National Voluntary Organizations Active in Disaster, provide guidance on donations, and act as subject matter experts in development of long term recovery groups with local community organizations, faith-based groups, and other voluntary organizations. FEMA s Public Assistance program provides supplemental federal disaster grant assistance to state, local, tribal, and territorial governments, and certain types of private nonprofit organizations for debris removal, emergency protective measures, and the restoration of disaster- damaged, publicly-owned facilities and the facilities of certain private nonprofit organizations. The Public Assistance program also encourages protection of these damaged facilities from future events by providing assistance for hazard mitigation measures. The program which represents the largest share of federal aid from the Disaster Relief Fund is administered through a partnership between FEMA and the state, tribal or territorial grantee, which provides funding to local or tribal entities who are the subrecipients of a Public Assistance grant award. 
The Public Assistance program funds both emergency work and permanent work. Public Assistance for Emergency Work FEMA provides funding for emergency work such as emergency protective measures and debris removal that must be conducted immediately to save lives, protect public health and safety, protect improved property, or eliminate or lessen a threat of immediate additional damage. This assistance is divided into two categories, described below. Debris Removal (Category A): Debris removal activities, such as clearance, removal, and disposal, are eligible if the removal is in the public interest based on whether the work eliminates immediate threats to lives, public health, and safety or of significant damage to improved public or private property; ensures economic recovery of the affected community to the benefit of the community at large; or mitigates risk to life and property by removing substantially damaged structures and associated structures. In limited circumstances, based on the severity of the impact of an incident, FEMA may determine that debris removal from private property is eligible under the Public Assistance Program. If debris on private property is so widespread that it threatens public health and safety or the economic recovery of the community, FEMA may provide Public Assistance funding for debris removal from private property. Emergency Protective Measures (Category B): Emergency protective measures conducted before, during, and after an incident are eligible if the measures: eliminate or lessen immediate threats to lives, public health, or safety; or eliminate or lessen immediate threats of significant additional damage to improved public or private property in a cost-effective manner. Examples of such measures include transporting and pre-positioning equipment, flood fighting, supplies and commodities, evacuation and sheltering, child care, security, or searches to locate and recover human remains. Public Assistance for Permanent Work Permanent Work is work required to restore a facility to its pre-disaster design (size and capacity) and function in accordance with applicable codes and standards. This assistance is divided into the five categories listed below: Roads and Bridges (Category C) Water Control Facilities (Category D) Buildings and Equipment (Category E) Utilities (Category F) Parks, Recreational, Other (Category G) Appendix II: Information on Major Disasters Resulting from Wildfires, 2015 through 2018 Below are details on the six wildfire disasters selected for our review and the support the Federal Emergency Management Agency (FEMA) provided under the major disaster declarations. <8. Northern California Wildfires, September 2015> On September 9, 2015, the Butte Fire began burning across Calaveras County, and on September 12, 2015, the Valley Fire began burning across Lake County. FEMA subsequently approved a Fire Management Assistance Grant (FMAG) for the Butte Fire on September 10, 2015, and an FMAG for the Valley Fire on September 12, 2015. On September 22, 2015, the President issued a major disaster declaration at the request of the state for Lake County, which was ultimately expanded to include Calaveras County. On September 28, 2015, FEMA in collaboration with the state and counties opened two Disaster Recovery Centers in Calaveras and Lake Counties, and on October 2, 2015, FEMA opened a third Disaster Recovery Center in Lake County. In total, the Valley and Butte Fires burned 146,935 acres, destroyed 2,876 structures, and resulted in 6 deaths. 
See figure 7 for a map of the fire locations, and tables 4 and 5 for data on FEMA's mission assignments, Individual Assistance, and Public Assistance support. <9. East Tennessee Wildfires, November 2016> On November 28, 2016, strong winds pushed a wildfire named the Chimney Tops 2 fire beyond the boundaries of the Great Smoky Mountains National Park and into the surrounding wildland-urban interface. The fire primarily spread into Sevier County, Tennessee, which includes the cities of Gatlinburg and Pigeon Forge. That same day, FEMA approved an FMAG for Tennessee to support fire suppression activities. On December 15, following a request by the governor of Tennessee on December 9, the President issued a major disaster declaration for Sevier County. On December 23 and December 28, FEMA in collaboration with the state and counties opened Disaster Recovery Centers in Gatlinburg and Pigeon Forge, respectively. The Tennessee wildfires ultimately burned approximately 17,000 acres, destroyed 2,545 structures, and led to 14 fatalities. See figure 8 for a map of the fire's location, and tables 6 and 7 for data on FEMA's mission assignments, Individual Assistance, and Public Assistance support. <10. Northern and Southern California Wildfires, October 2017> On October 8, 2017, multiple fires began burning in northern California, spreading rapidly due to high winds and dry conditions. Among these fires was the Tubbs Fire in Sonoma and Napa Counties, which was, at the time, the most destructive fire in California's history. On October 9, 2017, FEMA approved FMAGs for ten separate fires. On October 10, 2017, the President issued a major disaster declaration at the request of the state for seven counties: Butte, Lake, Mendocino, Napa, Nevada, Sonoma, and Yuba. On October 13, 2017, Solano County and Orange County (in southern California) were added to the declaration. In total, the fires included in this disaster declaration burned 240,138 acres, destroyed 8,924 structures, and resulted in 44 deaths. From October 17 through November 28, FEMA in collaboration with the state and counties established five Disaster Recovery Centers to assist disaster survivors. See figure 9 for a map of the fires' locations, and tables 8 and 9 for data on FEMA's mission assignments (including FEMA's assignment of debris removal responsibilities to the U.S. Army Corps of Engineers), Individual Assistance, and Public Assistance support. Figure 10 provides an aerial snapshot of the destruction in one area of the city of Santa Rosa in Sonoma County. <11. Southern California Wildfires, Flooding, Mudflows and Debris Flows, December 2017> On December 4, 2017, the Thomas Fire started burning in Ventura County. Over the next three days, the Thomas Fire and other wildfires spread rapidly through Ventura and neighboring counties, due in part to the Santa Ana winds, and FEMA approved a number of FMAGs for these wildfires. On December 20, the governor of California requested a major disaster declaration for Los Angeles, San Diego, Santa Barbara, and Ventura Counties. The request was approved on January 2, 2018, for Santa Barbara and Ventura Counties for Public Assistance. In the week that followed, heavy rains exacerbated the damage caused by the fires, leading to mudflows and debris flows. On January 10, FEMA expanded the disaster declaration to include the flooding, mudflows, and debris flows related to the wildfires.
Five days later, FEMA added Los Angeles and San Diego Counties to the disaster declaration, and granted all four counties eligibility for Individual Assistance, in addition to the Public Assistance eligibility previously approved. From January 19 through February 5, 2018, FEMA in collaboration with the state and counties established five Disaster Recovery Centers to assist disaster survivors. The Southern California wildfires, debris flows, and mudflows ultimately burned 308,083 acres, destroyed 1,378 structures, and caused 23 fatalities. See figure 11 for a map of the fires' locations, and tables 10 and 11 for data on FEMA's mission assignments, Individual Assistance, and Public Assistance support. <12. Northern California Wildfires and High Winds, July 2018> On July 23, 2018, the Carr Fire began burning in Shasta County. On July 27, 2018, the Mendocino Complex Fire, a combination of the River and Ranch Fires, began burning in Lake County. FEMA soon approved FMAGs for these fires. On August 4, 2018, the President issued a major disaster declaration for Shasta County, which was ultimately expanded to include Lake County. On August 9, 2018, FEMA in collaboration with the state and counties established a Disaster Recovery Center in Shasta County, with a second Disaster Recovery Center established in Lake County on August 21, 2018. One of the wildfires, the Mendocino Complex Fire, was the largest fire in California's history, burning 459,123 acres. In total, the Carr and Mendocino Complex Fires burned 688,774 acres, destroyed 1,894 structures, and resulted in 4 deaths. See figure 12 for a map of the fires' locations, and tables 12 and 13 for data on FEMA's mission assignments, Individual Assistance, and Public Assistance support. Figure 13 shows the aftermath of the Carr Fire in one residential neighborhood. <13. Northern and Southern California Wildfires, November 2018> <13.1. Information on Fires and Assistance Provided> On November 8, 2018, the Camp Fire struck the city of Paradise in Butte County. According to California's Department of Forestry and Fire Protection, the Camp Fire grew into the deadliest and most destructive fire in California's history, resulting in 18,793 structures destroyed, 153,336 acres burned, and 85 deaths. On the same day, two other major fires, the Woolsey Fire in Los Angeles County and the Hill Fire in Ventura County, began. On November 8-9, FEMA approved FMAGs for these fires, and the President issued a major disaster declaration for these counties on November 12, 2018. FEMA in collaboration with the state and counties opened a Disaster Recovery Center in Butte County on November 16 and four other Disaster Recovery Centers in Butte, Ventura, and Los Angeles counties over the next month. In total, the three fires resulted in 20,295 structures destroyed, 254,816 acres burned, and 88 deaths. See figure 14 for a map of the fires' locations, and tables 14 and 15 for data on FEMA's mission assignments, Individual Assistance, and Public Assistance support. Appendix III: Comments from the Department of Homeland Security Appendix IV: GAO Contact and Staff Acknowledgments <14. GAO Contact> <15. Staff Acknowledgments> In addition to the contact above, the following staff members made significant contributions to this report: Kathryn Godfrey (Assistant Director), Adam Couvillion (Analyst-in-Charge), Elizabeth Dretsch, Ricki Gaber, Eric Hauswirth, Hannah Hubbard, Tracey King, John Mingus, Ben Nelson, and Kevin Reeves. | Why GAO Did This Study
In 2017 and 2018, deadly wildfires struck the state of California, tragically resulting in 159 deaths and over 32,000 structures destroyed. FEMA, as the lead federal agency for responding to and recovering from disasters, has obligated about $2 billion in housing, debris removal, and other assistance following these disasters. According to recent environmental assessments, fire seasons are increasing in length, putting more people and infrastructure at risk.
GAO was asked to assess a range of response and recovery issues related to the 2017 disasters. Specifically, this report addresses (1) the assistance FEMA provided to jurisdictions in response to major disaster declarations stemming from wildfires from 2015 through 2018, (2) selected jurisdictions' perspectives on FEMA wildfire response and recovery efforts, and (3) the extent to which FEMA has identified and addressed key lessons learned. GAO obtained data on FEMA wildfire disaster assistance and statistics on fire damages and fatalities; reviewed key documentation, such as incident action plans and after-action reports; and interviewed officials from FEMA headquarters and regional offices, states, and a nonprobability sample of affected local jurisdictions (e.g., counties).
What GAO Found
For wildfire-related major disaster declarations from 2015 through 2018, the Federal Emergency Management Agency (FEMA)—consistent with its authorities and responsibilities—helped state and local officials obtain and coordinate federal resources to provide for the needs of wildfire survivors and execute recovery efforts. This support totaled over $2.4 billion and included providing staff to assist at Emergency Operations Centers and establishing Disaster Recovery Centers to coordinate disaster assistance services for survivors. In addition, FEMA provided Public Assistance grant funds to local jurisdictions to help address community infrastructure needs, such as debris removal. FEMA also assigned federal agencies to perform various missions to help with response and recovery—for example, the U.S. Army Corps of Engineers was assigned to contract for debris removal services in some instances.
Officials from jurisdictions that GAO spoke with described practices that aided in wildfire response and recovery, but also reported experiencing challenges. Specifically, officials in affected areas noted that collaboration between FEMA and California's Office of Emergency Services allowed for timely information sharing, and FEMA's assistance at Disaster Recovery Centers greatly assisted survivors in obtaining necessary services. Among the challenges cited were onerous documentation requirements for FEMA's Public Assistance grant program and locating temporary housing for survivors whose homes were completely destroyed. In addition, the unique challenge of removing wildfire debris led to confusion over soil excavation standards and led to overexcavation on some homeowners' lots, lengthening the rebuilding process.
FEMA has developed an after-action report identifying lessons learned from the October and December 2017 wildfires, but could benefit from a more comprehensive assessment of its operations to determine if additional actions are needed to ensure that policies and procedures are best suited to prepare for future wildfires. The combination of recent devastating wildfires and projections for increased wildfire activity suggests a potential change in FEMA's operating environment. According to Standards for Internal Control in the Federal Government, such changes should be analyzed and addressed to help ensure that agencies maintain their effectiveness.
What GAO Recommends
GAO recommends that FEMA comprehensively assess operations to identify additional updates to policies and procedures that could enhance future wildfire response and recovery efforts. The Department of Homeland Security agreed with our recommendation.
<1. Multiple Aspects of the REAC Inspection Program Have Weaknesses>
Our March 2019 report identified a number of areas in which HUD needs to improve its physical inspection process and its oversight of inspectors, which could help better ensure the health and safety of households that live in HUD-assisted properties. These areas include conducting a comprehensive review of the inspection process; incorporating sampling error as part of determining inspection frequency and enforcement actions; tracking whether inspections are conducted by their expected date; enhancing the process and practices related to selecting, training, and evaluating inspectors; and ensuring that new quality control policies and procedures are implemented.
<1.1. Comprehensive Review of REAC Inspection Process>
We found that REAC had not conducted a comprehensive review of its inspection process since 2001, although new risks to its process have emerged since then. For example, REAC staff have raised concerns that some property owners have taken advantage of the scoring system and others have misrepresented the conditions of their properties. Specifically, because more points are deducted for deficiencies on the property site than for deficiencies in a dwelling unit, some property owners prioritize site repairs over unit repairs. Additionally, some property owners attempt to cover up, rather than address, deficiencies, such as by using mulch on a building exterior to hide erosion. REAC staff also have raised concerns about property owners employing current or former REAC contract inspectors to help prepare for an inspection, sometimes by guiding owners to repair just enough to pass inspection rather than comprehensively addressing deficiencies. REAC also continues to find that some contract inspectors conduct inspections that do not meet REAC's quality standards. Furthermore, REAC fundamentally changed the entities that conduct inspections. In 1998, REAC employed a few large inspection companies to conduct the inspections. However, in 2005, REAC introduced the reverse auction program and opened up the inspection process to a larger number of small businesses, which resulted in a change in the composition of inspectors. We found that without a comprehensive review, REAC cannot determine if it has been meeting the goal of producing inspections that are reliable, replicable, and reasonable. We recommended that REAC conduct a comprehensive review of the physical inspection process, and HUD agreed with this recommendation. In November 2019, HUD officials told us that they recently completed a comprehensive review of the physical inspection process. In supporting documentation, HUD stated that the current model was insufficient for evaluating HUD-assisted housing when compared to modern expectations of housing quality, and that there is now a need to focus more on health and safety of residents and less on asset preservation and condition and appearance items. We have been assessing HUD's recent review to determine whether it has fully addressed our recommendation.
<1.2. Incorporating Sampling Errors>
We also found that REAC may not be identifying all properties in need of more frequent inspections or enforcement actions because it does not consider sampling errors of the inspection scores. For large properties, REAC inspects a statistical sample of the property's units and buildings rather than all of them.
The results for the sample are then used to estimate a score that represents the condition of the entire property. HUD takes enforcement action for multifamily properties with a score below 60. However, sampling introduces a degree of uncertainty, called sampling error, which statisticians commonly express as a range associated with numerical results. For example, for a property that scored 62 on its physical inspection, due to sampling error, the range associated with this score could be between 56 on the lower bound and 68 on the upper bound. REAC would consider this a passing score that requires an annual inspection and no enforcement action, although the lower bound fell below 60. REAC previously calculated sampling errors but ceased doing so in 2013, according to REAC officials, in part because of a lack of resources and also because they believed there was no need to calculate them. Based on our analysis of REAC inspection data, HUD could have taken enforcement actions against more properties if REAC had taken sampling errors in inspection scores into account. For example, from fiscal years 2002 through 2013, about 4.3 percent of inspections of multifamily and public housing properties had an inspection score of 60 or slightly above 60 but had a lower bound score under 60. Without considering sampling errors when determining whether enforcement action is needed, REAC will not identify some properties that may require more frequent inspections or enforcement actions. We recommended in our March 2019 report that REAC resume calculating the sampling error associated with the physical inspection score for each property, identify what changes may be needed for HUD to use sampling error results, and consider those results when determining whether more frequent inspections or enforcement actions would be needed. HUD neither agreed nor disagreed with this recommendation. However, since our report was issued, HUD said that by September 30, 2020, REAC planned to include the standard error calculations in the next version of its scoring software for physical inspections. REAC officials also stated that a task team concluded that the use of sampling error likely would have no impact on any individual enforcement action. However, REAC s statement appears to contradict its own policies because inspection scores alone are used to determine whether some properties are referred for potential enforcement actions. We will continue to monitor REAC s actions regarding this recommendation, including how it uses sampling error results to make decisions about properties. <1.3. Selecting, Training, and Evaluating Inspectors> In our March 2019 report, we also found that REAC lacked formal mechanisms to assess the effectiveness of its training program for contractors hired to inspect properties (contract inspectors) and for HUD employees responsible for monitoring and overseeing contract inspectors (quality assurance inspectors). Unlike professional inspection organizations, REAC does not have continuing education requirements. Formal mechanisms to assess the effectiveness of its training program could help REAC ensure that its program supports the development needs of inspectors. Furthermore, requiring continuing education could help REAC ensure that inspectors are current on any changes in REAC s policies or industry standards. We also found weaknesses in REAC s process for evaluating the performance of inspectors, which could hinder its ability to ensure the quality of inspections. 
We made a number of recommendations related to the selection, training, and performance evaluation of inspectors. Specifically, we recommended that HUD take the following actions:
- Follow through on REAC's plan to create a process to verify candidate qualifications for contract inspectors, for example, by calling references and requesting documentation from candidates that supports their completion of 250 residential or commercial inspections.
- Develop a process to evaluate the effectiveness of REAC's training program, for example, by reviewing the results of tests or soliciting participant feedback.
- Revise training for quality assurance inspectors to better reflect their job duties.
- Develop continuing education requirements for contract and quality assurance inspectors.
- Review performance standards for quality assurance inspectors and revise them to better reflect the skills and supporting behaviors that quality assurance inspectors need to effectively contribute to REAC's mission.
HUD agreed with these recommendations, and we have been evaluating actions it has taken in response to them since our report was issued. For example, in November 2019, HUD officials said that they were moving toward a model of contracting with larger firms to conduct physical inspections of properties. In this model, HUD plans to put the first level of responsibility on the contractor to do its own due diligence on inspector candidates, and the contractor would be required to review 25 verifiable prior inspections completed by each inspector candidate. A REAC official then would be expected to select a sample of the candidate's inspections to review. In response to our recommendation about revising training for quality assurance inspectors, REAC said that it recently began requiring a minimum of 8 hours of continuing education annually for all quality assurance staff. As of November 2019, REAC had not yet provided us with information about the subject matter of that training. Since our report was issued, REAC also developed continuing education requirements for contract and quality assurance inspectors, which it said will be required beginning in January 2020. In addition, REAC has developed updated performance standards for quality assurance inspectors, which REAC officials said were under review. REAC considers the new standards to be more aligned with the job responsibilities of quality assurance inspectors.
<1.4. Meeting Target Dates for Inspections>
We also found that REAC did not always meet its schedule for inspecting multifamily properties or track progress toward meeting scheduling requirements. REAC did not meet its schedule for about 20 percent of multifamily property inspections from calendar years 2013 through 2017. On average, REAC conducted inspections for these properties about 6 months past the targeted date. REAC staff told us that there may be legitimate reasons for not conducting an inspection according to the targeted date. For example, the Office of Multifamily Housing, which oversees the performance of properties that receive project-based assistance, can delay an inspection for reasons such as natural disasters or major rehabilitation to the property. However, REAC maintains limited data on the reasons why inspections have been rescheduled or cancelled. In addition, these data are not readily available to understand retrospectively why an inspection did not occur on schedule.
REAC also does not track its progress toward meeting its requirement for inspecting multifamily properties within prescribed time frames. REAC s inability to adhere to the inspection schedule could hinder the Office of Multifamily Housing s ability to monitor the physical condition of properties on a timely basis and take enforcement actions when warranted. Furthermore, the lack of a mechanism to track REAC s progress toward meeting its requirement for inspecting multifamily properties hinders its ability to determine what factors have contributed to delays in conducting the inspections. In our March 2019 report, we recommended that REAC track on a routine basis whether it conducts inspections of multifamily housing properties in accordance with federal guidelines for scheduling, as well as coordinate with the Office of Multifamily Housing to minimize the number of properties that can cancel or reschedule their physical inspections. HUD partially agreed with this recommendation. Since our report was issued, REAC officials told us that REAC developed an electronic spreadsheet to better track information about its inspections, and they expect information technology enhancements that would automate the tracking of information about these inspections to be deployed by September 1, 2020. HUD s Office of Multifamily Housing also issued a memorandum in March 2019 that provides guidance on when a field office may approve an owner s request to delay an inspection. We will continue to monitor HUD s actions related to this recommendation. <1.5. Implementing New Quality Control Policies and Procedures> In our March 2019 report, we found that REAC had yet to implement policies and procedures for its Quality Control group, which was formed in 2017. REAC created the Quality Control group to standardize quality assurance inspector reviews by conducting more frequent oversight and looking for trends across all quality assurance inspectors, according to a Quality Control official. In November 2018, Quality Control developed a mission statement that says that the primary goal of the group is to improve the consistency of inspections. Also in November 2018, Quality Control developed procedures for reviewing quality assurance inspectors, which include processes for conducting field reviews of completed inspections, criteria for acceptable inspections, and processes for providing feedback. An official from the group told us both its mission and procedures have not been implemented, in part because Quality Control staff repeatedly have been occupied with other special projects. Without finalizing and implementing its policies and procedures for reviewing quality assurance inspectors, Quality Control may not be able to provide consistent reviews of quality assurance inspectors, which could affect the quality of inspections and the feedback and coaching that quality assurance inspectors provide to contract inspectors. We recommended that REAC ensure that Quality Control s policies and procedures for overseeing quality assurance inspectors are implemented, and HUD agreed with this recommendation. Since our report was issued, REAC has begun to implement this recommendation by clarifying in writing the roles, responsibilities, and objectives of the Quality Control group, including how the group plans to support changes in REAC s inspection program. In determining the status of our recommendation, we will look for evidence that the group has been consistently implementing its policies and procedures. <1.6. 
Other Recommendations and Actions HUD Has Taken> In addition, our March 2019 report made several other recommendations regarding the physical inspection process and oversight of inspectors. These recommendations addressed documenting the sampling methodology for the inspection process, designing and implementing an evaluation plan for assessing the effectiveness of REAC s pilot program for staffing inspections in hard- to-staff geographic areas, implementing internal HUD recommendations, implementing a plan for meeting management targets for reviews by quality assurance inspectors, and reporting to Congress on why the agency has not complied with a Consolidated Appropriations Act requirement. HUD generally agreed with these recommendations. While HUD has taken some steps, it had not fully addressed them as of November 2019. We have been assessing the actions HUD has taken and will continue to monitor HUD s progress toward implementing these recommendations. HUD has been undertaking significant changes to the REAC physical inspection program. In a Federal Register notice published on August 21, 2019, HUD said it was soliciting comments on a proposed voluntary demonstration of a new physical inspection process, called the National Standards for the Physical Inspection of Real Estate. According to HUD officials, the new inspection model is intended to address issues of inspections not always identifying health and safety conditions and properties with poor unit conditions passing inspections, among other things. HUD officials have said that a transition to the new model may take 2 years or more. HUD also has been taking steps to replace its reverse auction program with a program in which large contractors will be responsible for conducting physical inspections. We will continue to monitor HUD s actions regarding the recommendations, as well as HUD s activities more broadly related to implementing a new inspection model. Full implementation of the recommendations, even as the inspection program undergoes changes, can help REAC to ensure that properties are decent, safe, sanitary, and in good repair. <2. HUD Needs to Better Monitor Compliance with Lead Paint Regulations and Measure and Report on Performance of Lead Efforts Compliance Monitoring and Enforcement> Our June 2018 report identified a number of areas in which HUD needs to improve its efforts to identify and address lead paint hazards and protect children in low-income housing from lifelong health problems. Among other issues, we identified shortcomings in compliance monitoring and enforcement, inspection standards, and performance assessment and reporting. Our June 2018 report noted that HUD began taking steps in 2016 to monitor how PHAs comply with lead paint regulations. These steps included tracking the status of lead inspection reports for public housing properties and PHA-reported information about cases of children with elevated blood lead levels living in voucher and public housing units. However, we also identified several limitations with HUD s monitoring efforts. For example, HUD relies in part on PHAs self-certifying their compliance with lead paint regulations, but investigations found that some PHA officials may have falsely certified that they were in compliance. Also, on-site compliance reviews performed by HUD staff can be used to determine if PHAs are in compliance with these regulations, but HUD performs a limited number of these reviews annually. 
In fiscal year 2017, HUD conducted these reviews at less than 2 percent of the roughly 4,000 PHAs. Finally, HUD does not have data readily available on the physical condition of the roughly 2.5 million voucher units or these units compliance with lead paint regulations because the individual PHAs keep these data. These limitations in HUD s monitoring suggest that HUD may not be fully aware of the extent to which children may live in unsafe units. As a result, we recommended that HUD establish a plan to mitigate and address risks in its lead paint compliance monitoring processes. These actions could further strengthen HUD s oversight and keep PHAs accountable for ensuring that housing units are lead-safe. HUD agreed with the recommendation. As of November 2019, HUD officials told us the agency had taken steps to implement the recommendation, including requiring PHAs to submit appropriate documentation regarding public housing units compliance with lead paint regulations and updating an internal checklist for on-site compliance reviews that HUD staff conduct. We will continue to monitor HUD s progress in response to our recommendation. Our 2018 report also found that HUD did not have detailed procedures to address PHA noncompliance with lead paint regulations or to determine when enforcement decisions might be needed. HUD staff stated that they address PHA noncompliance through ongoing communication and technical assistance. However, HUD has not documented specific actions staff should perform when deficiencies are identified. Furthermore, in response to our requests for information on enforcement actions taken, HUD was able to provide information on only one enforcement action, which dated from 2013. As a result, we recommended that HUD develop and document procedures to ensure staff take consistent and timely steps to address issues of PHA noncompliance with lead paint regulations. HUD generally agreed with the recommendation. As of November 2019, HUD officials told us procedures were in draft form and under internal review and were not expected to be finalized until spring 2020. HUD officials noted that the draft procedures could help HUD staff decide when an enforcement action might be appropriate, including determining how long PHAs have to resolve noncompliance. <2.1. Inspection Standards> We also found that HUD s Lead Safe Housing Rule requires a stricter lead inspection standard for public housing than for voucher units. For public housing, inspectors must conduct a risk assessment that includes testing paint chips and dust for the presence of lead paint. For voucher units, inspectors conduct a visual assessment that includes looking for deteriorated paint or visible surface dust but does not include any testing of paint chips or samples. As a result of the different inspection standards in the two programs, children living in voucher units may receive less protection from lead paint hazards than children living in public housing. According to agency officials, HUD does not have the statutory authority to require the more stringent inspection in the voucher program. In our June 2018 report, we recommended that HUD request authority from Congress to use the stricter lead inspection standard in the voucher program as indicated by analysis of health effects for children, the impact on landlord participation in the program, and other relevant factors. 
In August 2018, HUD officials told us that they planned to convene a working group to design and conduct a statistically rigorous study on the impact of risk assessments to help decide whether to support statutory change for greater flexibility in strengthening inspection standards for pre- 1978 units under the voucher program. Such an analysis could be useful in evaluating the potential benefits and risks of a change in the voucher program, and we will continue to monitor the progress made by the working group. As of November 2019, HUD officials told us they were working on a demonstration proposal to test an alternative inspection standard in the voucher program. The officials noted that details of the demonstration proposal were not currently available. Separately, we have ongoing work reviewing possible changes in the inspection standard for the voucher program. This work started in September 2019 and will include an in-depth review of the impact a change in the inspection standard may have on the cost and length of time of inspections, as well as the impact on landlords and families participating in the voucher program. <2.2. Performance Assessment and Reporting> Our June 2018 report also identified weaknesses in HUD s performance assessment of and reporting on its lead-safety efforts. We found that HUD had taken limited steps to measure, evaluate, and report on the performance of its programmatic efforts to ensure that housing is lead- safe. First, HUD lacked comprehensive goals and performance measures for its lead-reduction efforts. We found that HUD did not track the number of housing units in the voucher or public housing programs that were lead-safe. At the time of our report, HUD officials told us that the agency did not have systems to count the number of housing units made lead- safe in these two programs. HUD had begun discussing whether existing databases could be used to count lead-safe housing units but did not provided us with details at that time. Second, HUD had not formalized plans and did not have a time frame for evaluating the effectiveness of its lead paint regulations. Third, it had not complied with annual statutory reporting requirements and last reported on its lead efforts in 1997. We noted that by improving its measurement of whether its housing is lead- safe and evaluating and reporting on its efforts, HUD will be better positioned to inform Congress and the public about its progress toward ensuring that housing is lead-safe for residents. As a result of these findings, we recommended that HUD develop performance goals and measures, including a measure to track its efforts to ensure that housing units in its rental assistance programs were lead- safe. Additionally, we recommended that HUD finalize plans for evaluating the effectiveness of its lead paint regulations. Finally, we recommended that HUD complete statutory reporting requirements and make the reports publicly available. HUD generally agreed with these recommendations. In August 2018, HUD told us that it would use existing data systems to begin to establish a baseline for reporting lead-safe housing units in its rental assistance programs. As of November 2019, HUD officials told us they still were exploring whether current data systems could be used to count the number of lead-safe housing units in HUD s rental assistance programs. 
According to HUD officials, for public housing, HUD has made progress in counting housing units that have been made lead-safe using funds from the Lead-Based Paint Capital Fund Program. However, officials told us data will not be available until spring 2020. To evaluate the effectiveness of lead paint regulations, in November 2019 HUD officials told us they planned to use data from the forthcoming update to the American Healthy Homes Survey to better estimate the prevalence of lead paint hazards in federally assisted housing. However, officials told us the findings from the updated survey likely would not be available until summer 2020. With respect to complying with statutory reporting requirements, in November 2019, HUD officials told us they planned to issue a report to Congress on the agency's lead efforts in early 2020. We will continue to monitor HUD's efforts to implement these recommendations. In summary, it is essential to strengthen HUD's oversight and keep PHAs accountable for ensuring that housing units are lead-safe because children continue to test positive for lead while living in HUD-assisted housing. As of November 2019, HUD officials told us they continue to learn of confirmed cases of children testing positive for lead while living in HUD-assisted housing because PHAs are required to record the cases in a HUD database. We maintain that improvements to the areas noted in this statement today will help HUD better protect children from lifelong health problems. Chairman Clay, Ranking Member Stivers, and Members of the Subcommittee, this concludes my statement for the record.
<3. GAO Contact and Staff Acknowledgments>
If you or your staff have any questions about this statement, please contact Daniel Garcia-Diaz, Director, Financial Markets and Community Investment, at (202) 512-8678 or garciadiazd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this statement are Beth Faraguna and Andy Pauline (Assistant Directors), Cory Marzullo (Analyst in Charge), Rachel Batkins, Carl Barden, Charlene Calhoon, Rudy Chatlos, Jeff Harner, Jill Lacey, Lisa Moore, Marc Molino, José Peña, Rhonda Rose, Jessica Sandler, Jennifer Schwartz, Tyler Spunaugle, and Nina Thomas-Diggs. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study
As of the end of 2018, roughly 4.4 million low-income households were served by HUD's three largest rental assistance programs. HUD has responsibilities for ensuring that housing units provided under these programs are decent, safe, sanitary, and in good repair, as well as for identifying and addressing lead paint hazards in these units.
GAO issued reports in March 2019 ( GAO-19-254 ) on HUD's physical inspections of HUD-assisted properties and in June 2018 on lead paint hazards in the public housing and voucher programs ( GAO-18-394 ). This statement is based on these two reports and discusses prior GAO findings on (1) REAC inspections and inspector oversight and (2) lead paint hazards. For the March 2019 report, GAO reviewed HUD documents and data related to REAC's physical inspection process. For the June 2018 report, GAO reviewed HUD documents and information related to its compliance efforts, performance measures, and reporting.
In March 2019, GAO made 14 recommendations to HUD to improve the physical inspections process and oversight of inspectors. In June 2018, GAO made six recommendations to HUD to improve compliance monitoring processes, inspection standards, and performance assessment and reporting on lead reduction efforts in federally assisted properties. HUD generally agreed with these recommendations. As of November 2019, HUD officials had identified planned steps to implement most of these recommendations but had not fully addressed them.
What GAO Found
The Department of Housing and Urban Development (HUD) plays an important role in providing decent and safe housing for households receiving federal rental assistance. However, HUD needs to improve its physical inspection program and its efforts to identify and address lead paint hazards in federally assisted housing. To that end, GAO made 20 recommendations on these issues in its March 2019 and June 2018 reports.
Physical inspections of properties. HUD's Real Estate Assessment Center (REAC) is responsible for conducting physical inspections of HUD-assisted properties. Despite longstanding processes to inspect properties and take action against owners who do not address physical deficiencies, HUD continues to find some properties in poor physical condition and with life-threatening health and safety issues. In a March 2019 report, GAO identified a number of areas in which HUD needed to improve its physical inspection process and oversight of inspectors, which could help ensure the health and safety of those who live in HUD-assisted properties. For example, REAC had not conducted a comprehensive review of its inspection process since 2001, although new risks to the process have emerged since then. A comprehensive review could help REAC identify risks and ensure it meets the goal of producing reliable inspections.
In addition, REAC uses contractors to inspect properties; these contract inspectors are trained and overseen by HUD staff known as quality assurance inspectors. However, GAO found REAC lacked formal mechanisms to assess the effectiveness of its training program for contractor inspectors and for HUD employees responsible for monitoring and overseeing contract inspectors. And, unlike professional inspection organizations, REAC does not have continuing education requirements. Formal mechanisms to assess the effectiveness of its training program and requirements for continuing education could help REAC ensure its program supports development needs of inspectors and that inspectors are current on any changes in policy or industry standards.
Lead paint hazards. GAO also identified a number of areas in which HUD could improve its efforts to identify and address lead paint hazards to protect children from lifelong health problems. Lead paint hazards (such as dust containing lead and chips from deteriorated lead-based paint) are the most common source of lead exposure for U.S. children. In a June 2018 report, GAO identified shortcomings in HUD's compliance monitoring and enforcement, inspection standards, and performance assessment and reporting for lead-reduction efforts. For example, HUD's monitoring efforts relied in part on public housing agencies to self-certify compliance with lead paint regulations. Additionally, the lead inspection standard for the voucher program is less strict than that for the public housing program. As a result, children living in voucher units may receive less protection from lead paint hazards than children living in public housing. Furthermore, GAO found that HUD did not track the number of lead-safe housing units in the voucher or public housing programs. Therefore, HUD may not be fully aware of the extent to which children have been living in unsafe units.
<1. Background>
<1.1. F-35 Program>
The F-35 Lightning II program is a joint, multinational acquisition program intended to develop and field a family of next-generation strike fighter aircraft for the U.S. Air Force, Navy, and Marine Corps (hereinafter referred to as the services); seven international partners; and four foreign military sales customers (collectively hereinafter referred to as program participants). The program has developed and is delivering three variants of the F-35 aircraft:
- F-35A, the conventional takeoff and landing variant for the Air Force (see fig. 1);
- F-35B, the short takeoff and vertical landing variant for the Marine Corps; and
- F-35C, the carrier-suitable variant for the Navy.
The characteristics of the services' variants are similar in that each is intended to be a multi-role, stealthy strike aircraft, but each service's variant also has unique operating requirements. For example, the Marine Corps requires that the F-35B be capable of operating from aircraft carriers, amphibious ships, and main and austere operating bases alike, requiring the ability to conduct short takeoffs and vertical landings. DOD initiated the F-35 program in October 2001. Since then, the Marine Corps and Air Force declared initial operational capability in 2015 and 2016, respectively, while the Navy declared initial operational capability in February 2019. Operational testing of the F-35 aircraft began in December 2018 and is currently scheduled to be completed in late 2020. At that time, DOD will make a decision on whether to proceed with plans to begin full-rate production of the aircraft. DOD has, concurrently, been fielding and operating a growing fleet of aircraft as part of low-rate initial production. As of October 2019, more than 435 U.S. and international aircraft had been fielded and were operating from 19 sites worldwide. By 2023, the global F-35 fleet is expected to expand to more than 1,100 aircraft across 43 operational sites. In total, the program participants plan to purchase more than 3,300 F-35 aircraft, with the U.S. services planning to purchase nearly 2,500 of those aircraft. See Figure 2 for a timeline of anticipated worldwide fleet growth in the F-35 program. DOD has two primary contractors for the F-35 program: Lockheed Martin for the overall aircraft system and Pratt & Whitney for the engine. As the prime contractor for the overall aircraft system, Lockheed Martin (hereinafter referred to as the prime contractor) is responsible for managing the F-35 supply chain, depot maintenance, and pilot and maintainer training, as well as for providing engineering and technical support. Currently, DOD is contracting for this support with the prime contractor largely through annual contracts. It plans to transition to multiple-year, fixed-price, performance-based sustainment contracts when the program achieves certain condition-based criteria, including the establishment of critical sustainment capabilities and the government's ability to collect and more fully assess performance and cost data. In addition, the U.S. Air Force, Navy, and Marine Corps have each established an F-35 integration office or similar construct focused on how the services will operate and afford the F-35, among other things. Figure 3 depicts how these key stakeholders provide support to the F-35 program participants across the three aircraft variants.
<1.2. Autonomic Logistics Information System>
The Autonomic Logistics Information System (ALIS) is a system of systems that serves as the primary logistics tool to support F-35 operations, mission planning, and sustainment. ALIS is intended to help maintainers manage tasks including aircraft health and diagnostics, supply-chain management, and other maintenance events. ALIS functionality is intended to support many of the F-35 program's key performance parameters, such as:
- Increase sortie generation rate: the number of aircraft sorties launched in a flight day.
- Increase mission reliability: the probability that a system will perform mission essential functions for a period of time.
- Reduce logistics footprint: the size of in-theater logistics support needed to move and sustain a warfighting force. The footprint includes all the necessary support needed to maintain the force, such as fuels, parts, support equipment, transportation, and people.
According to DOD officials, ALIS is integral to supporting F-35 operations. Figure 4 shows some of the key intended capabilities of ALIS. These capabilities reside in multiple software applications within the system that perform specific functions for maintainers, pilots, supply personnel, and data analysts. Lockheed Martin is the prime contractor for ALIS and has been responsible for developing and managing the capabilities of the system, as well as developing training materials for F-35 pilots, maintainers, and supply personnel. ALIS is co-located with F-35 aircraft both at U.S. military installations and in theater to support missions and assist with maintenance and resource allocation. ALIS consists of the overarching system, the applications housed within it, and the network infrastructure required to provide global integrated and autonomic support of the F-35 fleet. It comprises both hardware and software, and supports the flow of unclassified and classified aircraft-related data. As a system of systems, the major components of ALIS consist of:
The Autonomic Logistics Operating Unit (ALOU). The ALOU is the central computer unit that all F-35 data are sent through. As part of the unit, the ALOU consists of two servers that process and store classified and unclassified data, respectively. There is only one ALOU, and it is owned by the prime contractor.
The Central Point of Entry (CPE). The CPE is a server unit configured to provide software and data distribution for a country's entire F-35 fleet. It is the node between the ALOU and each country's Standard Operating Units (generally housed at F-35 installations). The CPE consists of two servers that process and store classified and unclassified data, respectively. There is typically one operational CPE per country, although the United States has separate CPEs for its operational commands and training sites.
The Standard Operating Unit (SOU). The SOU is a server that is intended to provide all ALIS capabilities to support flying, maintenance, and training at F-35 installations. Typically, each F-35 squadron has at least one SOU. It is the node local to each F-35 squadron. There are two types of SOUs: a classified SOU that supports the flow of classified aircraft-related data and an unclassified SOU that supports the flow of unclassified aircraft-related data.
The Portable Memory Device (PMD). The PMD is informally referred to as the "brick" that F-35 pilots use to upload information such as mission planning data.
F-35 personnel use the PMD to store mission and maintenance data generated during flight which may then be downloaded into the ALIS SOU to support maintenance and mission debrief activities. The Portable Memory Device Reader (PMD Reader). The PMD Reader is a device intended to be used to remove maintenance data, including health-related codes, off of the Portable Memory Device and load into the SOU. The Portable Maintenance Aid (PMA). The PMA is an unclassified ruggedized laptop used by F-35 maintainers and flight-line supervisors to view unclassified technical data, and perform and document maintenance activities. According to the F-35 program office, the purpose of the server construct is to support the exchange of information necessary to support the F-35 sustainment enterprise. As of September 2019, according to program officials, there was one operational ALOU and CPE within the United States. Each F-35 site in the United States has a varying number of SOUs depending on the site s number of aircraft and squadrons. The SOU was designed to have its components fit into transit cases that can be carried by two personnel, with each case weighing up to 200 pounds. The PMDs, PMD Readers, and PMAs reside at the squadron and support the collection and transfer of unclassified and classified aircraft-related data. Figure 5 shows how unclassified ALIS data are collected and transferred from component to component. As we have previously reported, ALIS has experienced recurring developmental issues and schedule delays. The development of ALIS originated in 2002, a year after the start of the F-35 program. However, the first major ALIS release was not fielded until October 2009, nearly 7 years after initial development began. DOD officials had originally planned for the version of ALIS that would include all of the capabilities required to complete developmental testing of the program to be finalized in 2010. However, this milestone was reached in September 2018, nearly 8 years behind the original schedule. Figure 6 shows the timeline of major ALIS software version releases and other significant ALIS-related milestones. <2. DOD Has Made Some Improvements to ALIS, but Users Continue to Report Significant Challenges> ALIS users from all 5 F-35 locations we visited reported that ALIS has improved in some aspects over the last 5 years. However, these users continue to report significant challenges with ALIS that are affecting the day-to-day operations of the aircraft. DOD is currently unable to assess the overall performance of ALIS because it has not developed performance metrics. Additionally, DOD is unaware of how challenges with ALIS are affecting F-35 fleet-wide readiness. <2.1. Users Report Some Improvements with ALIS> According to pilots, maintainers, supply personnel, and contractors at 5 U.S. F-35 locations, ALIS is generally performing better than it was 5 years ago. Specifically, users at all 5 locations stated that data processing, downloading of information, and screen navigation were generally faster than previous years. According to users at 1 location, in previous releases of ALIS, it could take several minutes to complete a simple function like a screen download. Further, some users also reported minor functionality improvements within certain ALIS applications, such as the Computerized Maintenance Management System, leading to reduced time required to perform actions within those applications. 
We reported in April 2016 that ALIS users had problems accessing data in ALIS to produce service-specific reports for their squadrons. Users we spoke to at 4 locations for this report stated that they can now access some data within ALIS and can generate reports that they previously could not. For example, users at 1 location said that it was now easier to export aircraft-related maintenance information from ALIS and put it into an external spreadsheet. Additionally, in December 2015, the F-35 program began deploying software fixes to address minor defects in ALIS at F-35 locations in between major ALIS software version releases, which users at 1 location said have made improvements to the system. According to the F-35 program office, these software releases, referred to as service packs, have focused on improving user interface-related flaws that were discovered during major releases. Service packs provide users more frequent functionality fixes to the system, preventing them from having to wait, in most cases, over a year for a major ALIS software release. <2.2. Users Continue to Report Significant Challenges Using ALIS> While users at all 5 F-35 locations we visited said that ALIS is performing better than it was 5 years ago, they also stated that the system still posed significant challenges to day-to-day F-35 operations. Specifically, users across the 5 locations we visited stated that seven significant challenges still exist with ALIS, as shown in table 1. Many of the challenges cited above are similar to those we reported in April 2016, including deployability, inefficient issue resolution process, and data inaccuracies. We recommended at that time that DOD develop a plan to prioritize and address ALIS issues. DOD concurred and in 2016 developed a plan that identified key areas for system modernization and sustainment, which included prioritizing issues related to ALIS. While DOD s development of this plan is a positive step, significant user issues persist today, which are discussed in more detail below. Continued attention on ALIS is needed to make improvements to the system, reduce the burden on its users, and mitigate risks to operations and maintenance. <2.2.1. Inaccurate or Missing Data> Users at all 5 F-35 locations we visited expressed concern about data integrity issues related to inaccurate or missing data within ALIS. For example, users at all the locations said they have had consistent problems with data related to aircraft parts. Certain F-35 parts have an associated electronic record, which is used to track the remaining time before the part must be replaced, among other things. To be cleared for flight, F-35 policy states that an aircraft must be electronically complete in ALIS, meaning that all of the electronic records from each installed F- 35 part must be entered into ALIS. However, users at all 5 of the locations we visited told us that electronic records are frequently incorrect, corrupt, or missing, resulting in ALIS signaling that the aircraft should be grounded, often in cases where maintainers know that the parts have been correctly installed and are safe for flight. Users at 1 location said that within a 6-month period in 2019, they experienced anywhere between 0 and 400 issues per week related to inaccurate or missing electronic records. These same users said that it is common for their squadron leadership to elect to allow an aircraft to fly with over 20 inaccurate or missing electronic records that ALIS signals to ground. 
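The "electronically complete" requirement described above can be illustrated with a short sketch. The example below is purely notional and is not ALIS code: the record fields, the completeness rule, and the part numbers are assumptions used only to show how missing or corrupt electronic records for installed parts could cause an aircraft to be flagged for grounding even when maintainers know the parts themselves are correctly installed.

```python
# Notional illustration only -- not actual ALIS logic or data structures.
# Shows how an "electronically complete" check over installed-part records
# could flag an aircraft when records are missing or corrupt, even though
# the physical parts may be correctly installed and safe for flight.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PartRecord:
    part_number: str
    electronic_record: Optional[dict]  # None models a missing record

def record_issues(installed_parts: List[PartRecord]) -> List[str]:
    """Return one issue string per missing or corrupt electronic record."""
    issues = []
    for part in installed_parts:
        rec = part.electronic_record
        if rec is None:
            issues.append(f"{part.part_number}: electronic record missing")
        elif "remaining_life_hours" not in rec:
            issues.append(f"{part.part_number}: electronic record corrupt/incomplete")
    return issues

# Hypothetical data: two usable records, one missing, one incomplete.
parts = [
    PartRecord("PN-0001", {"remaining_life_hours": 1200}),
    PartRecord("PN-0002", {"remaining_life_hours": 300}),
    PartRecord("PN-0003", None),
    PartRecord("PN-0004", {}),  # record present but key data absent
]

issues = record_issues(parts)
print(f"Electronically complete: {not issues}")
for issue in issues:
    print("  ", issue)
# In the situation users described, squadron leadership must then decide
# whether to resolve each flagged record or accept the risk and release
# the aircraft for flight.
```

In practice, the users' accounts above indicate that such flags are frequently driven by record-quality problems rather than actual part problems, which is what leads to the manual workarounds and case-by-case leadership decisions described in this section.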
According to users at all 5 locations we visited, squadron leadership (e.g., DOD personnel designated by maintenance squadron commanders) may decide to fly an aircraft with inaccurate or missing electronic records, but we found that this practice varies by location and type of part. In June 2019, the Department of Defense Inspector General published a report on missing electronic records on F-35 spare parts. The report found that since 2015, F-35 locations have been consistently receiving spare parts without requisite electronic records. For example, of the 263 spare parts delivered to one location in June 2018, 213 spare parts (81 percent) did not have electronic records. Due in part to the unreliability of the data in ALIS, users at all 5 F-35 locations we visited have been collecting and tracking information outside of the system that should be automatically captured in ALIS. Although not a requirement, users said they need to track information outside of the system because they do not always trust the data that reside in ALIS. Users provided examples of critical aircraft data that they are tracking outside of ALIS such as aircraft performance data and maintenance inspection deadlines and said that manually tracking this information is a time-intensive process that pulls maintainers away from completing other aircraft maintenance-related responsibilities. For example, users at 1 location estimated that they spend an average of 5,000 to 10,000 hours per year manually tracking information that should be automatically and accurately captured within ALIS. In addition, there may be risks associated with using information tracked outside of the system of record to make decisions about the safety and operational health of aircraft. For example, users at one location said that there is a danger of overlooking a critical piece of information when key aircraft data used to determine an aircraft s status must be tracked manually using Excel spreadsheets. Users also said that by continuously ignoring alerts in ALIS caused by missing or inaccurate data, squadrons could be at risk of ignoring an alert for a legitimate aircraft issue. Finally, one commander we spoke with said that while his policy is to generally require maintainers to resolve data issues before releasing an aircraft for flight, in a wartime scenario, his squadron will carry out missions with inaccurate or missing ALIS data and assume the subsequent risk that this may entail. <2.2.2. Challenges Deploying> Users at all 5 F-35 locations we visited cited challenges deploying with ALIS to forward locations. Users stated that the required hardware for ALIS is bulky, can be cumbersome to transport, and, when necessary, difficult to store on a ship. For example, the unclassified and classified Standard Operating Unit (SOU) servers that are required for collecting and analyzing aircraft data in ALIS are broken up into a series of transportable cases. These cases each weigh approximately 200 pounds and require at least two people to lift. Users from 1 location told us that they have taken several separate SOU-related cases to support ALIS on deployments. These servers, as shown in figure 7, require dedicated transportation to transport them to forward locations, and heavy-duty equipment to load them on and off of ships. Some users stated that it was challenging to find space on the ship to store these servers since they typically require an entire room to function, as well as specific power and environmental controls. 
Additionally, users at all 5 locations stated that limited internet connectivity can make deployments challenging. Although SOU servers are critical ALIS hardware components, due to their size, squadrons will not always take them on deployments. In these instances, internet connectivity is important to access critical aircraft data from the forward location and send it back to the squadron s SOU for processing. However, internet connectivity can be slow or non-existent at these locations. In 2018, we recommended that the F-35 Program Executive Officer should test operating the F-35 disconnected from ALIS for extended periods of time in a variety of scenarios to assess the risks related to operating and sustaining the aircraft. DOD concurred with the recommendation, but as of December 2019, DOD had still not determined how long the aircraft can safely fly without connectivity to ALIS. Finally, users at 2 locations stated that contractor support is critical to supporting deployments. For example, at one location, due to inaccuracies with parts data in ALIS, the prime contractor prefers to match every requisite electronic record with its respective spare part prior to a deployment, which requires significant time and advanced planning. Furthermore, according to users at another location, due to the complexities and functionality issues related to ALIS, contractor support is required on deployments; however, deploying with contractors could become problematic in a combat scenario. Overall, users at all 5 locations said that they have completed deployments using ALIS. However, deployments are challenging and the current deployment preparation process for ALIS inhibits a military service s ability to deploy on short notice. <2.2.3. Increasing Personnel Needs> Users at 4 of 5 F-35 locations we visited stated that ALIS requires more contractor or military personnel support than originally planned. According to the F-35 s Operational Requirements Document the document that outlines the overall requirements for the F-35 program ALIS is supposed to help reduce the logistics footprint for the F-35. However, a 2013 DOD-commissioned study on reducing F-35 costs stated that the current ALIS support plan already uses 30 percent more administrators across squadrons and bases than a similarly-scaled IT implementation would normally require. In addition, current ALIS users at these 4 locations are finding that as ALIS becomes more mature, even more personnel are required to support the system s operations. For example, according to users at 1 Air Force location, the Air Force currently relies on about 8 contractor employees to support each ALIS SOU server, but has determined that this is not sufficient. Users at 2 Air Force locations stated that until the Air Force can train more military personnel to support ALIS-related issues, they will need to increase the number of contractor employees per squadron to support F-35 operations. Further, users from 1 Air Force location said they have had to assign full- time ALIS Expeditor responsibilities to military personnel within the squadrons to keep track of ALIS-related issues and pressure the contractor for resolution. Since these roles are not official billets, their resulting responsibilities are adding to the military personnel s existing, non-ALIS related responsibilities on the flight line. 
Air Force users from 1 location reported that due to inconsistencies within ALIS, they now have 20 full-time ALIS Expeditors to track ALIS-related issues and help ensure safety of flight for the aircraft. The Marine Corps had originally planned to maintain ALIS using only military personnel; however, as the numbers of aircraft and requisite SOUs increased, users at 1 Marine Corps location said that it was too difficult to develop and retain personnel with ALIS- specific expertise. According to these users, this has resulted in the Marine Corps needing increased numbers of contractor personnel to support its squadron operations. <2.2.4. Inefficient F-35 Issue Resolution Process> Users at all 5 F-35 locations we visited said that the process for resolving F-35 issues within ALIS remains problematic and inefficient. The Action Request (AR) process requires personnel to use an application within ALIS to submit an AR about any F-35 problem, including those about ALIS itself, to the contractor for triaging and ultimate resolution. In April 2016, we reported that ALIS users thought the AR process did not allow for the effective reporting and resolution of F-35 aircraft and ALIS issues. Specifically, users stated that the process did not provide transparency to all ARs submitted across F-35 locations and placed responsibility for resolving the requests primarily on the contractor. ALIS users at 4 locations stated that this remains the case. Users from 3 locations stated that the overall process would be more efficient if they were able to search ARs submitted by other squadrons across the fleet to determine if a solution to the problem already exists. Without this ability, users must submit an AR for every issue and wait for a response that can sometimes take months. For example, 1 location reported that from October 2018 through September 2019, F-35 aircraft were grounded for 9,262 hours or 9 percent of possible flight hours, due to unresolved ALIS- related ARs attributed mainly to missing and inaccurate electronic parts records. Officials from another location reported that during a 6-month period they had to ground aircraft for 2,200 hours as a result of waiting for contractors to resolve parts-related ARs. Users from a third location stated that more transparency in the AR process could reduce reliance on contractor support, provide a way to address F-35 problems more efficiently, and reduce costs to the program since DOD incurs a fee each time an AR is submitted. <2.2.5. Poor User Experience> Users at all 5 F-35 locations we visited stated that ALIS is not user- friendly or intuitive. While users stated that there have been some limited improvements to ALIS over the past years, as previously discussed, in general, users at all 5 locations described ALIS applications as difficult to navigate. For example, users from 1 location stated that it is more difficult and time-consuming to search for information on parts in ALIS than in legacy logistics systems because the information is located in multiple locations within ALIS. Additionally, users from all 5 locations said that some of the applications within ALIS have very slow processing speeds. According to users at 1 location, in some instances, ALIS s slow applications require maintainers to work additional hours to complete required maintenance tasks. 
During a demonstration of ALIS and its Joint Technical Data application at one of the locations we visited, we observed maintainers deal with a slow log-in process and problems filtering and searching for data in an application, and we ultimately saw the application freeze and kick them out. Figure 8 shows a maintainer using a PMA to work in ALIS. <2.2.6. Immature Applications> Users at all 5 F-35 locations we visited stated that the training and mission planning applications within ALIS remain immature. Users at all 5 locations said they are not using the Training Management System (TMS), an application designed for pilots and maintainers to track training qualifications and assign personnel to carry out specific tasks, for its intended purpose. Users from 4 locations said that because of the ongoing issues with TMS, they are using legacy systems in its place. For example, one Air Force command released a memorandum in January 2018 allowing some squadrons to use an external legacy system in place of the TMS application due to shortfalls in TMS functionality, which it stated had caused excessive work to execute normal operations and become an unacceptable burden. Marine Corps and Navy users from 2 locations we visited said that they are using other legacy systems to circumvent the TMS application as well. Additionally, pilots at 4 locations stated that the Off-Board Mission Support (OMS) application within ALIS is immature and remains non-intuitive, time-consuming, and difficult to navigate. The OMS application is a key application for pilots to conduct mission planning and debriefing. Pilots at 2 locations said that they rely on contractors to help them complete tasks in the application. <2.2.7. Ineffective Training> Users at all 5 F-35 locations we visited stated that training to learn how to use ALIS does not provide adequate knowledge or information to fully prepare users to operate the system. Specifically, users at 3 locations we visited stated that the training for ALIS does not reflect a realistic operational environment. Instead, users at all 5 locations stated that training materials are usually in the form of PowerPoint slides and that knowledge of ALIS and its functionality is primarily obtained at the squadron level through on-the-job training. In April 2016, we reported that almost every user in the F-35-related focus groups we conducted at that time noted that they did not learn how to operate any ALIS applications until on-the-job training began on the flight line. Users stated that this remains true today. Users at 1 of the locations we visited stated that learning how to use ALIS in this manner has caused people to develop their own unique ways of operating the system, which creates an F-35 fleet environment that is using its primary logistics tool in different ways. <2.3. DOD Is Unable to Assess the Performance of ALIS or How the System Is Impacting F-35 Fleet Readiness> <2.3.1. DOD Has Not Developed Performance Metrics for ALIS> Although DOD and F-35 program officials agreed that ALIS continues to pose challenges for users and is generally not performing well, DOD still has not determined how it wants the system to perform. For example, officials from the Joint Strike Fighter Integrated Test Force told us that testing for individual ALIS software version releases focuses primarily on whether the new version is performing better than the previous version.
Specifically, ALIS testers have developed criteria to determine whether the newest version of ALIS is functioning more efficiently than the previous version by comparing such tasks as screen download times. However, according to these officials, these tests do not determine whether the ALIS system is performing to a specified standard, because DOD has not defined this standard. In September 2014, we recommended that DOD develop a performance-measurement process for ALIS that includes, but is not limited to, performance metrics and targets that (1) are based on the intended behavior of the system in actual operations and (2) tie system performance to user requirements. The DOD Systems Engineering Guide for Systems of Systems states that to fully understand performance of systems of systems (such as ALIS), it is important to have a set of metrics that assess the system's performance and trace back to user requirements, because the system will likely evolve based on incremental changes similar to ALIS's incremental fielding. These metrics should measure the intended behavior and performance of the system in actual operations rather than the progress of the development of the system, allowing an assessment of system capabilities based on user requirements. After over 5 years, and with more than 400 aircraft fielded, DOD has not yet established a performance-measurement process for ALIS. DOD concurred with our 2014 recommendation and repeated its commitment to develop performance metrics for ALIS after the release of our 2016 report on ALIS risks. In September 2019, program officials told us that DOD remains in the process of developing these metrics and has no set timeline for their completion. Without a performance-measurement process, the F-35 program does not have critical information about ALIS performance across F-35 locations. Such information could help address current and future ALIS performance issues and systematically measure ALIS functionality compared to intended performance. <2.3.2. Problems with ALIS Could Be Affecting Overall F-35 Fleet Readiness> Users at all 5 F-35 locations we visited also stated that problems with ALIS are affecting the overall readiness of the F-35 fleet; however, they were unable to tell us the degree to which this is the case. Overall F-35 fleet-wide performance has been falling short of warfighter requirements; that is, aircraft cannot perform as many missions or fly as often as required. Figure 9 shows F-35 fleet aircraft performance from October 2018 through September 2019. Full mission capability, or the percentage of time during which the aircraft can perform all of its tasked missions, was 31.6 percent across the fleet, as compared with the warfighter minimum target of 60 percent. Mission capability, or the percentage of time during which the aircraft can safely fly and perform at least one tasked mission, was 59.5 percent across the fleet, as compared with the warfighter minimum target of 75 percent. Furthermore, citing less-than-desirable aircraft performance, in September 2018 the Secretary of Defense directed the military services to achieve and maintain 80 percent mission capability rates for their critical aviation platforms, including the F-35 fleet, by the end of fiscal year 2019. Two F-35 locations have started tracking information on how ALIS is affecting F-35 aircraft performance at their locations.
Officials from one location told us that from October 2018 through September 2019, F-35 aircraft were grounded and thus non-mission capable for 16,221 hours, or 2 percent of possible flight hours, as a direct result of issues with ALIS such as inaccurate or missing electronic records. However, according to officials at this location, this number does not capture all scenarios in which ALIS is affecting aircraft performance, because squadron commanders sometimes decide to fly an aircraft when ALIS signals that they should not, in order to fulfill mission requirements. Officials from another location reported that in fiscal year 2018, ALIS-related issues caused the F-35 aircraft to be non-mission capable for 3,246 hours, or 0.5 percent of possible flight hours; however, as was the case with the previous location, officials said that this number also did not capture all scenarios in which ALIS is affecting aircraft performance. These limited efforts represent squadron-specific initiatives, as no other F-35 location has tracked similar ALIS-related data. Further, the data collected by the two locations only capture non-mission capability when ALIS signals to ground the aircraft and thereby makes the aircraft incapable of completing a mission. The data do not account for the workarounds users said they are routinely performing to circumvent a non-functioning aspect of ALIS in order to get an aircraft ready to fly, or the times when squadron leadership decides to fly the aircraft when ALIS signals otherwise. Different factors can play a role in reducing F-35 aircraft readiness. For example, in April 2019, we reported that reduced aircraft performance was due largely to spare parts shortages. This conclusion was drawn from data that had been collected and tracked by both the contractor and DOD across the entire fleet to determine non-mission capability rates due to supply issues. Further, the F-35 program collects data on the degree to which maintenance issues are affecting F-35 mission capability. In addition, there are ongoing efforts to improve F-35 fleet readiness that are specifically targeted at the supply and maintenance issues that are causing the significant mission-capability degradation. However, users and program officials stated that recurring issues with ALIS could also be affecting aircraft performance and noted that data on these issues are not being collected by the contractor or DOD. Although users reported multiple instances when ALIS-related issues grounded aircraft, these issues are being captured and categorized as either supply- or maintenance-related issues, thus masking ALIS's effect on fleet-wide readiness. DOD Instruction 5000.02T, Operation of the Defense Acquisition System, states that the program manager will use technical performance measures and metrics to assess program progress. It further states that the analysis of technical performance measures and metrics, in terms of progress against established plans, will provide insight into the technical progress and risk of a program like the F-35. In the case of ALIS, the F-35 program does not have a fleet-wide process for measuring, collecting, and tracking information on how ALIS is affecting the performance of the F-35 aircraft, such as fleet-wide mission capability rates. Without such a process, the F-35 program may be limited in its ability to identify all of the drivers of reduced aircraft performance and to target appropriate solutions.
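The report does not prescribe any particular mechanism for such a fleet-wide process. Purely as an illustration of the kind of roll-up described above, the sketch below aggregates non-mission-capable hours by driver (supply, maintenance, and ALIS) across locations and expresses each driver's share as a percentage of possible flight hours. The location names, driver categories, and all figures other than the two ALIS-attributed hour counts quoted above are hypothetical, and the simple subtraction used for the combined rate ignores overlapping causes that a real process would have to account for.

```python
# Illustrative sketch only: location names, driver categories, and all figures
# except the two ALIS-attributed hour counts quoted in the text are hypothetical
# and are not drawn from DOD, the F-35 program office, or the prime contractor.
from dataclasses import dataclass
from typing import Dict


@dataclass
class LocationReport:
    name: str
    possible_flight_hours: float            # hours the location's aircraft could have flown
    nmc_hours_by_driver: Dict[str, float]   # non-mission-capable (NMC) hours, keyed by driver


def fleet_rollup(reports):
    """Aggregate NMC hours by driver across locations and express each driver's
    share as a percentage of the fleet's possible flight hours."""
    total_possible = sum(r.possible_flight_hours for r in reports)
    totals: Dict[str, float] = {}
    for r in reports:
        for driver, hours in r.nmc_hours_by_driver.items():
            totals[driver] = totals.get(driver, 0.0) + hours
    shares = {driver: 100.0 * hours / total_possible for driver, hours in totals.items()}
    # Simplified: treats drivers as non-overlapping, which a real process would not assume.
    combined_rate = 100.0 * (total_possible - sum(totals.values())) / total_possible
    return shares, combined_rate


if __name__ == "__main__":
    reports = [
        LocationReport("Location A", 811_050,
                       {"supply": 120_000.0, "maintenance": 90_000.0, "ALIS": 16_221.0}),
        LocationReport("Location B", 649_200,
                       {"supply": 95_000.0, "maintenance": 70_000.0, "ALIS": 3_246.0}),
    ]
    shares, combined = fleet_rollup(reports)
    for driver, pct in sorted(shares.items()):
        print(f"{driver}: {pct:.1f} percent of possible flight hours")
    print(f"combined capability rate under these assumptions: {combined:.1f} percent")
```

Running the sketch with the hypothetical inputs shown prints each driver's share of lost flight hours alongside a combined rate, which is the kind of fleet-wide visibility into ALIS's contribution that the report says the program currently lacks.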
Further, as we previously reported, DOD plans to enter into multi-year, performance-based F-35 sustainment contracts with the prime contractor, but may not be well positioned to enter into such contracts because, in part, it does not fully understand the technical characteristics of the aircraft. ALIS may or may not be having a notable effect on mission capability rates for the F-35 fleet. However, without understanding how or the extent to which ALIS is affecting the performance of the aircraft, DOD risks entering into long-term, performance-based logistics contracts without fully understanding all of the factors currently affecting aircraft operations. This could hinder DOD's ability to effectively negotiate performance-related terms of the contract. Finally, without understanding how ALIS is affecting the performance of the aircraft, DOD risks developing a performance-measurement process for ALIS that is not tied to the overall performance goals of the program. <3. DOD Is Pursuing Actions to Enhance the Long-Term Viability of ALIS, but It Has Not Established a Strategy for the Future System Re-Design> DOD is taking actions to enhance the long-term viability of ALIS. Limited DOD attention to ALIS has resulted in a troubled history with the system. As a result, multiple efforts are currently underway to re-design and attempt to improve ALIS. However, key technical and programmatic uncertainties hinder these efforts. Furthermore, DOD does not have an overarching strategy for the future re-design of ALIS. <3.1. Limited DOD Attention Has Resulted in a Troubled History with ALIS> As originally envisioned, ALIS was intended to be a first-of-its-kind, fully autonomic system that would provide users access to data on a range of capabilities (including operations, maintenance, prognostics, supply chain, customer support services, training, and technical data) in one logistics system to support aircraft operations. According to Joint Strike Fighter Integrated Test Force officials, previous DOD aircraft logistics systems were much simpler, not fully autonomic, and generally included data related to fewer major capabilities. However, the F-35 program office did not clearly specify what it required from ALIS from the warfighter's perspective beyond the broad capabilities to be included in the system. Air Force officials stated that, instead, the F-35 program office relied on the prime contractor to take the lead in managing the development of the system. For example, the F-35 Operational Requirements Document provides only overarching, high-level requirements for ALIS and does not include specific, user-related requirements or requirements to adapt and modernize the system over time. DOD officials acknowledged that historically, DOD has prioritized other aspects of the F-35 program, such as the development of the airframe, over its logistics system. In addition, DOD's focus with ALIS development over the last 5 years has largely centered on adding capabilities required to complete developmental testing for the F-35. As issues with the fielded system have arisen, the approach taken by DOD and the prime contractor has generally been to resolve these issues on a case-by-case basis as available resources allowed, as opposed to making more costly and time-intensive improvements to the system's underlying design and functionality.
DOD contracting officials and prime contractor representatives stated that the need to balance a limited number of software development personnel between efforts to stabilize the current system and efforts to add new features has negatively affected the development of ALIS. In a 2017 report, the Air Force Digital Service recommended that the F-35 program office cease adding new capabilities in order to re-evaluate ALIS-related design choices and improve software development processes and procedures. According to the report, many of the issues with ALIS have known root causes that are directly related to software and hardware design choices that are 15 years old. For example, ALIS is made up of siloed applications that each have their own, sometimes conflicting, databases. Further, according to the Air Force Digital Service report, efforts to upgrade ALIS from an out-of-date operating system have not been prioritized by the F-35 program office. Finally, ALIS hardware is cumbersome, consisting of heavy servers as well as laptops that were originally designed in the mid-1990s. The current approach to developing ALIS has generally led to scheduling delays and challenges addressing a backlog of ALIS deficiencies. For example, the ALIS version required to complete developmental testing for the F-35 was not released until 2018, 8 years after the originally planned release date. F-35 program office officials emphasized that, in general, the timeframe for releasing major software updates for ALIS (up to 18 months) has been long. Further, based on data from the prime contractor, as of September 2019 there were about 4,700 open ALIS deficiencies, which are used by the prime contractor to track and manage issues with the system. According to an F-35 program office official, ALIS deficiencies may be identified in the field by F-35 users, in the prime contractor's testing laboratory, or during DOD-led developmental and operational testing of the F-35 and ALIS. Of these 4,700 deficiencies, about 34 percent were identified in 2017 or earlier, and 22 percent were category 1 or category 2 deficiencies. Category 1 deficiencies are considered critical and could jeopardize safety, security, or another requirement; category 2 deficiencies are those that could impede or constrain successful mission accomplishment. As shown in figure 10, the total number of open deficiencies has generally increased over the last 2 years. In addition, the number of open category 1 through category 3 deficiencies, which are considered critical or have an adverse effect on mission accomplishment, generally increased during this period. While the rate at which the prime contractor closed deficiencies increased during this period, it generally remained lower than the rate at which new deficiencies were identified. Officials from the Joint Strike Fighter Integrated Test Force and the Office of the Director of Operational Test and Evaluation expressed concerns about the number and nature of the ALIS-related deficiencies they have identified during developmental and operational testing. For example, F-35 testers identified a number of deficiencies with the most recent ALIS software version, ALIS 3.5, including eight category 1 deficiencies. ALIS 3.5 is referred to as the stabilization release because it was intended to address longstanding issues with ALIS. In addition, F-35 testers stated that since 2016, they have identified a number of cyber-related ALIS deficiencies, most of which remain open today.
While officials said that the number of cyber deficiencies is consistent with other DOD weapons systems, they stressed that a vulnerable ALIS is particularly problematic because of how interconnected the system is with the F-35 aircraft and its operations. <3.2. Multiple Efforts Are Underway to Re-Design ALIS> DOD and the prime contractor have acknowledged ALIS's troubled history and have established three initiatives to re-design and fix ALIS. At a November 2019 congressional hearing, the F-35 Program Executive Officer stressed that significant additional work is required to improve ALIS functionality and that this work cannot be done in old and outdated ways. Table 2 summarizes the three initiatives, led by the F-35 program office, the Air Force, and the prime contractor, respectively. <3.3. Key Technical and Programmatic Uncertainties Hinder Efforts to Re-Design ALIS> According to the F-35 program office, the three initiatives are complementary and will eventually be integrated in a final re-design of ALIS. However, we found that DOD lacks clarity on how it will address key technical and programmatic uncertainties about the future of the system (see figure 11). These uncertainties relate to complex aspects of ALIS that will significantly affect the future design of the system and how it will be managed. Further, there are divergent views among officials involved with the various initiatives in terms of how DOD should approach key aspects of the re-design, highlighting the uncertainty that exists about the future of ALIS. <3.3.1. ALIS Capabilities> DOD has not fully determined what capabilities will be included in the ALIS re-design. After years of focusing on adding new capabilities with each major ALIS software version release, DOD officials agreed that their current goal is to streamline and simplify ALIS. For example, the Mad Hatter initiative is designing applications based on the minimum capabilities required by maintainers to quickly release an aircraft for flight. Similarly, the ALIS Next initiative is working to optimize functions in ALIS by identifying aspects of the current design that could be slowing down the system, for example, transferring an aircraft's entire digital history each time the jet is transferred from one SOU to another. However, officials from the Office of the Director of Operational Test and Evaluation indicated that there continues to be uncertainty about the capabilities, both classified and unclassified, that will be included in the re-design. Further, as discussed previously, the F-35 program office has not formally established how it expects ALIS to perform in operations or developed a performance-measurement process for ALIS. Program officials indicated the need for discussions with the services and international partners, through an updated process for establishing ALIS-related requirements, about aspects of the current system that are not consistently being used and may therefore not be required (such as the Training Management System). This process, which requires coordination across all military services and international partners, has proven to be challenging in the past. According to a 2017 Air Force Digital Service report, the F-35 program office faces challenges identifying and prioritizing ALIS capabilities across multiple services and international partners, and this has negatively affected the development of the system. <3.3.2.
Software Development Model> DOD is unclear about the extent to which it can adopt a more flexible software development model known as Agile. As we reported in April 2019, the F-35 program as a whole is pursuing a faster and more incremental approach for delivering new aircraft capabilities to the warfighter in order to more flexibly address evolving threats. One approach to software development that helps facilitate such incremental delivery is Agile, which calls for the delivery of software in small, short increments rather than in the typically long, sequential phases of a traditional software development approach. More a philosophy than a methodology, Agile emphasizes early and continuous software delivery, the use of collaborative teams, and measuring progress with working software. According to some F-35 program office officials, adopting Agile could result in a more secure system because it involves continually testing software for security vulnerabilities. Further, we have previously reported that following an incremental development approach, such as Agile, gives agencies the opportunity to obtain additional feedback from users, which increases the probability that each successive increment will meet user needs. The Mad Hatter initiative is experimenting with an Agile approach and has had some initial successes using this model. For example, in July 2019, we observed a demonstration of a Mad Hatter-developed application that allows the user to quickly and easily search through Joint Technical Data, an application within ALIS that has been reported by some users as being extremely difficult to navigate. However, the Mad Hatter initiative has operated outside of F-35 program office policies and processes, and its applications are currently not integrated with the fielded ALIS system. Further, Mad Hatter and F-35 program office officials said that they have faced challenges communicating the value of their respective approaches to one another, and according to a senior Air Force official associated with the Mad Hatter initiative, the F-35 program office has not clarified the role of Mad Hatter representatives in current planning efforts aimed at scaling the results of the Mad Hatter initiative to the entire F-35 enterprise. Separately, as part of its own ALIS initiative, prime contractor officials said that their company recently began taking steps to adopt best practices for delivering new ALIS software using an Agile model. However, these efforts are new, and the F-35 program office has not developed standards for software developed by the prime contractor using this model. DOD officials we spoke with expressed differing views on the extent to which DOD should adopt an Agile software delivery model for ALIS. For example, in a 2018 memorandum establishing the Mad Hatter pilot, a senior Air Force acquisition official stated that the F-35 program should embrace the tenets of this type of model in order to innovate and rapidly deliver useful capability through ALIS. Similarly, Air Force, Office of the Secretary of Defense, and some F-35 program office officials stated that modernizing ALIS will require DOD to adopt industry best practices by making decisions quickly, delivering usable products early and often, and revising plans to reflect experience from completed software iterations.
In contrast, Marine Corps and some F-35 program office officials indicated that DOD should carefully consider different commercially available software tools, as well as DOD-specific constraints, before delivering new ALIS capabilities. For example, F-35 program office officials associated with the ALIS Next initiative stated that they conducted an assessment of the commercial software tools that could be used for new ALIS software development. These officials said that some of the tools that were initially being used by the Mad Hatter initiative to develop applications make software development easier in the short term but make it more difficult to switch toolsets and/or contractors in the long term. Marine Corps and some F-35 program officials also noted that current DOD processes and procedures, such as the software certification and cost-estimating processes, may not be able to support quick software releases. While an Agile software delivery model has been identified as having the potential to improve the way in which the federal government develops and implements IT, we previously reported that this type of model requires significant procedural and organizational changes in order to be implemented successfully. <3.3.3. The Cloud Environment> DOD has not made a decision about the extent to which the ALIS re-design will be hosted in the cloud as opposed to onsite servers at the squadron level. In April 2019, we reported that cloud computing allows federal agencies to access on-demand, shared computing resources with the goal of delivering services more quickly and at a lower cost. More specifically, purchasing IT services through a provider enables agencies to avoid paying for all of the computing resources (e.g., hardware, software, networks) that would typically be needed to provide such services. This approach offers federal agencies a means to buy the services faster and possibly at less cost than building, operating, and maintaining these computing resources themselves. However, National Institute of Standards and Technology guidance states that public cloud computing represents a significant shift from the norms of on-site data centers and should therefore be approached carefully, with consideration of the sensitivity of the data involved. While the Mad Hatter initiative has embraced hosting ALIS in the cloud, including at the squadron level, ALIS Next is conducting an assessment of the extent to which a cloud-based system is the best option for ALIS. Further, as part of its internal ALIS investment, the prime contractor has designed an alternative model to the current system that includes an onsite server at each F-35 squadron. Office of the Secretary of Defense, Air Force, and F-35 program office officials we talked to agreed that the ALIS re-design will involve migrating some portions of ALIS from onsite servers to the cloud. For example, these officials agreed that DOD should explore options for migrating the ALOU and U.S. CPE to the cloud. However, these officials disagreed about how much of the future system should be cloud-based at the squadron level. For example, Air Force, Office of the Secretary of Defense, and some F-35 program office officials stressed that for day-to-day maintenance at U.S. bases, F-35 squadrons should be able to access ALIS using Wi-Fi, and that the reliance on onsite servers should therefore be minimal and limited to deployed scenarios. According to these officials, DOD can achieve significant cost savings by moving ALIS to the cloud.
These officials also indicated that DOD's hesitation about moving from onsite servers to the cloud is mostly cultural and the result of a lack of understanding about what the cloud is. One senior Office of the Secretary of Defense official with software expertise stated that warfighters should be able to deploy with a minimal amount of ALIS hardware (for example, only a high-powered laptop). In contrast, other F-35 program office officials told us that the F-35 program office is limited in the extent to which it can migrate to cloud-based SOUs due to connectivity and security restrictions. Further, at an ALIS Next conference, some partner country representatives expressed concerns about hosting ALIS in the cloud, stating that stringent security requirements would likely prevent their governments from accepting a cloud-based solution for ALIS. <3.3.4. User Feedback> DOD does not have a plan for incorporating users early and often in the development of new ALIS software across the F-35 enterprise. Previous GAO reports, as well as other DOD studies, have found that giving users the opportunity to provide feedback on actual working software early and often in the software development process, and incorporating that feedback in subsequent development, is critical to the success of any software development effort. For example, in March 2019, we reported that obtaining frequent feedback is linked to reducing risk, improving customer commitment, and improving technical staff motivation. Historically, user feedback has not been prioritized in the ALIS software development process. According to users we talked to, working groups do exist that serve as a venue for voicing user-related issues; however, users stated that these working groups meet infrequently and often do not lead to desired changes. Further, prime contractor representatives told us that while they recently began soliciting user feedback as part of their ALIS initiative, the F-35 program office has not contractually required incorporating user feedback in the ALIS software development process. The Mad Hatter initiative is currently incorporating user feedback into new software development for ALIS and has established a process whereby F-35 users and Mad Hatter software developers can communicate directly about the Mad Hatter applications that are in development. As part of this process, Mad Hatter product teams develop simple applications, field the applications to users, and then use feedback from users obtained by email or videoconferences to adjust and enhance the applications. Although Mad Hatter's process for incorporating user feedback aligns with the practice of incorporating feedback early and often, the initiative is being executed at one F-35 installation, with one military service. Further, while the F-35 program office intends to eventually scale the results of Mad Hatter's experimentation to the rest of the F-35 enterprise, it has not formally outlined how it will institutionalize the initiative's process for incorporating user feedback across multiple services and international partners. <3.3.5. Primary ALIS Owner> DOD has not determined the roles of DOD and the prime contractor in future ALIS development and management. DOD officials stressed that historically, the department has relied heavily on the prime contractor to develop and manage ALIS. Officials also said that moving forward, DOD will need to play a more active role in the management of ALIS.
For example, Air Force, Office of the Secretary of Defense, and F-35 program office officials all said that DOD should serve as the primary owner of the ALIS software system, with the prime contractor and other firms developing applications that will feed into DOD's software pipeline. However, the F-35 program office has not officially named DOD as the prime ALIS owner or specified how it will coordinate software development across these multiple entities. Further, while one of the long-term objectives of the Mad Hatter initiative is to build DOD's capacity to manage and develop new ALIS software itself, Air Force officials involved in this initiative stated that DOD has not yet fully developed this capacity. Prime contractor representatives stated that, as the original ALIS developer, their company is in the best position to modernize ALIS. F-35 program office officials acknowledged that because the prime contractor plays such a critical role in the development and sustainment of the F-35, it will be necessary for DOD to work closely with the contractor, regardless of the direction DOD decides to take. For example, DOD officials said they have faced challenges obtaining key technical data from the prime contractor that would be required for DOD to lead ALIS software development, such as the underlying source code for current ALIS software, and that they were uncertain about the extent to which they would be able to obtain these data in the future. At a November 2019 congressional hearing, the Under Secretary of Defense for Acquisition and Sustainment stressed that many of the challenges with ALIS stem from the fact that ALIS data are fed back through prime-contractor computers, and there is resulting ambiguity over the ownership of those data. As we previously reported, DOD continues to lack clarity about the technical data it owns and the additional data it would require to maintain flexibility in the sustainment of the F-35. <3.3.6. Current ALIS Software> DOD has not agreed on the extent to which the ALIS re-design will incorporate current ALIS software, which consists of 8 million lines of code. As part of the ALIS Next initiative, F-35 program office officials said they intend to review the underlying source code for ALIS to determine which aspects of the current software should be integrated in the re-design. These officials explained that redesigning ALIS software from scratch will take too long and that the future ALIS system will therefore need to incorporate, to some extent, current ALIS software. In contrast, a senior Air Force official associated with the Mad Hatter initiative stated that the initiative intends to replace most current ALIS applications with commercial or new custom applications, retaining only those ALIS applications that can be cost-effectively modernized. Further, officials from the Air Force, Office of the Secretary of Defense, and F-35 program office indicated that because most of the ALIS source code has not been updated in years and contains numerous security vulnerabilities, the software should be completely re-designed. <3.4. DOD Does Not Have a Strategy for the Future Re-Design of ALIS> DOD is unclear about how it will approach the key technical and programmatic uncertainties surrounding ALIS because the department has not developed a strategy for the future re-design of the system.
DOD guidance for program managers states that a sound strategy requires, among other things, a clear articulation of program goals as well as an understanding of the risks or uncertainties and costs associated with achieving those goals. While DOD and the prime contractor have established various initiatives to re-design ALIS, DOD has not developed a strategy for the future of ALIS that clearly identifies and assesses goals, key risks or uncertainties, and associated costs. For example, as discussed previously, DOD lacks clarity about the goals of the re-design, such as the capabilities that will be included in the future system and the extent to which ALIS will be hosted in the cloud. In addition, DOD has not fully assessed key risks or uncertainties, including the extent to which DOD can adopt an Agile software development approach or manage the system itself. Finally, because it has not answered key questions about the future of the system, such as the extent to which the re-design will incorporate current ALIS software, DOD has not been able to develop accurate cost estimates for the ALIS re-design. In the past, DOD has faced challenges estimating and tracking ALIS costs. For example, in 2016, we reported that while DOD had estimated that ALIS would cost approximately $17 billion, the estimate was not fully credible because DOD had not performed uncertainty and sensitivity analyses as part of the cost-estimating process. Further, for this review, the F-35 program office was not able to provide us with historical costs showing how much the department has spent on ALIS over the years. DOD officials stated that historically, the department has faced challenges allocating scarce resources across competing priorities, and that the F-35 air vehicle has generally been prioritized over ALIS. With the completion of F-35 developmental testing in April 2018, program officials said they are now in a better position to focus on ALIS and address long-standing issues with the system. However, efforts to correct ALIS are relatively new and have not been fully developed. Without a strategy to guide the re-design of ALIS, DOD will not be able to effectively plan for the transition from the current system to a future one. For example, according to F-35 program office officials, DOD recently procured additional hardware for the current system, which officials said may not be required if DOD is able to develop and field a re-designed ALIS in the near term. Officials from the Office of the Director of Operational Test and Evaluation stressed that effectively transitioning from the current system to a future one will be particularly challenging for DOD given the need to continue sustaining the more than 400 aircraft that have already been fielded with current ALIS. Further, as discussed above, there are divergent views in terms of how DOD should approach key technical and programmatic aspects of the re-design, and integrating the different efforts that are underway to fix ALIS, led by the F-35 program office, the Air Force, and the prime contractor, will therefore require significant direction and leadership. Without a strategy, DOD may not be able to effectively coordinate and leverage the different ALIS initiatives that are underway, potentially leading to inefficiencies. DOD also risks repeating history by failing to clearly articulate what it expects from ALIS and how it will play a more active role in the management of the system going forward. <4.
Conclusions> The F-35 aircraft, with its advanced warfighting capabilities, provides critical tactical aviation for the Department of Defense. However, DOD will need to overcome substantial challenges related to ALIS if it is to succeed in both the sustainment and operation of the aircraft. Current ALIS users continue to report significant challenges with the system that are affecting day-to-day operations of the aircraft, adding flight line-related responsibilities, and, in some instances, causing squadron leadership to assume the risk of flying aircraft when ALIS tells them to stay on the ground. Although ALIS is not currently performing well, DOD has yet to establish, over 5 years after we recommended it, a performance-measurement process that would define how ALIS should perform. In the absence of such a process, DOD will be challenged to address current and future ALIS performance issues because it cannot measure ALIS functionality compared to intended system performance. Furthermore, ALIS users collectively agree that the issues with ALIS are affecting the readiness of the aircraft; however, the degree to which this is true remains unknown. Fleet-wide mission capability rates for the F-35 are still below the warfighter's minimum targets, but DOD does not have a process for measuring, collecting, and tracking information on how ALIS is affecting these rates. Without such a process, DOD may not understand all of the factors behind the reduced aircraft performance, thus limiting its ability to target appropriate solutions. DOD officials have acknowledged the ongoing challenges with ALIS and know that the system, as it stands today, cannot be sustained into the future; therefore, it is positive that the department has embarked on efforts to re-design and fix ALIS, as well as to take on a more active role in the management of the system. However, DOD faces a significant challenge, as there are several complex technical and programmatic uncertainties that will need to be resolved before any future ALIS solution can be realized. Additionally, there are divergent views among ALIS stakeholders about how to go about addressing these complex issues. The future of ALIS remains unclear because the department has not developed a strategy for the re-design of the system that would identify, among other things, what the system should look like, how it will be developed and managed, how it will address key risks, and how much it will ultimately cost. Without such a strategy, DOD will not be able to effectively plan for the transition from the current ALIS system, which is already embedded in over 400 aircraft across the global F-35 fleet, to whatever solution is determined. Furthermore, a strategy would help align what is currently a chorus of divergent views within the department on how to address the future of ALIS. With the worldwide fleet expected to grow to over 1,000 aircraft over the next four years, and with the U.S. services becoming increasingly reliant on the F-35's capabilities to support their operational strategies, it will be imperative for DOD to address the ongoing issues related to the F-35's logistics system. <5.
Matter for Congressional Consideration> Congress should consider legislation requiring the Department of Defense to establish a performance-measurement process for ALIS that includes, but is not limited to, performance metrics and targets that (1) are based on intended behavior of the system in actual operations and (2) tie system performance to user requirements. (Matter for Consideration 1) <6. Recommendations for Executive Action> We are making the following two recommendations to DOD: The Secretary of Defense should ensure the Under Secretary of Defense for Acquisition and Sustainment, in consultation with the F-35 Program Executive Officer, develops a program-wide process for measuring, collecting, and tracking information on how ALIS is affecting the performance of the F-35 fleet to include, but not be limited to, its effects on mission capability rates. (Recommendation 1) The Secretary of Defense should ensure the Under Secretary of Defense for Acquisition and Sustainment, in consultation with the F-35 Program Executive Officer, develops and implements a strategy for the re-design of ALIS. The strategy should be detailed enough to clearly identify and assess the goals, key risks or uncertainties, and costs of re-designing the system. (Recommendation 2) <7. Agency Comments> We provided a draft of this report to DOD for review and comment. In its written comments, reproduced in appendix II, DOD concurred with our recommendations and identified actions that it was taking or had planned in response. We agree that DOD is taking positive steps in addressing issues with ALIS, including the decision to replace ALIS with a future system that it has named the F-35 Operational Data Integrated Network (ODIN). According to DOD, the department is currently developing a strategy that will guide ODIN's development. As DOD proceeds with replacing ALIS with ODIN, it will be imperative for the department to carefully consider and assess the key technical and programmatic uncertainties discussed in this report. These issues, including how much of ALIS will be incorporated in ODIN and the extent to which DOD has access to the data it needs to play a more active role in the management of the system, are complex and will require significant direction and leadership to resolve. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days from the report date. At that time, we will send copies of this report to congressional requesters; the Secretary of Defense; the Under Secretary of Defense for Acquisition and Sustainment; the F-35 Program Executive Officer; the Secretaries of the Air Force and Navy; and the Commandant of the Marine Corps. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9627 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members making key contributions to this report are listed in appendix III.
Appendix I: Scope and Methodology For each of our objectives, we reviewed relevant F-35 sustainment and Autonomic Logistics Information System (ALIS)-related data, plans, program briefs, guidance, and other documentation, and we collected information by interviewing officials from the Office of the Under Secretary of Defense for Acquisition and Sustainment, the F-35 Joint Program Office, the Director, Operational Test and Evaluation, the Defense Contract Management Agency, the U.S. Air Force, the U.S. Navy, the U.S. Marine Corps, the Air Force Digital Service, and the prime contractor, Lockheed Martin. To interview officials and observe ALIS-related operations, we conducted site visits to five F-35 locations: Luke Air Force Base, Arizona; Edwards Air Force Base, California; Nellis Air Force Base, Nevada; Marine Corps Air Station Yuma, Arizona; and Naval Air Station Lemoore, California. We selected these locations to obtain perspectives from ALIS users (i.e., maintainers, pilots, supply personnel, and contractors) from all U.S. services participating in the F-35 program, including from operational, training, and testing locations. Additionally, we developed a data collection instrument to collect ALIS-related inputs and data from ALIS users (i.e., maintainers, pilots, supply personnel, and contractors) at all 10 U.S. F-35 locations: Luke Air Force Base, Arizona; Edwards Air Force Base, California; Nellis Air Force Base, Nevada; Marine Corps Air Station Yuma, Arizona; Naval Air Station Lemoore, California; Hill Air Force Base, Utah; Naval Air Station Patuxent, Maryland; Eglin Air Force Base, Florida; Marine Corps Air Station Beaufort, South Carolina; and Marine Corps Air Station Iwakuni, Japan. Finally, we met with officials from the F-35 Joint Program Office, Massachusetts Institute of Technology (MIT) Lincoln Labs, Lockheed Martin Rotary and Mission Systems, the Air Force Digital Service, Kessel Run (Air Force), and others to discuss ALIS-related improvement efforts. In support of our objectives, we gathered data from fiscal year 2019 (the most recent full fiscal year of data available at the time of our review) from the prime contractor on the performance of the F-35 fleet, such as the full mission capability and mission capability rates. We also collected the most recent available information on ALIS software deficiencies. To determine the reliability of these data, we collected information on how the data were collected, managed, and used through a questionnaire and interviews. Although we identified some limitations in the way that certain data are being collected and reported, such as data related to aircraft performance like mission capability rates, we determined that they are sufficiently reliable for the way in which we reported them and for our purposes of providing information on the progress and challenges within the program. All the performance data presented in our report are sufficiently reliable to provide a general comparison of capabilities to minimum targets. To assess the extent to which there have been improvements as well as key challenges with ALIS over the last 5 years, we interviewed officials from, and examined guidance and briefing documents provided by, the Office of the Under Secretary of Defense for Acquisition and Sustainment, the U.S. services, the F-35 Joint Program Office, the Defense Contract Management Agency, and Lockheed Martin Rotary and Mission Systems to discuss the current status of the system and plans for mitigating risks.
To determine user views on risks to (or issues with) ALIS, we interviewed officials at our 5 selected bases, administered a short data collection instrument to the other 5 bases, interviewed officials at Air Force headquarters and the contractor, and reviewed relevant documents. At the 5 bases, we interviewed groups of pilots, maintainers, and supply personnel about ALIS performance, challenges, and possible improvements. In addition, we posed several targeted questions based on risks found in our last report. In total, we received input from more than 160 users at the 5 bases we visited through group discussions or interviews. We analyzed the responses provided in these group interviews and identified the issues/risks that at least one set of users reported at each of the 5 bases. We also considered any improvements that were described as having occurred during the last few years. We also compared the responses from the interviews at the 5 bases with our data collection responses and the other testimonial and documentary evidence we obtained. The list of issues/risks we identified contains some that were reported in our 2016 report as well as some new ones. While this list summarizes the types of issues/risks described at the 5 bases, and also in other interviews and document reviews, individual user views and experiences could vary by base and user group. We also interviewed officials and reviewed reports from the Air Force Audit Agency; the Director, Operational Test and Evaluation; and the Department of Defense Inspector General to identify improvements as well as any functionality issues with ALIS. We interviewed and gathered information from DOD officials on testing for ALIS, metrics on ALIS's performance, and the operations of the system. As discussed previously, we collected and analyzed data for fiscal year 2019 that we obtained from the prime contractor on overall aircraft performance, such as the full mission capability and mission capability rates. We analyzed and compared information obtained from interviews, site visits, data collection instruments, and documents with guidance such as DOD's Systems Engineering Guide for Systems of Systems to determine the extent to which DOD has an effective procedure for addressing and mitigating specific risks and challenges that may be associated with a major weapon system. We also compared this information with previous GAO reports from 2014, 2016, and 2018 to determine the extent to which DOD has addressed our prior recommendations on ALIS-related issues. To assess the extent to which the F-35 program has addressed issues with ALIS, we gathered and analyzed data from the prime contractor on open and closed ALIS deficiencies identified from November 2017 through October 2019. We selected this timeframe because it included the most recent data on ALIS deficiencies at the time of our review and also allowed us to observe trends in ALIS deficiencies over a two-year period. The data we received included summary information on the total number of open deficiencies, the total number of closed deficiencies, the number of newly closed deficiencies, the number of newly identified deficiencies, and the total number of open category 1 through category 3 deficiencies (considered critical or adverse) for each month during the two-year period. To determine the reliability of these data, we conducted electronic tests to identify any internal inconsistencies within the data.
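The report does not describe the form of these electronic tests. As a minimal sketch of one plausible internal-consistency check, assuming hypothetical monthly summary records with fields like those described above, the example below flags any month whose reported open total does not equal the prior month's open total plus newly identified deficiencies minus newly closed deficiencies; the field names and figures are illustrative and are not drawn from the prime contractor's data.

```python
# Hypothetical sketch of an internal-consistency check on monthly deficiency
# summaries; field names and figures are illustrative, not the contractor's format.
def check_month_to_month(records):
    """Flag months whose reported open total does not equal the prior month's
    open total plus newly identified minus newly closed deficiencies."""
    flagged = []
    for prev, curr in zip(records, records[1:]):
        expected_open = prev["open_total"] + curr["newly_identified"] - curr["newly_closed"]
        if expected_open != curr["open_total"]:
            flagged.append((curr["month"], expected_open, curr["open_total"]))
    return flagged


if __name__ == "__main__":
    monthly = [
        {"month": "2019-07", "open_total": 4600, "newly_identified": 90, "newly_closed": 60},
        {"month": "2019-08", "open_total": 4650, "newly_identified": 110, "newly_closed": 60},
        {"month": "2019-09", "open_total": 4700, "newly_identified": 105, "newly_closed": 50},
    ]
    for month, expected, reported in check_month_to_month(monthly):
        print(f"{month}: expected {expected} open deficiencies, summary reports {reported}")
```

With the illustrative inputs shown, the final month is flagged because its reported total does not reconcile with the prior month's total and the month's reported activity; questions about discrepancies of this kind are the type we posed to the prime contractor.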
We also reviewed documentation from the prime contractor on the management of ALIS deficiency data and collected information through a questionnaire on how the data were collected, managed, and used. Specifically, we asked questions about inconsistencies we identified through electronic testing of the data, the extent to which the prime contractor's system for collecting deficiency information includes edit checks or controls to help ensure the data are entered accurately, and limitations related to the accuracy or completeness of the data. As a result, we determined the data to be sufficiently reliable for the purpose of reporting trends in the number of open and closed ALIS deficiencies over time. To determine the extent to which DOD is taking actions to enhance the long-term viability of the system, we interviewed officials and reviewed guidance and/or planning documents from the Office of the Under Secretary of Defense for Acquisition and Sustainment, the F-35 Joint Program Office, and the Office of the Assistant Secretary of the Air Force for Acquisition, Technology, and Logistics. We interviewed officials from the prime contractor to determine their role in helping DOD mitigate risks regarding the long-term viability of ALIS. Additionally, we examined briefing documents from MIT Lincoln Labs, a federally funded research and development center assisting the F-35 Joint Program Office, on plans, timelines, and risks for modernizing the hardware and software. We interviewed officials from the Air Force's Kessel Run team to discuss their Mad Hatter initiative (intended to improve ALIS functionality), the viability of current ALIS software, and any risks associated with the future of ALIS. We conducted a site visit to Nellis Air Force Base to observe the Mad Hatter initiative and discuss its results and the future of ALIS software. Further, as discussed previously, we analyzed data from November 2017 through October 2019 on ALIS deficiencies. We reviewed reports from, and interviewed officials of, the Air Force Digital Service and the Director, Operational Test and Evaluation on the future viability of these long-term initiatives for ALIS. Finally, we analyzed and compared information obtained from interviews, site visits, and documents with applicable guidance to determine the extent to which DOD has an effective long-term plan for ALIS that addresses operational and financial risks. In support of our work, we interviewed officials from the following DOD organizations and other organizations during our review. We selected these organizations based on their oversight, planning, and/or execution roles related to F-35 ALIS operations.
Office of the Under Secretary of Defense for Acquisition and Sustainment, Arlington, Virginia
Office of the Director for Operational Test and Evaluation, Arlington, Virginia
Defense Contract Management Agency Lockheed Martin, Orlando, Florida
F-35 Joint Program Office, Arlington, Virginia
Office of the Assistant Secretary of the Air Force for Acquisition, Technology, and Logistics
Air Force F-35 Integration Office, Arlington, Virginia
Kessel Run Team, Hanscom Air Force Base, Massachusetts
Luke Air Force Base, Arizona
56th Maintenance Group
61st Aircraft Maintenance Unit
62nd Aircraft Maintenance Unit
Edwards Air Force Base, California
Nellis Air Force Base, Nevada
57th Aircraft Maintenance Squadron
Navy F-35 Integration Office, Arlington, Virginia
Naval Air Station Lemoore, California
Strike Fighter Wing Pacific
Strike Fighter Squadron 125
Strike Fighter Squadron 147
Marine Corps F-35 Integration Office
Marine Corps Air Station Yuma, Arizona
Marine Aircraft Group 13
Marine Aviation Logistics Squadron 13
Marine Fighter Attack Squadron 211
Marine Fighter Attack Squadron 122
Air Force Digital Service, Arlington, Virginia
Lockheed Martin Rotary and Mission Systems, Orlando, Florida
MIT Lincoln Laboratory, Lexington, Massachusetts
We conducted this performance audit from August 2018 to March 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Comments from the Department of Defense
Appendix III: GAO Contact and Staff Acknowledgments
<8. GAO Contact Staff Acknowledgments> Diana Maurer, (202) 512-9627, maurerd@gao.gov In addition to the contact named above, Alissa Czyz (Assistant Director), Matthew Bader, Vincent Buquicchio, Tracy Burney, Juana Collymore, Martin De Alteriis, Michael Holland, Jeff Hubbard, Clarice Ransom, and Elisa Yoshiara made key contributions to this report.
Related GAO Products
F-35 Aircraft Sustainment: DOD Faces Challenges in Sustaining a Growing Fleet. GAO-20-234T. Washington, D.C.: November 13, 2019.
Space Command and Control: Comprehensive Planning and Oversight Could Help DOD Acquire Critical Capabilities and Address Challenges. GAO-20-146. Washington, D.C.: October 30, 2019.
F-35 Joint Strike Fighter: Action Needed to Improve Reliability and Prepare for Modernization Efforts. GAO-19-341. Washington, D.C.: April 29, 2019.
F-35 Aircraft Sustainment: DOD Needs to Address Substantial Supply Chain Challenges. GAO-19-321. Washington, D.C.: April 25, 2019.
Cloud Computing: Agencies Have Increased Usage and Realized Benefits, but Cost and Savings Data Need to Be Better Tracked. GAO-19-58. Washington, D.C.: April 4, 2019.
DOD Space Acquisitions: Including Users Early and Often in Software Development Could Benefit Programs. GAO-19-136. Washington, D.C.: March 18, 2019.
F-35 Joint Strike Fighter: Development Is Nearly Complete, but Deficiencies Found in Testing Need to Be Resolved. GAO-18-321. Washington, D.C.: June 5, 2018.
Warfighter Support: DOD Needs to Share F-35 Operational Lessons Across the Military Services. GAO-18-464R. Washington, D.C.: April 25, 2018.
Military Aircraft: F-35 Brings Increased Capabilities, but the Marine Corps Needs to Assess Challenges Associated with Operating in the Pacific. GAO-18-79C. Washington, D.C.: March 28, 2018.
Information Technology Reform: Agencies Need to Improve Certification of Incremental Development. GAO-18-148. Washington, D.C.: November 7, 2017.
F-35 Aircraft Sustainment: DOD Needs to Address Challenges Affecting Readiness and Cost Transparency. GAO-18-75. Washington, D.C.: October 26, 2017.
F-35 Joint Strike Fighter: DOD's Proposed Follow-on Modernization Acquisition Strategy Reflects an Incremental Approach Although Plans Are Not Yet Finalized. GAO-17-690R. Washington, D.C.: August 8, 2017.
F-35 Joint Strike Fighter: DOD Needs to Complete Developmental Testing Before Making Significant New Investments. GAO-17-351. Washington, D.C.: April 24, 2017.
F-35 Joint Strike Fighter: Continued Oversight Needed as Program Plans to Begin Development of New Capabilities. GAO-16-390. Washington, D.C.: April 14, 2016.
F-35 Sustainment: DOD Needs a Plan to Address Risks Related to Its Central Logistics System. GAO-16-439. Washington, D.C.: April 14, 2016.
F-35 Joint Strike Fighter: Preliminary Observations on Program Progress. GAO-16-489T. Washington, D.C.: March 23, 2016.
F-35 Joint Strike Fighter: Assessment Needed to Address Affordability Challenges. GAO-15-364. Washington, D.C.: April 14, 2015.
F-35 Sustainment: Need for Affordable Strategy, Greater Attention to Risks, and Improved Cost Estimates. GAO-14-778. Washington, D.C.: September 23, 2014.
F-35 Joint Strike Fighter: Slower Than Expected Progress in Software Testing May Limit Initial Warfighting Capabilities. GAO-14-468T. Washington, D.C.: March 26, 2014.
F-35 Joint Strike Fighter: Problems Completing Software Testing May Hinder Delivery of Expected Warfighting Capabilities. GAO-14-322. Washington, D.C.: March 24, 2014.
F-35 Joint Strike Fighter: Restructuring Has Improved the Program, but Affordability Challenges and Other Risks Remain. GAO-13-690T. Washington, D.C.: June 19, 2013.
F-35 Joint Strike Fighter: Program Has Improved in Some Areas, but Affordability Challenges and Other Risks Remain. GAO-13-500T. Washington, D.C.: April 17, 2013.
F-35 Joint Strike Fighter: Current Outlook Is Improved, but Long-Term Affordability Is a Major Concern. GAO-13-309. Washington, D.C.: March 11, 2013.
Software Development: Effective Practices and Federal Challenges in Applying Agile Methods. GAO-12-681. Washington, D.C.: July 27, 2012.
Joint Strike Fighter: DOD Actions Needed to Further Enhance Restructuring and Address Affordability Risks. GAO-12-437. Washington, D.C.: June 14, 2012.
Joint Strike Fighter: Restructuring Added Resources and Reduced Risk, but Concurrency Is Still a Major Concern. GAO-12-525T. Washington, D.C.: March 20, 2012.
Joint Strike Fighter: Implications of Program Restructuring and Other Recent Developments on Key Aspects of DOD's Prior Alternate Engine Analyses. GAO-11-903R. Washington, D.C.: September 14, 2011.
Joint Strike Fighter: Restructuring Places Program on Firmer Footing, but Progress Is Still Lagging. GAO-11-677T. Washington, D.C.: May 19, 2011.
Joint Strike Fighter: Restructuring Places Program on Firmer Footing, but Progress Still Lags. GAO-11-325. Washington, D.C.: April 7, 2011.
Joint Strike Fighter: Restructuring Should Improve Outcomes, but Progress Is Still Lagging Overall. GAO-11-450T. Washington, D.C.: March 15, 2011.

Why GAO Did This Study
The F-35 is DOD's most ambitious and costly weapon system in history, with U.S. sustainment costs estimated at about $1.2 trillion over a 66-year life cycle. Central to the F-35 is ALIS—a complex system that supports operations, mission planning, supply-chain management, maintenance, and other processes. A fully functional ALIS is critical to the F-35's operational success. However, over the past 5 years GAO has reported on key risks associated with the system, such as challenges deploying the F-35 with ALIS, inaccurate data that reside in ALIS, and ineffective training for personnel who need to use ALIS.
GAO was asked to review DOD's efforts to improve ALIS. This report assesses the extent to which (1) improvements have been made over the past 5 years and challenges remain for ALIS users, and (2) DOD is taking actions to enhance the long-term viability of the system. GAO reviewed F-35 and ALIS program documentation and data, interviewed DOD officials and contractor employees, and visited five U.S. F-35 sites.
What GAO Found
The Autonomic Logistics Information System (ALIS) is integral to supporting the F-35 fighter jet's operations and maintenance. F-35 personnel at 5 locations GAO visited agreed that ALIS is performing better in some aspects, such as faster processing speeds for some tasks. However, problems with ALIS continue to pose significant challenges for F-35 personnel (see figure).
The Department of Defense (DOD) has not (1) developed a performance measurement process for ALIS, which GAO recommended in 2014, or (2) determined how ALIS issues affect F-35 fleet readiness. Without efforts in these areas, DOD will be hindered in addressing ALIS challenges and improving aircraft readiness.
DOD and the prime contractor have a variety of initiatives underway for re-designing ALIS. However, these initiatives involve differing approaches and technical and programmatic uncertainties are hindering the re-design effort (see figure).
DOD has not developed a strategy for the future of ALIS that includes goals of the re-design, an assessment of key risks, or costs. Without this, DOD may not be able to coordinate various ALIS design-improvement initiatives that are under way or meaningfully enhance the system over the long term.
What GAO Recommends
GAO is recommending that DOD track how ALIS is affecting readiness of the F-35 fleet and develop a strategy for the ALIS re-design. In addition, GAO believes that Congress should consider requiring DOD to develop a performance measurement process for ALIS. DOD concurred with both of GAO's recommendations.
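To illustrate, in general terms, what tracking ALIS's effect on fleet readiness could involve, the sketch below rolls hypothetical downtime records into an ALIS-attributable share of total downtime. The records, cause categories, and field names are invented for the example and do not reflect DOD data or metrics.

```python
# Hypothetical illustration of tracking how ALIS-related issues contribute
# to aircraft downtime. Records, fields, and values are invented.
from collections import defaultdict

# (aircraft tail number, downtime hours, cause category)
downtime_records = [
    ("AF-001", 12.0, "alis_connectivity"),
    ("AF-001", 30.0, "supply"),
    ("AF-002", 8.0, "alis_data_error"),
    ("AF-003", 22.0, "scheduled_maintenance"),
    ("AF-003", 16.0, "alis_connectivity"),
]

ALIS_CAUSES = {"alis_connectivity", "alis_data_error"}

def alis_downtime_share(records) -> float:
    """Fraction of total downtime hours attributed to ALIS-related causes."""
    totals = defaultdict(float)
    for _tail, hours, cause in records:
        totals["alis" if cause in ALIS_CAUSES else "other"] += hours
    total = totals["alis"] + totals["other"]
    return totals["alis"] / total if total else 0.0

print(f"ALIS-attributable downtime share: {alis_downtime_share(downtime_records):.0%}")
```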
<1. Background>

On September 6, 2017, the eye of Hurricane Irma traveled about 50 nautical miles to the north of the northern shore of Puerto Rico as a category 5 hurricane. Less than two weeks later, Hurricane Maria made landfall as a category 4 hurricane on the main island of Puerto Rico on the morning of September 20, 2017, with wind speeds up to 155 miles per hour. The center of the hurricane moved through southeastern Puerto Rico to the northwest part of the island, as shown in figure 1 below. In response to the request of the governor of Puerto Rico, the president declared a major disaster the day after each hurricane impacted Puerto Rico. Major disaster declarations can trigger a variety of federal response and recovery programs, including assistance through FEMA's Public Assistance program. Under the National Response Framework, DHS is the federal department with primary responsibility for coordinating disaster response, and within DHS, FEMA has lead responsibility.

<1.1. FEMA's Public Assistance Program>

FEMA's Public Assistance program provides grant funding to state, territorial, local, and tribal governments, as well as certain types of private nonprofit organizations, to assist them in responding to and recovering from major disasters or emergencies. As shown in figure 2, Public Assistance program funds are categorized broadly as either emergency work or permanent work. Within those two broad categories are separate sub-categories. In addition to the emergency work and permanent work categories, FEMA's Public Assistance program includes Category Z, which represents any indirect costs, any direct administrative costs, and any other administrative expense associated with a specific project.

<1.2. Entities Involved in Puerto Rico's Recovery>

Given the immense scale and scope of devastation, disaster recovery in Puerto Rico is a complex and dynamic process involving a large number of entities. As shown in figure 3, implementing the Public Assistance program involves recovery partners from the federal government; the Commonwealth of Puerto Rico; and Puerto Rico government agencies, public corporations, municipalities, and eligible nonprofits in Puerto Rico. These recovery partners play a role in implementing the Public Assistance program by developing projects and providing or receiving grants and sub-grants (subawards).

FEMA. FEMA administers the Public Assistance program in partnership with Puerto Rico and makes Public Assistance grant funding available to Puerto Rico.

Puerto Rico Central Office of Recovery, Reconstruction and Resilience. Puerto Rico was required, as a condition to receiving Public Assistance grant funding, to establish an oversight authority supported by third-party experts and provide centralized oversight over recovery funds. In October 2017, the governor of Puerto Rico established the Central Office of Recovery, Reconstruction, and Resilience (central recovery office) to be the recipient for all Public Assistance funding consistent with the conditions provided in Amendment 5 to the President's disaster declaration. The central recovery office is a non-federal entity that provides a subaward to an applicant to carry out part of the federal program. As a recipient of federal funds, the central recovery office must oversee subrecipients to ensure that they are aware of and comply with federal regulations.
According to central recovery office officials, the office was also established to ensure coordination with FEMA across the numerous partners in recovery. Commonwealth agencies, local entities, and private non-profits. Puerto Rico s agencies, such as the Department of Housing, and public corporations, such as the Puerto Rico Electric Power Authority, act as subrecipients. Specifically, they work with FEMA and the central recovery office to identify, develop, and implement Public Assistance projects. Local entities, including Puerto Rico s 78 municipalities and eligible private non-profits that provide critical services, are also subrecipients of FEMA Public Assistance funding. As subrecipients, these entities receive subawards from the central recovery office to carry out work under the Public Assistance program. <1.3. Alternative Procedures for Public Assistance Funds> According to a November 2017 amendment to Puerto Rico s major disaster declaration, FEMA must obligate all large project funding for Public Assistance permanent work through alternative procedures due to the extraordinary level of infrastructure damage caused by Hurricane Maria, as well as Puerto Rico s difficult financial position. To develop projects under the Public Assistance program, FEMA and Puerto Rico officials are to collaborate to identify and document the damage caused by a disaster to a particular facility. These officials are to then use the damage description to formulate the scope of work or activities required to fix the identified damage as well as the estimated cost of these activities. Under the standard Public Assistance program, FEMA will fund the actual cost of a large project, and will increase or reduce the amount of funding based on the cost of completed eligible work. In contrast, in Puerto Rico, the alternative procedures require that the central recovery office and subrecipients work collaboratively with FEMA to develop a fixed cost estimate. According to FEMA officials, once this fixed cost estimate is agreed to and obligated, subrecipients have flexibility within that fixed cost estimate to rebuild in the manner that they find most appropriate. Subrecipients could do the actual work used to develop the fixed cost estimate, or they could put funds towards another FEMA approved project. Unlike the standard Public Assistance program, the subrecipient is responsible for actual costs that exceed the fixed cost estimate. If actual costs are less than the fixed cost estimate, the subrecipient may use all or part of excess funds for other eligible purposes, such as for additional cost-effective hazard mitigation measures to increase the resiliency of public infrastructure, as detailed in figure 4 below. <1.4. The Bipartisan Budget Act of 2018> Section 20601 of the Bipartisan Budget Act of 2018 authorized FEMA, when using the alternative procedures, to provide assistance to fund the replacement or restoration of disaster-damaged infrastructure that provides critical services such as medical and educational facilities to an industry standard without regard to pre-disaster condition. It also allows for restoration of components not damaged by the disaster when necessary to fully effectuate restoration of the disaster-damaged components to restore the function of the facility or system to industry standards. 
For example, through the Act, FEMA may fund the restoration of a disaster-damaged school building which provides a critical service to accepted industry standards applicable to the construction of education facilities. Therefore, according to FEMA policy, if the school building was not up to industry standards, or in poor condition prior to the 2017 hurricanes, the Act allows FEMA to fund the restoration of this building to a better condition than it was in prior to the storms. Further, the Additional Supplemental Appropriations for Disaster Relief Act of 2019 (Supplemental Relief Act), which was signed into law on June 6, 2019, provides additional direction to FEMA in the implementation of section 20601. Following the Supplemental Relief Act, FEMA issued additional guidance in September 2019 that includes information on eligibility and applicable industry standards. <2. FEMA Obligated Nearly $6 Billion for Public Assistance in Puerto Rico as of September 2019, but FEMA and Puerto Rico Face Significant Challenges in Developing Projects> <2.1. Status of FEMA Public Assistance Funding in Puerto Rico> Since the 2017 hurricanes, FEMA has obligated nearly $6 billion in Public Assistance program funding for 1,558 projects across Puerto Rico, according to our analysis of FEMA s data as of September 30, 2019 (see fig. 5). Specifically, FEMA had obligated approximately $5.1 billion for emergency work projects (categories A and B), $487 million for permanent work projects (categories C through G), and $315 million for management costs (Category Z). Of the nearly $6 billion FEMA has obligated, Puerto Rico has expended approximately $3.9 billion as of September 30, 2019 about 65 percent of total Public Assistance program obligations to Puerto Rico to reimburse subrecipients for completed work. As shown in table 1, Puerto Rico has expended about $3.7 billion for emergency work projects, $39 million for permanent work projects, and $104 million for management costs. The majority of FEMA s obligations and the funding Puerto Rico expended as of September 30, 2019, are for emergency work because these projects began soon after the disasters struck and focused on debris removal and providing assistance to address immediate threats to life and property. In contrast, permanent work projects take time to identify, develop, and ultimately complete as they represent the longer- term repair and restoration of public infrastructure, such as a sports center in Caguas, Puerto Rico, as shown in figure 6 below. <2.2. FEMA and Puerto Rico Face Significant Challenges in Developing Public Assistance Projects> FEMA and Puerto Rico officials identified challenges in developing Public Assistance projects in Puerto Rico. Specifically, they cited: (1) delays in establishing a cost estimating guidance for projects in Puerto Rico, (2) the large number of damaged sites that require finalized fixed cost estimates, and (3) challenges with the implementation of the flexibilities authorized by section 20601 of the Bipartisan Budget Act. Delays in establishing cost estimating guidance. Given the importance of reaching mutual agreement on fixed cost estimates for alternative procedures projects, FEMA and Puerto Rico have taken a deliberative approach to establishing the data and procedures that will be used to develop these fixed cost estimates. 
This includes, among other things, adapting the way FEMA estimates costs to the specific post- disaster economic conditions in the territory, including developing exceptions to FEMA s cost estimating guidance. According to FEMA, these exceptions were developed to account for risk, including higher anticipated costs due to increased demand for labor, equipment, and materials in Puerto Rico s post-disaster economy. To develop these exceptions, FEMA and the central recovery office established a Center of Excellence staffed with mutually agreed upon representatives. FEMA used cost estimators from RAND Corporation (RAND) as their chosen representatives, while the central recovery office hired separate contractors as their representatives. According to FEMA officials, the Center of Excellence was established, among other things, to involve Puerto Rico in developing cost estimating guidance and to ensure that the exceptions made to FEMA s Cost Estimating Format were agreeable to both parties. However, this approach has been beset by delays. For example, it took nearly one year for Puerto Rico to hire its chosen representatives to the Center of Excellence. According to FEMA, the central recovery office did not select members for the Center of Excellence until February 2019, which delayed progress on the development of finalized fixed cost estimates for permanent work. In July 2019, FEMA leadership signed an agreement establishing the exceptions to FEMA s cost estimating guidance based on an assessment conducted by a panel of FEMA engineers. These exceptions are intended to address certain costs specific to post-disaster conditions in Puerto Rico, for example adjustments to account for increased labor and material costs. Large number of damaged sites requiring a fixed cost estimate. In addition, FEMA and Puerto Rico officials have cited the large number of sites requiring damage assessments, project development, and mutually agreed-upon fixed cost estimates as a challenge. As of September 30, 2019, FEMA identified a total of 9,344 damaged sites in various stages of development. According to FEMA, 6,304 sites (67.5 percent of total sites identified) have completed damage assessments; 3,021 sites (32.3 percent of total sites identified) are pending the completion of damage assessments to begin project development; and 19 projects (0.2 percent of total sites identified) have finalized fixed cost estimates. According to FEMA guidance, October 11, 2019, was the deadline for completing fixed cost estimates for Public Assistance alternative procedures projects. However, on October 8, 2019, officials from the central recovery office requested an extension to the deadline, which FEMA granted. FEMA officials acknowledged that significant work remains on the part of Puerto Rico, subrecipients, and FEMA towards developing fixed cost estimates for all Public Assistance alternative procedures projects in Puerto Rico. According to FEMA officials, as of October 2019, FEMA and Puerto Rico are working together to establish specific time frames for the completion of fixed cost estimates. Implementation challenges with Section 20601 of the Bipartisan Budget Act of 2018. Puerto Rican government and FEMA officials identified challenges with the implementation of the flexibilities authorized by section 20601 of the Bipartisan Budget Act. 
As previously discussed, this section of the Act allows for the provision of assistance under the Public Assistance alternative procedures to restore disaster-damaged facilities or systems that provide critical services such as medical and educational facilities to an industry standard without regard to pre-disaster condition. Officials from Puerto Rico s central government stated that they disagreed with FEMA s interpretation of the types of damages covered by section 20601 of the Bipartisan Budget Act of 2018. In response, FEMA officials in Puerto Rico stated they held several briefings with Puerto Rico s central recovery office to explain FEMA s interpretation of the section, and released new guidance in September of 2019. It is too soon to assess the impact this guidance may have on current and future projects, but we will continue to examine this in future work. We will continue to monitor the status of FEMA s cost estimating process, the development of the remaining fixed cost estimates for permanent work and the impact of FEMA s new guidance on the implementation of section 20601 of the Bipartisan Budget Act. <3. FEMA Has Adapted Cost Estimating Guidance to Specific Conditions in Puerto Rico, but Could Take Further Action to Fully Align the Guidance with Best Practices> <3.1. FEMA Has Adapted Its Guidance to Estimate Public Assistance Costs to Address Post-Disaster Conditions in Puerto Rico> As Puerto Rico is responsible for any costs that exceed fixed cost estimates for large infrastructure projects under the alternative procedures, FEMA has adapted its guidance for estimating costs to ensure that these estimates accurately reflect the total costs of Public Assistance projects. As previously mentioned, FEMA and Puerto Rico established a Center of Excellence to develop proposed exceptions to adapt FEMA s Cost Estimating Format the agency s standard guidance used for Public Assistance cost estimating nationwide to more accurately estimate costs in Puerto Rico. After consideration of these proposals, FEMA approved two exceptions: (1) a cost factor to account for local labor, equipment, and material costs in Puerto Rico, and (2) a future price factor and price curve to account for anticipated rises in construction costs over time due to the massive influx of disaster recovery funds, coupled with limited material and labor resources in Puerto Rico. Cost Factor: According to FEMA officials, during the development of a cost factor by the Center of Excellence, FEMA learned that Gordian, a company that provides local cost indices called RSMeans which FEMA uses as part of their standard Cost Estimating Format, was developing four localized cost indices to apply to San Juan, urban areas, rural areas, and remote island (the islands of Vieques and Culebra) areas of Puerto Rico. FEMA officials told us that these cost indices compile location-specific construction costs for each of the four areas. In 2019, a panel of FEMA engineers assessed the methodologies proposed by RAND, the Center of Excellence, and the RSMeans localized indices for Puerto Rico. On July 12, 2019, in agreement with the panel s assessment, FEMA decided to use RSMeans s localized cost indices to act as the cost factor for fixed cost estimates in Puerto Rico beginning on September 27, 2019. For fixed cost estimates developed before this date, FEMA used a different cost index that RSMeans had previously developed for San Juan. 
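To give a rough sense of how a localized cost factor enters an estimate, the sketch below applies area-specific location factors and a single escalation factor to a hypothetical base estimate. The factor values, area labels, and dollar amounts are invented placeholders; they are not the RSMeans indices or the future price factor that FEMA adopted.

```python
# Hypothetical illustration of applying a localized cost factor to a base
# estimate. The location factors and escalation value below are made-up
# placeholders, not the RSMeans indices referenced in this report.

BASE_ESTIMATE_NATIONAL = 2_000_000  # hypothetical national-average estimate ($)

LOCATION_FACTORS = {   # illustrative values only
    "san_juan": 1.02,
    "urban": 1.06,
    "rural": 1.11,
    "remote_island": 1.25,   # e.g., the islands of Vieques and Culebra
}

FUTURE_PRICE_FACTOR = 1.08  # placeholder for anticipated cost escalation

def localized_estimate(base: float, area: str, escalate: bool = True) -> float:
    """Scale a base estimate by an area cost factor and, optionally,
    by a future price factor for expected post-disaster escalation."""
    estimate = base * LOCATION_FACTORS[area]
    if escalate:
        estimate *= FUTURE_PRICE_FACTOR
    return round(estimate, 2)

if __name__ == "__main__":
    for area in LOCATION_FACTORS:
        print(area, localized_estimate(BASE_ESTIMATE_NATIONAL, area))
```

The point of the sketch is simply that the same scope of work carries a different estimated cost depending on where in Puerto Rico it is performed and when it is expected to be built.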
According to FEMA, cost estimates signed before September 27, 2019 using RSMeans s San Juan cost index as the cost factor are considered final. Future Price Factor and Curve: According to FEMA officials, FEMA began using a future price factor an economic model based on expected construction conditions to estimate construction costs across ten years in July 2019 to estimate costs in Puerto Rico. FEMA is using this future price factor along with the cost factor. FEMA has also asked RAND to develop a future price curve, an analysis that will adjust as time goes on to account for changing economic conditions, to eventually replace the future price factor. FEMA estimates that RAND will take until November 2019 to develop the future price curve, and that the future price factor is being used in the meantime. FEMA officials stated that cost estimates produced using the future price factor are considered final and will not be eligible for revisions in the future once FEMA implements the future price curve. According to FEMA officials, the use of the cost factor combined with the future price factor and curve are intended to adapt FEMA s cost estimating guidance to the specific post-disaster economic conditions in Puerto Rico. <3.2. FEMA Cost Estimating Guidance Met Most Cost Estimating Best Practices, but FEMA Could Take Further Action to Fully Align with Best Practices> FEMA s cost estimating guidance for Public Assistance fully or substantially met nine of the 12 steps from GAO s Cost Estimating and Assessment Guide (GAO Cost Guide). However, the guidance partially met two and minimally met one of the remaining cost estimating steps, as shown in figure 7 below. The GAO Cost Guide outlines best practices for cost estimating and presents 12 steps that, when incorporated into an agency s cost estimating guidance, should result in reliable and valid cost estimates that management can use to make informed decisions. A reliable cost estimate is critical to the success of any construction program. Such an estimate provides the basis for informed decision making, realistic budget formulation and program resourcing, and accountability for results. For example, FEMA, Puerto Rico and subrecipients rely on cost estimates to help ensure that funding is sufficient for the costs of the Public Assistance projects carried out under the fixed cost estimate. Accurate and reliable cost estimating is especially important in Puerto Rico where all large permanent Public Assistance projects are being developed under the alternative procedures, which require a fixed cost estimate that cannot be revised once the award is made. Given Puerto Rico s financial situation, accurate cost estimates are necessary so that Puerto Rico has adequate funds to complete Public Assistance projects. For example, on the basis of our analysis, we determined that FEMA s guidance fully met the step to define the estimate s purpose because it describes the estimate s purpose, level of detail required, and overall scope. In addition, the guidance provides a time frame for which the estimates must be developed and reach agreement. FEMA s guidance substantially met another step, identify the ground rules and assumptions , because it provides measures to ensure assumptions are not arbitrary, are founded on expert judgments, and are documented. However, we rated this step as substantially met instead of fully met because FEMA s guidance does not address all of GAO s best practices for ground rules and assumptions. 
For example, it does not discuss the risk of an assumption being incorrect and the resultant effect on the cost estimate. Additionally, FEMA guidance substantially met the step to document the estimate because it contains, among other things, basic information about the project and the estimate; a description of the scope of work; the basis for the estimate; and supporting backup information. However, we assessed this step as substantially met instead of fully met because FEMA policy does not require documentation to include a discussion of high risk areas. Further, we found that FEMA s guidance for cost estimating does not fully or substantially meet three steps: (1) conduct a sensitivity analysis; (2) obtain the data; and (3) conduct a risk and uncertainty analysis. Sensitivity analysis (Minimally met): We found that FEMA s cost estimating guidance only minimally met the best practice regarding sensitivity analysis. A sensitivity analysis addresses some of the uncertainty in a cost estimate by testing assumptions and other factors that could change cost. By examining each assumption or factor independently, while holding all others constant, the cost estimator can evaluate the results to discover which assumptions or factors most influence the estimate. A sensitivity analysis also requires estimating the high and low uncertainty ranges for significant cost driver input factors. According to the GAO Cost Guide, when an agency does not identify the effect of uncertainties associated with different assumptions, this increases the chance that decisions will be made without a clear understanding of these impacts on costs. According to FEMA officials, FEMA s cost estimating guidance accounts for construction, cost, and market risks over time which allows FEMA to plan and estimate costs for unknown or unforeseen circumstances such as cost escalation or overhead. In addition, FEMA officials stated that their use of RSMeans unit costs, a benchmark industry standard based on ongoing iterative analysis of construction costs nationwide, allows FEMA to account for fluctuations and uncertainties in the market. However, we rated this step as minimally met because FEMA guidance does not indicate that cost estimators are to conduct a sensitivity analysis as part of FEMA s cost estimating process. Specifically, the guidance does not require that an estimator examine the effect of changing assumptions and the effect these changes could have on a cost estimate. Since the guidance does not direct estimators to conduct a sensitivity analysis, estimators may not fully understand which variable most affects the cost estimate and FEMA risks making decisions without a clear understanding of the impact of costs. Obtaining the data (Partially met): We found that FEMA s cost estimating guidance only partially met the best practice for obtaining data assembling information to serve as the foundation of a cost estimate. The quality of the data obtained affects a cost estimate s overall credibility. Depending on the data quality, an estimate can range from a mere guess to a highly defensive cost position. We found that FEMA did not meet some of the best practices for obtaining data. Specifically, FEMA s guidance did not outline procedures for making sure data was validated using historical data as a benchmark for reasonableness. In addition, FEMA s guidance did not stipulate that data be normalized to remove the effects of inflation or analyzed with a scatter plot to determine trends and outliers. 
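The normalization and screening practices described in this step can be illustrated with a short sketch. The historical costs and price index below are invented for the example, and the 50 percent screen is an arbitrary illustrative threshold rather than a GAO or FEMA rule; it stands in for the scatter-plot review the Cost Guide describes.

```python
# Illustrative sketch: restate historical project costs in constant dollars
# and flag possible outliers before reusing them in an estimate.
# All years, costs, and index values are hypothetical.
from statistics import median

# (year, recorded cost in then-year dollars)
historical_costs = [(2014, 1_150_000), (2015, 1_240_000), (2016, 1_300_000),
                    (2017, 2_900_000), (2018, 1_420_000)]

# Hypothetical price index used to restate each year in 2018 dollars.
price_index = {2014: 0.92, 2015: 0.94, 2016: 0.96, 2017: 0.98, 2018: 1.00}

def to_constant_dollars(year: int, cost: float, base_year: int = 2018) -> float:
    """Remove the effects of inflation by restating a cost in base-year dollars."""
    return cost * price_index[base_year] / price_index[year]

normalized = [(year, to_constant_dollars(year, cost))
              for year, cost in historical_costs]

# Crude outlier screen: flag any normalized cost more than 50 percent away
# from the median (a scatter plot would show the same thing visually).
med = median(value for _, value in normalized)
outliers = [(year, round(value)) for year, value in normalized
            if value > 1.5 * med or value < med / 1.5]

print([(year, round(value)) for year, value in normalized])
print("possible outliers:", outliers)
```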
As mentioned previously, FEMA used a city cost index based on San Juan as an interim measure to estimate costs throughout Puerto Rico until September, 2019 when FEMA began using additional cost indices to target costs in particular regions of Puerto Rico. Similarly, FEMA has been using a static future price factor as an interim measure until a more dynamic and iterative future price curve is finalized. FEMA does not plan to adjust cost estimates developed using these interim measures. Without adjusting these costs when better data becomes available consistent with the obtaining the data step, FEMA risks creating estimates that may not be based on accurate data. According to FEMA, estimates are developed based on historical costs or nationally available industry standard data. In addition, FEMA officials stated that FEMA does not revisit cost estimates to reflect updated market conditions or newly available cost information because FEMA uses an industry standard cost database that is updated quarterly. FEMA officials stated that the interim measures used to estimate costs are intended to enable work to continue and cost estimates to be developed while the future cost curve is being developed. However, we rated the step relating to obtaining data as partially met because without finalizing the future cost curve, and updating estimates to reflect this information, estimates may not be based on accurate data. Additionally, while the use of industry standard cost estimating resources addresses some best practices for this step such as data normalization and data validation, industry data is only one of many sources referenced in FEMA s guidance. For other data sources identified, FEMA guidance does not describe a process to analyze the data for cost drivers or to adequately document the data. Risk and uncertainty analysis (Partially met): We found that FEMA s cost estimating guidance does not include best practices consistent with performing a statistical analysis of risk to determine a range of possible costs and the level of confidence in achieving the estimate. By conducting a risk and uncertainty analysis, a cost estimator can model the effect of schedules slipping and missions changing, allowing for a known range of potential costs. Having a range of costs around a point estimate is useful to decision makers because it conveys the level of confidence in achieving the most likely cost and informs estimators about potential risks. We found that FEMA s cost estimating guidance does not require a statistical analysis of risks to be performed to determine a range of possible costs. While contingencies are accounted for within the guidance, they are not derived from a statistical analysis, nor do they reflect a level of confidence in the estimate. According to FEMA, risks associated with changing costs and conditions over the life of a Public Assistance alternative procedures construction project is not a risk that the federal government takes on. Rather, the risk is transferred to the recipient and subrecipients responsible for executing work using Public Assistance alternative procedures funding. In addition, FEMA officials told us that alternative procedures funding is not always used to restore facilities to pre-disaster condition, and therefore may not represent the final cost of work completed. In addition, the procedures are designed to incentivize subrecipients to manage grants and use excess funds for eligible work, as described earlier. 
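To make the risk transfer just described concrete, the following minimal sketch uses hypothetical dollar amounts to show where cost responsibility falls once a fixed cost estimate is agreed: overruns are borne by the subrecipient, while underruns can be put toward other eligible work such as hazard mitigation. The amounts and field names are illustrative only.

```python
# Illustrative sketch of where cost risk sits under a fixed cost estimate.
# Dollar amounts and names are hypothetical, not FEMA figures.

def fixed_cost_outcome(fixed_estimate: float, actual_cost: float) -> dict:
    """Under the alternative procedures, the obligation is capped at the
    agreed fixed cost estimate: overruns fall to the subrecipient, and
    underruns may be used for other eligible work."""
    if actual_cost > fixed_estimate:
        return {"fema_funding": fixed_estimate,
                "subrecipient_covers": actual_cost - fixed_estimate,
                "excess_available_for_eligible_work": 0.0}
    return {"fema_funding": fixed_estimate,
            "subrecipient_covers": 0.0,
            "excess_available_for_eligible_work": fixed_estimate - actual_cost}

if __name__ == "__main__":
    fixed = 10_000_000  # hypothetical agreed fixed cost estimate
    for actual in (8_600_000, 11_300_000):   # an underrun, then an overrun
        print(f"actual ${actual:,}:", fixed_cost_outcome(fixed, actual))
```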
However, GAO s Cost Guide states that point estimates alone are insufficient for good decision- making. For management to make good decisions, the program estimate must reflect the degree of uncertainty, so that a level of confidence can be given about the estimate regardless of the entity holding the risk. In the case of alternative procedures projects in Puerto Rico, where actual costs that exceed the estimate are borne by the recipient or subrecipient, estimates that accurately reflect the degree of uncertainty are important in establishing a level of confidence about the estimate. While FEMA fully or substantially met nine of the 12 steps in the GAO Cost Guide, FEMA could improve its cost estimating guidance to ensure that all best practices in the 12 steps in the GAO Cost Guide are fully met. In doing so, FEMA could further enhance the reliability of its cost estimating guidance. <4. FEMA Has Developed Public Assistance Program Policies and Guidance over Time for Puerto Rico, but Recovery Partners Reported Challenges> <4.1. FEMA Public Assistance Program Policies and Guidance for Puerto Rico> In response to the complexity of the recovery, as well as the nature of change in a recovery environment, FEMA has developed and issued guidance that is specific to the implementation of the Public Assistance program in Puerto Rico. As previously discussed, disaster recovery in Puerto Rico is a complex and dynamic process that requires the coordination of many entities, including FEMA, the government of Puerto Rico, and numerous subrecipients. Recovery in Puerto Rico also involves the use of Public Assistance structures including alternative procedures and new flexibilities afforded to FEMA under the Bipartisan Budget Act of 2018. FEMA officials told us that many elements of the Public Assistance process in Puerto Rico are the same as in other declared disasters across the United States. Therefore, according to FEMA officials, the standard guidance for the Public Assistance program, Public Assistance Policy and Procedures Guide (Policies and Procedures Guide), generally applies in Puerto Rico. FEMA has also developed policies and guidance to address the specific recovery circumstances in Puerto Rico. For example, in April of 2018 and September of 2019, FEMA published the Public Assistance Alternative Procedures Guide for Permanent Work to clarify how FEMA would implement the program in Puerto Rico. This guidance describes the scope and limitations of the alternative procedures; highlights changes to aspects of the Public Assistance program to which these procedures apply; identifies responsibilities for certain activities; and documents timelines for key actions and decisions. FEMA also issued a policy on the agency s implementation of section 20601 of the Bipartisan Budget Act as it applies in Puerto Rico in September of 2018, detailing the applicability of the section to specific critical services and outlining eligible industry standards for purposes of authorized projects, among other things. Following the Supplemental Relief Act, FEMA issued guidance in September 2019 that includes additional information on eligibility and applicable industry standards. According to FEMA officials, FEMA has also developed and implemented training specific to recovery in Puerto Rico. This training has included presentations to the central recovery office and subrecipients on the flexibilities of the Bipartisan Budget Act and alternative procedures, among other things. <4.2. 
Recovery Partners in Puerto Rico Identified Challenges with the Accessibility of FEMA Public Assistance Policies and Guidance> FEMA has iteratively developed, refined, and clarified Public Assistance guidance in Puerto Rico to respond and adapt to changing recovery conditions since the 2017 hurricanes. While iterative and responsive guidance is necessary in a complex and changing recovery, the pace of change necessitates that all involved recovery entities have real-time accessibility to current applicable FEMA guidance. Officials from the central recovery office and four Puerto Rico government agencies we spoke with stated that they did not consistently have the guidance they needed to implement the Public Assistance program. For example, an official from one Puerto Rico agency said that they delayed starting on any large Public Assistance projects through alternative procedures because they were waiting for FEMA to issue additional guidance. Similarly, we reported in March 2019 that four municipal officials stated that they were waiting on additional instruction from FEMA to establish more clear and consistent guidance to begin projects in Puerto Rico. According to FEMA officials, the agency works with Puerto Rico government officials and subrecipients to provide relevant guidance and technical assistance throughout the Public Assistance project development process. However, we found that pertinent guidance may not be shared with key recovery partners. For example, FEMA officials told us that the Standard Operating Procedure for Alternative Procedures (SOP) was available as of March 2019, but remains in draft form as of October 2019, pending finalized information about cost estimating procedures. This SOP provides instruction on specific procedures to implement the Public Assistance alternative procedures guide. In April 2019, FEMA officials described the SOP as a living document ; they also stated that the draft SOP is in effect and has been sent to the central recovery office for further dissemination to subrecipients. While the SOP document is still in draft, according to FEMA officials, it is operative guidance that FEMA expects the central recovery office to disseminate to subrecipients. However, in June 2019, central recovery office personnel told us they did not view the SOP as being in effect as it was still in draft form. As such, central recovery office officials stated they had not distributed the SOP to subrecipients. FEMA officials stated that they rely on the central recovery office to disseminate at least some FEMA guidance and policy to subrecipients in Puerto Rico, including municipalities and government agencies. As the recipient for all Public Assistance funding in Puerto Rico, the central recovery office is responsible for monitoring and providing technical assistance to subrecipients to ensure that federal funding is used in accordance with federal statutes, regulations, and the requirements of the grant. FEMA officials also stated that subrecipients have an assigned FEMA point of contact to assist them through the project development process, including communicating policy information and updates. However, municipal and Puerto Rico agency officials we spoke to said that confusion persisted in part due to changing points of contact. FEMA s reliance on the central recovery office or individual FEMA staff to deliver and distribute FEMA guidance poses a risk that the guidance is not made accessible to all partners involved in recovery, including subrecipients. 
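To make the accessibility concern concrete, the sketch below shows one hypothetical way a shared inventory of guidance documents could record which version of each document is currently in effect. The titles, version labels, and dates are placeholders for illustration; this is a sketch of the general idea, not an existing FEMA or central recovery office system.

```python
# Minimal sketch of a guidance repository manifest. Document titles,
# versions, and dates below are placeholders, not an actual FEMA inventory.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class GuidanceDocument:
    title: str
    version: str
    effective: date
    superseded_by: Optional[str] = None   # version label of the replacement

    @property
    def in_effect(self) -> bool:
        return self.superseded_by is None

repository = [
    GuidanceDocument("Alternative Procedures Guide (example)", "v1",
                     date(2018, 4, 1), superseded_by="v2"),
    GuidanceDocument("Alternative Procedures Guide (example)", "v2",
                     date(2019, 9, 1)),
    GuidanceDocument("Section 20601 Implementation Policy (example)", "v1",
                     date(2018, 9, 1)),
]

def current_guidance(docs):
    """Return only the documents that have not been superseded."""
    return [d for d in docs if d.in_effect]

for doc in current_guidance(repository):
    print(f"{doc.title} {doc.version} (effective {doc.effective})")
```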
While FEMA officials told us that FEMA assigns a point of contact to subrecipients to provide guidance and other necessary information throughout the project development process, Puerto Rico officials described a significant amount of back and forth with FEMA regarding requests for clarification, guidance, or instruction. FEMA officials acknowledge that FEMA has faced difficulties in disseminating information in Puerto Rico. According to FEMA officials, FEMA does not maintain a repository of Public Assistance policies and guidance available to all relevant recovery partners. The accessibility of FEMA guidance is especially important because FEMA releases iterative guidance to respond and adapt to changing recovery circumstances, such as updated legislation, among other things. Misunderstandings across recovery partners about guidance applicability raise concerns that subrecipients do not understand which guidance is currently in effect or how they should proceed in accordance with FEMA policy. Without real-time access to the totality of FEMA s current applicable guidance, recovery partners risk using guidance that has been revised or replaced. According to FEMA s National Disaster Recovery Framework, the federal government has the role of ensuring that information is distributed in an accessible manner such that all partners are informed of and aware of the recovery process. Developing a repository of current applicable policy and guidance and making it available to all relevant recovery partners in Puerto Rico, including subrecipients, would improve the accessibility of the information and provide greater assurance that recovery partners are aware of current applicable guidance. <5. Puerto Rico and FEMA Have Structures in Place to Manage and Oversee Public Assistance Funding and FEMA Has Instituted Additional Controls to Mitigate Risk> <5.1. Puerto Rico Established an Office to Manage and Oversee Public Assistance Funding and Help Ensure Compliance with FEMA Policy> Following the 2017 hurricanes, Puerto Rico took several steps to provide management and oversight of the Public Assistance program to ensure the program is implemented in compliance with applicable laws and regulations, as well as FEMA policies and guidance. Specifically, Puerto Rico (1) established a central recovery office to provide management and oversight of recovery funds; (2) developed an administrative plan, as required by FEMA policy; (3) developed an internal controls and recovery management plan; and (4) created a system to oversee and assess subrecipient risk. First, in accordance with Amendment 5 to the President s disaster declaration, the central recovery office has been supported by third-party experts to help it establish its structure and carry out its management and oversight mission. Specifically, the central recovery office has hired contractors to help perform the following functions: Design a management guide and assess subrecipient risk. According to central recovery office officials, the office hired contractors to develop management protocols and guidance to ensure compliance with federal and state law, regulation, and guidance. The office also tasked these contractors with developing a system to oversee subrecipients using risk-based oversight. Provide technical assistance. Central recovery office officials also hired contractors to provide technical assistance and advise Puerto Rico s government agencies and municipalities regarding recovery processes. 
This includes helping subrecipients define the scope of damages, and providing technical assistance to develop Public Assistance projects, among other things. The recovery office also tasked these contractors with overseeing grant accounting and reviewing reimbursement requests from subrecipients for eligible Public Assistance work performed. Develop data systems to track the central recovery office s work. The central recovery office launched an online transparency portal, with the assistance of contractors, that is intended to provide a breakdown of FEMA Public Assistance and other federal funding made available for disaster recovery in Puerto Rico. According to central recovery office officials, in addition to the development of the online transparency portal, contractor personnel also developed systems to track internal recovery data. Second, to meet FEMA reporting requirements, the central recovery office developed an administrative plan or FEMA State Agreement in 2019 for the Public Assistance program following the 2017 hurricanes. This plan outlines the central recovery office s management and oversight activities as well as the procedures that Puerto Rico must follow in implementing the programs. Puerto Rico is responsible, as required in the FEMA State Agreement, to ensure that subrecipients are in compliance with the conditions of the disaster grant award. For example, the plan emphasizes FEMA s requirement that Puerto Rico submit quarterly progress and financial reports on the status of projects. Further, the plan describes Puerto Rico s specific roles and responsibilities for managing and overseeing the program. For example, according to the Puerto Rico 2019 Public Assistance Administrative Plan, the central recovery office is responsible for, among other things, processing requests for time extensions to complete projects and conducting quarterly reviews, site inspections, and audits to ensure program compliance. Third, in addition to the administrative plan, in March 2019, the central recovery office released the Disaster Recovery Federal Funds Management Guide (management guide) that includes an internal controls plan and other policies and procedures for managing recovery funds. The management guide s 14 chapters outline roles, responsibilities, policies and procedures on various recovery functions including procurement, payment and cash management, and subrecipient management and oversight, among other things. FEMA officials told us that they reviewed portions of the management guide, including sections on the central recovery office s payment and cash management plan and subrecipient oversight. Further, FEMA worked with the central recovery office to make revisions to the plan, which included, adding clarifying information and correcting instances of duplication in the guidance, among other things. In addition, the central recovery office, with the help of contractors, is taking steps to assist subrecipients in meeting compliance requirements and supplementing their management capacity. FEMA and Puerto Rico government agency officials cited varying levels of capacity to manage federal grant funds, including Public Assistance funding. For example, agency officials at one government agency we spoke with stated that they were performing their own federal grants management and had prior experience managing large federal funds. 
Other Puerto Rico government officials we interviewed reported that central recovery office contractors have helped augment capacity to oversee federal funds. For example, officials from one subrecipient, a Puerto Rico public corporation, said that their agency did not have prior experience managing federal funds on such a large scale. The official told us that in order to bolster the capacity of the agency to oversee these grant funds, central recovery office contractors work closely with the agency to help them manage Public Assistance funding. Similarly, officials at one Puerto Rico government agency stated that the central recovery office offered help on uploading and validating grant data. Fourth, as detailed in its management guide, the central recovery office has also developed criteria to evaluate subrecipients risk of noncompliance with federal laws and regulations, as well as FEMA policy. According to the procedures outlined in the central recovery office s management guide, each subrecipient is to be assessed annually to determine whether they are at a low, moderate, or high risk for noncompliance. The central recovery office is to place additional award conditions on subrecipients with risk factors identified through the risk assessment process. These may include additional oversight and more frequent on-site visits from the central recovery office. Additionally, central recovery office guidance states that corrective actions are to be taken in cases when deficiencies are found during audits. <5.2. FEMA Has Instituted Additional Controls to Protect the Federal Investment in Puerto Rico s Recovery> In March 2019, we reported that FEMA instituted a manual reimbursement process in November 2017 for subrecipients in Puerto Rico for federal funds, including Public Assistance funds, to mitigate fiduciary risk and decrease the risk of misuse of funds. Specifically, FEMA officials stated that they decided to institute this process because the government of Puerto Rico had expended funds prior to submitting complete documentation of work performed. According to FEMA officials, they also decided to institute the manual reimbursement process due to Puerto Rico s financial situation, weaknesses in internal controls, and the large amount of recovery funds, among other things. The manual reimbursement process required that FEMA review each reimbursement request before providing Public Assistance funds to mitigate risk and help ensure financial accountability. In Puerto Rico, the manual reimbursement process requires that the central recovery office fill out the Office of Management and Budget s Standard Form 270 and submit supporting documentation to FEMA before obligated funds can be withdrawn by Puerto Rico through the central recovery office and reimbursed to subrecipients. Subsequently, FEMA must review the submitted Standard Form 270 and all project documentation for completeness, compliance, and accuracy before disbursing funds to the recipient. In cases where FEMA requires additional documentation to process a Standard Form 270 request, FEMA will submit requests for information asking the central recovery office to supply the information needed for FEMA to complete the review. On March 25, 2019, FEMA and the government of Puerto Rico, through the central recovery office, signed an agreement allowing the central recovery office to directly access federal grant funds and reimburse subrecipients for Public Assistance work they perform. 
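As a simplified illustration of the manual reimbursement review described above, the sketch below walks a hypothetical Standard Form 270 request through a documentation completeness check before funds are released. The required items, field names, and decision logic are invented for the example and do not reproduce FEMA's actual review procedure.

```python
# Simplified, hypothetical sketch of a manual reimbursement review flow
# (Standard Form 270 plus supporting documentation). The required items and
# logic are illustrative only, not FEMA's actual checklist.

REQUIRED_ITEMS = ("sf_270", "proof_of_payment", "scope_of_work", "eligibility_support")

def review_request(request: dict) -> dict:
    """Return a decision: disburse if documentation is complete and the
    amount requested does not exceed the remaining obligation; otherwise
    issue a request for information (RFI)."""
    missing = [item for item in REQUIRED_ITEMS if not request.get(item)]
    if missing:
        return {"decision": "request_for_information", "missing": missing}
    if request["amount_requested"] > request["remaining_obligation"]:
        return {"decision": "reduce_or_deny",
                "note": "request exceeds remaining obligated amount"}
    return {"decision": "disburse", "amount": request["amount_requested"]}

if __name__ == "__main__":
    example = {
        "sf_270": True, "proof_of_payment": True,
        "scope_of_work": True, "eligibility_support": False,
        "amount_requested": 250_000, "remaining_obligation": 400_000,
    }
    print(review_request(example))
```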
During FEMA s review of the central recovery office s management guide, FEMA asked for revisions to sections, including chapters related to payment and cash management and subrecipient management and monitoring. According to the March 2019 agreement, these policies and procedures were developed in collaboration with FEMA, and comments and concerns provided by FEMA were addressed. FEMA officials also told us that they sampled Public Assistance grant documentation for completeness to ensure that the reimbursement requested was eligible for payment. According to FEMA officials, FEMA communicated minor discrepancies with the central recovery office for resolution, but said that they did not find any significant discrepancies during their completeness review. On April 1, 2019, FEMA removed the manual reimbursement process and began a transition to allow the central recovery office to make direct payments to subrecipients. In July 2019, FEMA announced that it would reinstate the manual reimbursement process due to, ongoing leadership changes within the Puerto Rican government, combined with continued concern over Puerto Rico s history of fiscal irregularities and mismanagement. FEMA said that these additional steps are being taken in order to protect the federal investment in Puerto Rico s recovery. We previously reported that FEMA and central recovery office officials told us that the manual reimbursement process caused delays in reimbursements, but once FEMA increased the number of personnel devoted to reimbursement reviews, delays decreased. In September 2019, FEMA once again lifted the manual reimbursement process following a meeting between FEMA and Governor V squez s senior leadership. According to FEMA, the agreement to remove the manual reimbursement process is contingent on Puerto Rico s continued ability to implement the mutually-acceptable internal controls plan. FEMA officials also stated that they are selecting samples from fiscal year 2019 to test Puerto Rico s internal controls, and plan to move to a quarterly testing routine after testing for fiscal year 2019 is complete. As part of our ongoing review, we will continue to monitor the central recovery office s management and oversight of Public Assistance funding, as well as of FEMA s oversight of the federal investment in Puerto Rico s recovery. <6. Conclusions> After the devastation of the catastrophic 2017 hurricane season, FEMA and Puerto Rico face a recovery of enormous scope. Puerto Rico estimates that $132 billion in funding will be needed to repair and reconstruct the infrastructure damaged by the hurricanes through 2028, and FEMA has identified nearly ten thousand damaged sites in need of Public Assistance funding. FEMA has taken steps to adapt its guidance to estimate costs to post-disaster conditions in Puerto Rico, but strengthening its cost estimating guidance could help FEMA provide greater assurance that its cost estimating guidance for Public Assistance projects is reliable. In addition, given the large number of individuals and entities involved in Puerto Rico s complex recovery, ensuring that all recovery partners have easy access to the most current applicable policy and guidance could help clarify which FEMA guidance and policies are in effect. <7. 
Recommendations for Executive Action> We are making the following two recommendations to FEMA: The FEMA administrator should revise FEMA s cost-estimating guidance for Public Assistance projects to fully align with all 12 steps in the GAO Cost Estimating and Assessment Guide. (Recommendation 1) The FEMA administrator should develop a repository for all current applicable Public Assistance policies and guidance for Puerto Rico and make it available to all recovery partners, including subrecipients. (Recommendation 2) <8. Agency Comments and Our Evaluation> We provided a draft of this product to FEMA, DHS and Puerto Rico s Central Office of Recovery, Reconstruction, and Resilience (central recovery office) for comment. In its comments, reproduced in appendix III, DHS concurred with our recommendations. FEMA also provided technical comments, which we incorporated as appropriate. DHS concurred with our first recommendation that FEMA revise its cost- estimating guidance for Public Assistance projects to fully align with all 12 steps in the GAO Cost Estimating and Assessment Guide. DHS stated that FEMA will create a quality assurance checklist as an addendum to FEMA s Cost Estimating Format (CEF) to ensure that cost estimates reflect best practices from the GAO Cost Estimating and Assessment Guide. This action is a positive step to addressing our recommendation and we will monitor FEMA s efforts to complete this work. In DHS s concurrence to our second recommendation that FEMA develop a repository for all current applicable Public Assistance policies and guidance for Puerto Rico to be made available to all recovery partners, DHS requested that GAO consider this recommendation resolved and closed as implemented. DHS stated that FEMA maintains Public Assistance policy and guidance documents, including those specific to Puerto Rico, on the agency s public web site, which FEMA stated it will continue to update. DHS also stated that FEMA maintains non-publicly available reference documents on the agency s internal web site through the Grants Manager and Grants Portal systems. As we noted in our report, Puerto Rico s recovery is a complex and dynamic process that requires the coordination of many recovery partners, including numerous municipalities and commonwealth agencies. For this reason, ensuring that information is distributed in an accessible manner would provide greater assurance that all recovery partners are aware of the most current and applicable Public Assistance policies and guidance. We will monitor FEMA s public and internal web sites, including policy and guidance updates, to assess whether the actions outlined by FEMA meet the intent of our recommendation. COR3 also provided comments to our draft report, which we reproduced in appendix IV. In its comments, COR3 stated that it works with Public Assistance applicants to, among other things, provide technical assistance and training, and to monitor projects. COR3 also stated that it has established joint efforts with FEMA to improve COR3 s technical assistance, as well as compliance and monitoring efforts. We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, the Administrator of FEMA, the Puerto Rico government, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you and your staff have any questions, please contact me at (404) 679- 1875 or curriec@gao.gov. 
GAO staff who made key contributions to this report are listed in appendix V. Appendix I: The Status of Public Assistance Program Funding in Puerto Rico Since September 2017, the Federal Emergency Management Agency (FEMA) obligated nearly $6 billion in Public Assistance grant funding for 1,558 projects across Puerto Rico as of September 30, 2019. Specifically, FEMA had obligated $5.13 billion for emergency work projects (categories A and B), about $487 million for permanent work projects (categories C through G), and $315 million for management costs, (category Z). As of that date, Puerto Rico expended nearly $3.9 billion about 65 percent of total Public Assistance obligations to Puerto Rico to reimburse subrecipients for completed work. Of this, Puerto Rico expended about $3.7 billion (96 percent of all expended funds) for emergency work projects, $38.6 million (1 percent) for permanent work projects, and $104 million (3 percent) for management costs. The majority of FEMA s obligations and the funding Puerto Rico expended as of September 30, 2019 are for emergency work projects because these projects began soon after hurricanes Irma and Maria struck and focused on debris removal and providing assistance to address immediate threats to life and property. In contrast, permanent work projects take time to identify, develop, and ultimately complete as they represent the longer-term repair and restoration of public infrastructure. While the data in this appendix represent the status of Public Assistance funding as of September, 2019, the amount of grant funding FEMA obligates and Puerto Rico expends will likely increase over time as additional projects are finalized and approved. Emergency Work. As of September 30, 2019, FEMA obligated a total of $5.13 billion for approximately 1,200 emergency work projects across Puerto Rico. These projects focus on debris removal activities and providing assistance to address immediate threats to life and property. Category A: Debris Removal. FEMA obligated $637.0 million and Puerto Rico expended $427.1 million for 331 projects focused on debris removal activities in Puerto Rico under category A. Category B: Emergency Protective Measures. FEMA obligated nearly $4.5 billion for 871 projects under Category B. Of this, Puerto Rico has expended $3.29 billion. For example, FEMA has obligated more than $140 million to the Puerto Rico Aqueducts and Sewer Authority under category B to fund emergency protective measures, including using back-up generators to supply water to the island after Hurricane Maria, among other things. Permanent Work. As of September 30, 2019, FEMA has obligated about $487.3 million for 159 permanent work (Categories C through G) projects in Puerto Rico. These projects focus on the restoration of disaster- damaged infrastructure or systems. Category C: Roads and Bridges. FEMA obligated $140.5 million and Puerto Rico has expended $32.8 million for 20 projects focused on the permanent repair of roads and bridges in Puerto Rico, such as the damage illustrated in figure 8 below. Category D: Water Control Facilities. As of September 30, 2019, FEMA has obligated $435,493 for three projects, of which approximately $150,000 has been expended. This includes work on heavy water control infrastructure, such as berms or levees. Category E: Buildings and Equipment. 
FEMA obligated $43.5 million and Puerto Rico expended nearly $4 million for 87 projects focused on repairing and rebuilding damaged public buildings and equipment, such as the school shown in figure 9 below. Category F: Utilities. Of the $487 million FEMA obligated for permanent work projects, the largest share, $282 million was obligated for nine projects related to utilities, such as architectural and engineering design services for design work for electricity grid recovery projects. For example, in June 2019, FEMA obligated $111 million for architectural and engineering design services for design work for electricity grid recovery projects. Puerto Rico has expended just over $1 million of the funding obligated for projects related to repairing utilities. Category G: Parks, Recreational and Other Facilities. FEMA obligated approximately $20.9 million and Puerto Rico has expended just over $600,000 across 40 projects focused on repairing parks, playgrounds, and other facilities. Appendix II: Summary of GAO s Assessment of the Federal Emergency Management Agency s (FEMA) Cost Estimating Policies and Guidance GAO s Cost Estimating and Assessment Guide (GAO Cost Guide) outlines best practices pertaining to cost estimating principles, presenting 12 steps to create high-quality estimates. These steps are generally applicable in a variety of circumstances and range from defining the purpose of the estimate to obtaining data to presenting the estimate to management for approval. Application of these principles should result in reliable and valid cost estimates that management can use to make informed decisions. To assess the extent to which FEMA s cost estimating policy aligns with these best practices, we compared FEMA s information to the GAO Cost Guide. Specifically, we reviewed FEMA documents containing cost estimating information pertinent to Public Assistance projects including FEMA s Public Assistance Alternative Procedures Guide for Permanent Work FEMA-4339-DR-PR (Alternative Procedures Guide) and FEMA s Cost Estimating Format (CEF) for Large Projects Instructional Guide V2.1 (dated September 2009). We compared FEMA s guidance for developing cost estimates outlined in these documents against the 12 best practices described in the GAO Cost Guide. We assessed the extent to which these documents aligned with the best practices on a five point scale. Fully met. FEMA provided complete evidence that satisfies the elements of the step. Substantially met. FEMA provided evidence that satisfies a large portion of the elements of the step. Partially met. FEMA provided evidence that satisfies about half of the elements of the step. Minimally met. FEMA provided evidence that satisfies a small portion of the elements of the step. Not met. FEMA provided no evidence that satisfies any of the elements of the step. Taken together, FEMA s documents provided cost estimating information that either substantially or fully meets nine of the 12 cost estimating steps. Furthermore, the information partially met two of the 12 steps, and minimally met one of the 12 steps. Table 1 summarizes GAO s assessment of the extent to which FEMA s information aligns with the 12 steps identified in the GAO cost guide. Appendix III: Comments from the Department of Homeland Security Appendix IV: Comments from the Commonwealth of Puerto Rico Appendix V: GAO Contact and Staff Acknowledgments <9. GAO Contact> Chris Currie, (404) 679-1875 or curriec@gao.gov. <10. 
Staff Acknowledgments> In addition to the contact named above, Joel Aldape (Assistant Director), Taylor Hadfield (Analyst in Charge), Michelle Bacon, Brian Bothwell, Lorraine Ettaro, Eric Hauswirth, Heidi Nielson, Danielle Pakdaman, Amanda Prichard, Kevin Reeves, and Mary Weiland made key contributions to this report. GAO Related Products U.S. Virgin Islands Recovery: Additional Actions Could Strengthen FEMA's Key Disaster Recovery Efforts. GAO-20-54. Washington, D.C.: November 19, 2019. Disaster Resilience Framework: Principles for Analyzing Federal Efforts to Facilitate and Promote Resilience to Natural Disasters. GAO-20-100SP. Washington, D.C.: October 23, 2019. Disaster Recovery: Recent Disasters Highlight Progress and Challenges. GAO-20-183T. Washington, D.C.: October 22, 2019. Wildfire Disasters: FEMA Could Take Additional Actions to Address Unique Response and Recovery Challenges. GAO-20-5. Washington, D.C.: October 9, 2019. Puerto Rico Electricity Grid Recovery: Better Information and Enhanced Coordination Is Needed to Address Challenges. GAO-20-141. Washington, D.C.: October 8, 2019. Emergency Management: FEMA's Disaster Recovery Efforts in Puerto Rico and the U.S. Virgin Islands. GAO-19-662T. Washington, D.C.: July 11, 2019. 2017 Disaster Relief Oversight: Strategy Needed to Ensure Agencies' Internal Control Plans Provide Sufficient Information. GAO-19-479. Washington, D.C.: June 28, 2019. Emergency Management: FEMA Has Made Progress, but Challenges and Future Risks Highlight Imperative for Further Improvements. GAO-19-617T. Washington, D.C.: June 25, 2019. Emergency Management: FEMA Has Made Progress, but Challenges and Future Risks Highlight the Imperative for Further Improvements. GAO-19-594T. Washington, D.C.: June 12, 2019. Disaster Assistance: FEMA Action Needed to Better Support Individuals Who Are Older or Have Disabilities. GAO-19-318. Washington, D.C.: May 14, 2019. Disaster Contracting: Actions Needed to Improve the Use of Post-Disaster Contracts to Support Response and Recovery. GAO-19-281. Washington, D.C.: April 24, 2019. 2017 Hurricane Season: Federal Support for Electricity Grid Restoration in the U.S. Virgin Islands and Puerto Rico. GAO-19-296. Washington, D.C.: April 18, 2019. FEMA Grants Modernization: Improvements Needed to Strengthen Program Management and Cybersecurity. GAO-19-164. Washington, D.C.: April 9, 2019. Disaster Recovery: Better Monitoring of Block Grant Funds Is Needed. GAO-19-232. Washington, D.C.: March 25, 2019. Puerto Rico Hurricanes: Status of FEMA Funding, Oversight, and Recovery Challenges. GAO-19-256. Washington, D.C.: March 14, 2019. Huracanes de Puerto Rico: Estado de Financiamiento de FEMA, Supervisión y Desafíos de Recuperación. GAO-19-331. Washington, D.C.: March 14, 2019. High-Risk Series: Substantial Efforts Needed to Achieve Greater Progress on High-Risk Areas. GAO-19-157SP. Washington, D.C.: March 6, 2019. U.S. Virgin Islands Recovery: Status of FEMA Public Assistance Funding and Implementation. GAO-19-253. Washington, D.C.: February 25, 2019. 2017 Disaster Contracting: Action Needed to Better Ensure More Effective Use and Management of Advance Contracts. GAO-19-93. Washington, D.C.: December 6, 2018. Continuity of Operations: Actions Needed to Strengthen FEMA's Oversight and Coordination of Executive Branch Readiness. GAO-19-18SU. Washington, D.C.: November 26, 2018. Homeland Security Grant Program: Additional Actions Could Further Enhance FEMA's Risk-Based Grant Assessment Model. GAO-18-354.
Washington, D.C.: September 6, 2018. 2017 Hurricanes and Wildfires: Initial Observations on the Federal Response and Key Recovery Challenges. GAO-18-472. Washington, D.C.: September 4, 2018. Federal Disaster Assistance: Individual Assistance Requests Often Granted but FEMA Could Better Document Factors Considered. GAO-18-366. Washington, D.C.: May 31, 2018. 2017 Disaster Contracting: Observations on Federal Contracting for Response and Recovery Efforts. GAO-18-335. Washington, D.C.: February 28, 2018. Disaster Recovery: Additional Actions Would Improve Data Quality and Timeliness of FEMA's Public Assistance Appeals Processing. GAO-18-143. Washington, D.C.: December 15, 2017. Disaster Assistance: Opportunities to Enhance Implementation of the Redesigned Public Assistance Grant Program. GAO-18-30. Washington, D.C.: November 8, 2017. Climate Change: Information on Potential Economic Effects Could Help Guide Federal Efforts to Reduce Fiscal Exposure. GAO-17-720. Washington, D.C.: September 28, 2017. Federal Disaster Assistance: Federal Departments and Agencies Obligated at Least $277.6 Billion during Fiscal Years 2005 through 2014. GAO-16-797. Washington, D.C.: September 22, 2016. Disaster Recovery: FEMA Needs to Assess Its Effectiveness in Implementing the National Disaster Recovery Framework. GAO-16-476. Washington, D.C.: May 26, 2016. Disaster Response: FEMA Has Made Progress Implementing Key Programs, but Opportunities for Improvement Exist. GAO-16-87. Washington, D.C.: February 5, 2016. Hurricane Sandy: An Investment Strategy Could Help the Federal Government Enhance National Resilience for Future Disasters. GAO-15-515. Washington, D.C.: July 30, 2015. Budgeting for Disasters: Approaches to Budgeting for Disasters in Selected States. GAO-15-424. Washington, D.C.: March 26, 2015. High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015. Emergency Preparedness: Opportunities Exist to Strengthen Interagency Assessments and Accountability for Closing Capability Gaps. GAO-15-20. Washington, D.C.: December 4, 2014. Fiscal Exposures: Improving Cost Recognition in the Federal Budget. GAO-14-28. Washington, D.C.: October 29, 2013. Federal Disaster Assistance: Improved Criteria Needed to Assess a Jurisdiction's Capability to Respond and Recover on Its Own. GAO-12-838. Washington, D.C.: September 12, 2012. Why GAO Did This Study
In September 2017, two major hurricanes—Irma and Maria—struck Puerto Rico, destroying roads and buildings among other things. Puerto Rico estimates that $132 billion will be needed to repair and reconstruct infrastructure and services through 2028. FEMA is the lead federal agency responsible for assisting Puerto Rico to recover from these disasters. FEMA administers the Public Assistance program in partnership with Puerto Rico to provide funds to rebuild damaged infrastructure and restore services. GAO was asked to review federal recovery efforts in Puerto Rico.
In this report, GAO examines, among other things, (1) the status of FEMA Public Assistance program funding and any challenges in implementing the program, (2) the extent to which Public Assistance cost estimating guidance addresses conditions in Puerto Rico and aligns with best practices, and (3) the extent to which FEMA has developed policies and guidance for the program and any challenges with these policies and guidance. GAO reviewed FEMA's cost estimation guidance as well as documentation and data on the Public Assistance program through September 2019. GAO conducted site visits to Puerto Rico and interviewed FEMA and Puerto Rico government officials regarding the status of recovery efforts.
What GAO Found
As of September 30, 2019, the Federal Emergency Management Agency (FEMA) had obligated nearly $6 billion in Public Assistance grants to Puerto Rico for 1,558 projects since the September 2017 hurricanes. Of this $6 billion, $5.1 billion was obligated for emergency work projects such as debris removal and temporary power restoration. However, FEMA and Puerto Rico faced challenges in developing long-term, permanent work projects under the Public Assistance program. The large number of damaged sites and delays in establishing cost estimation guidance specific to Puerto Rico have also presented challenges to developing projects, according to FEMA and Puerto Rico officials. Both parties must agree to fixed cost estimates for these projects before work can begin. FEMA and Puerto Rico had approved fixed cost estimates for 19 projects as of September 2019, out of 9,344 damaged sites in Puerto Rico, such as schools, hospitals, and roads. FEMA and Puerto Rico have recently taken actions, including extending the deadline for fixed cost estimates, to address these challenges. However, it is too soon to assess the impact of these actions.
FEMA has adapted its Public Assistance cost estimating guidance to accurately reflect costs in Puerto Rico but could improve the guidance to further enhance its reliability. GAO found that FEMA's guidance substantially or fully met best practices for nine of 12 steps included in the GAO Cost Estimating and Assessment Guide, such as documenting and defining the purpose of the estimate. However, FEMA could improve the guidance in three areas, including analyzing risks and future uncertainties that could affect these estimates.
FEMA has developed Public Assistance policies and guidance to respond to complex recovery conditions in Puerto Rico. However, Puerto Rico government officials GAO spoke with stated that they were not always certain about how to proceed in accordance with FEMA policy because they did not consistently understand what guidance was in effect. Further, FEMA does not maintain a repository of Public Assistance guidance available to all recovery partners that includes current applicable guidance. Without real-time access to current applicable guidance, recovery partners risk using guidance that has been revised or replaced.
What GAO Recommends
GAO recommends that FEMA (1) revise its cost estimating guidance for Public Assistance to more fully adhere to best practices and (2) develop a repository of current applicable Public Assistance guidance available to all relevant recovery partners in Puerto Rico. The Department of Homeland Security concurred with these recommendations.
<1. Background> <1.1. Zika Transmission and Effects> Zika is spread to people primarily through the bite of an infected mosquito but can also be transmitted from mother to child during pregnancy or from person to person through sexual contact or blood transfusion. The disease can cause symptoms that include fever, rash, conjunctivitis ("pink eye," where the eyes appear red or pink), and joint and muscle pain. Although most people with Zika have only mild symptoms or none at all, Zika in pregnant women has been linked to adverse pregnancy outcomes, such as miscarriage and stillbirth, and severe birth defects. Zika can be passed to the fetus and cause a birth defect of the brain called microcephaly and other severe brain defects, according to CDC. Zika is also linked to other problems such as Guillain-Barré syndrome, an uncommon condition of the nervous system. Although at present no vaccine has been approved by the U.S. Food and Drug Administration to prevent Zika, several vaccines are in different phases of development. <1.2. The Zika Epidemic> Zika was first identified in the Zika Forest in Uganda in 1947 and caused only sporadic human disease until 2007. In 2007, Zika was detected in Yap State, Federated States of Micronesia, and subsequent outbreaks occurred in Southeast Asia and the Western Pacific. In 2014, Zika spread east across the Pacific Ocean to French Polynesia, then to Easter Island. In May 2015, Brazil documented the first case of locally acquired Zika transmission in the Americas. See figure 1 for a timeline of the Zika outbreak and the U.S. Zika response overseas. According to the World Health Organization (WHO), in November 2015, Suriname, El Salvador, Guatemala, Mexico, Paraguay, and Venezuela reported cases of locally acquired Zika, followed by Panama, Honduras, French Guiana, Martinique, and Puerto Rico in December 2015. Zika continued to spread throughout the region, and on February 1, 2016, WHO declared that the recent association of Zika with clusters of microcephaly and other neurological disorders constituted a public health emergency of international concern. In November 2016, WHO declared an end to the public health emergency of international concern regarding microcephaly, other neurological disorders, and Zika. However, WHO announced that Zika and the associated health outcomes remained a significant public health challenge requiring intense action. Zika spread to multiple countries throughout the globe but primarily affected countries in Latin America and the Caribbean region. According to WHO, as of March 2017, transmission of the Zika virus was occurring in 79 countries or territories, most of which are located in the Western Hemisphere. According to WHO, from 2015 to 2017, there were approximately 583,000 suspected and 223,000 confirmed cases of Zika virus transmission in the Western Hemisphere. See figure 2 for the cumulative Zika incidence rates in each country in Latin America and the Caribbean from 2015 to 2017. <1.3. U.S. Response to Zika Overseas> In February 2016, the President submitted a request to Congress for emergency funding to enhance ongoing U.S. efforts to prepare for and respond to Zika, including a request for funding for USAID and State to respond to the outbreak overseas. In addition, in April 2016, USAID and State notified Congress of their intent to repurpose $215 million of fiscal year 2015 supplemental Economic Support Fund Ebola funding for the U.S.
Zika response overseas, which included $78 million for CDC international Zika activities. In September 2016, Congress appropriated about $175.1 million in supplemental funding to USAID and State in the Zika Response and Preparedness Appropriations Act, 2016, for the U.S. Zika response overseas. USAID activities initially began in five countries (Haiti, Honduras, Guatemala, El Salvador, and Dominican Republic) based on an assessment of their Zika risk and limited host government capacity to prevent the spread and respond to the impact of the virus. USAID ultimately supported activities in 26 countries in the Latin America and Caribbean region. <1.4. USAID and State Obligated Almost All Funding Available for the Zika Response but Did Not Report Funding by Country> USAID and State Obligated Almost All Funding Available for the Zika Response and Disbursed Approximately Two-Thirds. As of September 30, 2018, USAID and State had obligated about $385 million (99 percent) of the total $390 million available for the U.S. Zika response overseas and had disbursed approximately $264 million (68 percent). Specifically, USAID had obligated all of its funds available for the Zika response and disbursed about two-thirds, and State had obligated and disbursed more than three-quarters of its funding for Zika. USAID and State had disbursed a higher proportion of the repurposed Ebola funds than the funds appropriated in the Zika Response and Preparedness Appropriations Act, 2016. See figure 3 for USAID and State Zika response funding appropriations, obligations, and disbursements as of September 30, 2018. Of the $215 million in repurposed Ebola funds, USAID and State had obligated $215 million (100 percent) and had disbursed almost $201 million (93 percent) as of September 30, 2018. Of the approximately $175 million appropriated in the Zika Response and Preparedness Appropriations Act, 2016, USAID and State had obligated about $170 million (about 97 percent) and had disbursed about $63 million (36 percent) as of September 30, 2018. <1.5. USAID and State Track Zika Funding by Account and Activity> As of September 30, 2018, USAID had obligated all funds available for the Zika response and had disbursed about two-thirds, from three accounts. USAID has two sources of funding for Zika response activities: $211 million of fiscal year 2015 supplemental Economic Support Fund Ebola funding repurposed for the Zika response and about $155.5 million provided in the Zika Response and Preparedness Appropriations Act, 2016 ($145.5 million and $10.0 million through the Global Health Programs and Operating Expenses accounts, respectively), for a total of $366.5 million. As of September 30, 2018, USAID had obligated approximately $366.5 million (100 percent) and had disbursed approximately $245 million (67 percent), from the Economic Support Fund, Global Health Programs, and Operating Expenses appropriations accounts. See figure 4 for USAID Zika response funding obligations and disbursements by account. USAID obligated all funding for Zika response activities within a year after it was repurposed or appropriated. As of September 30, 2018, USAID had disbursed a higher proportion of repurposed fiscal year 2015 supplemental Economic Support Fund Ebola funding (93 percent) compared with Global Health Programs and Operating Expenses funding (28 percent and 72 percent, respectively), which was appropriated in the Zika Response and Preparedness Appropriations Act, 2016, in September 2016.
The $211 million in Economic Support Fund obligations supported 56 USAID activities, as well as a $78 million interagency transfer to CDC. The $145.5 million in Global Health Programs obligations supported 25 activities and program support. Obligations for USAID-supported activities ranged from $12,000 to $37 million and included support for activities such as the procurement of insect repellent to assist pregnant women in avoiding Zika infection and strengthening the ability of civil society and community networks to disseminate information related to Zika. CDC supported 25 activities that ranged from $276,000 to $13.6 million, including activities such as collecting and analyzing public health data, conducting epidemiological studies to better understand the prevalence of Zika and related risk factors, building laboratory capacity, and providing training to conduct Zika virus testing. As of September 30, 2018, State had obligated and disbursed more than three-quarters of funding available for the Zika response, from two accounts. State has two sources of funding for Zika response activities: $4 million from a fiscal year 2015 supplemental Economic Support Fund appropriation for the Ebola response that was repurposed for the Zika response and about $19.6 million provided in the Zika Response and Preparedness Appropriations Act, 2016, the majority of which was provided through the Diplomatic and Consular Programs account, for a total of about $23.6 million. As of September 30, 2018, State had obligated and disbursed about $18.3 million (almost 78 percent) from the Economic Support Fund and Diplomatic and Consular Programs accounts. See figure 5 for State Zika response funding obligations and disbursements by account. Under the Zika Response and Preparedness Appropriations Act, 2016, State was provided almost $14.6 million through the Diplomatic and Consular Programs account, $4 million through the Emergencies in Diplomatic and Consular Services account, and $1 million through the Repatriation Loans Program account, for a total of almost $19.6 million. In September 2017, State notified Congress of its intent to transfer the $4 million from the Emergencies in Diplomatic and Consular Services account and $870,000 from the Repatriation Loans Program account to the Diplomatic and Consular Programs account. These transfers resulted in a total of $19.5 million available under the Diplomatic and Consular Programs account and $130,000 under the Repatriation Loans Program account. The $4 million in Economic Support Fund obligations supported research and development activities by the International Atomic Energy Agency to control disease-carrying mosquito populations. The $14.3 million in Diplomatic and Consular Programs obligations supported activities including medical evacuations to protect the health of pregnant U.S. government personnel and eligible family members, mosquito abatement training and other measures to reduce Zika risk to overseas staff, as well as public diplomacy efforts to further inform journalists and the public about the U.S. response to Zika. <1.6. Agencies Did Not Track or Report Zika Funding by Country> In their reporting to Congress on the uses of Zika funds, USAID and State included some country information but did not track or provide information on funding uses broken down on a country basis. 
In October 2016, USAID and State submitted a consolidated report to the appropriations committees on the anticipated uses of funds made available to USAID and State by the Zika Response and Preparedness Appropriations Act, 2016, in response to a reporting requirement in Section 203 of the act. After the initial submission, the act required the agencies to update and submit the report to the committees on appropriations every 60 days until September 30, 2017. The initial report described ongoing Zika response activities in five countries as well as planned activities in additional countries. Subsequent reports listed specific countries where USAID and State supported Zika response activities. However, USAID and State did not provide information to Congress on the uses of funding appropriated by the Zika Response and Preparedness Appropriations Act, 2016, broken down by country. The reports also included obligation and disbursement information for the fiscal year 2015 supplemental Economic Support Fund Ebola funding that was repurposed for the international Zika response; however, similar to the information provided regarding the funds appropriated by the Zika Response and Preparedness Appropriations Act, 2016, the reports information on the use of the repurposed Ebola funds was also not broken down by country. USAID officials told us that Zika activities were designed to be implemented on a regional and multicountry basis. While over 95 percent of all U.S. government funds available for the Zika response overseas were obligated by USAID, and the agency had a number of financial tracking systems in place, the agency did not take steps to record its funding by country at the outset of Zika response programming. Specifically, USAID officials noted that the contracts and grants the agency had signed with its implementing partners did not include provisions requiring partners to provide information to USAID that broke down their use of funds by country. Consequently, USAID was unable to track the uses of Zika funds on a country basis. Federal internal control standards state that management should use and communicate the necessary quality information both internally and externally to achieve the entity s objectives and address related risks. According to USAID officials, tracking information on the uses of Zika response funding broken down by country would be helpful in the future for mission directors, chiefs of missions, and partner-country ministries of health, some of whom have requested this information. Moreover, data on USAID funding to address future infectious disease outbreaks if broken down by uses in each country could provide additional useful information to decision makers in assessing risks and planning responses. The ability to compile funding by country when responding to future infectious disease outbreaks would enable USAID to provide key decision makers, including Congress and agency officials, with additional information to better support spending oversight and inform budgetary and planning decisions. <2. USAID and State Supported a Broad Range of Activities in Response to Zika> <2.1. USAID Supported Mosquito Control, Public Awareness, Capacity Building, and Research Activities> As part of the U.S. Zika response overseas, USAID provided assistance to several countries in the Caribbean, Central America, and South America and conducted a variety of activities related to mosquito control, public awareness, capacity building, and research. <2.1.1. 
Mosquito Control> In support of mosquito control, USAID s Zika AIRS Project (ZAP) conducted activities that included Entomological monitoring: collecting and reporting information on the location and population of mosquitoes; Larviciding: placing agents that kill mosquito eggs in likely breeding sites, such as water receptacles; Source reduction interventions: facilitating the removal or mitigation of likely breeding sites, such as tires, pots, barrels, or anything that may allow for standing water; and Indoor residual spraying: spraying insecticide that has a lasting effect in houses. We observed mosquito control activities during our fieldwork. For example, in Honduras we followed a team as they went house to house to implement and facilitate mosquito control activities. They collected information from mosquito egg traps, which serve as indicator of breeding activity, and recorded it for monitoring purposes. They also examined the premises for potential mosquito breeding sites, treated susceptible areas such as wash basins with larvicide, and spoke with residents about picking up trash and covering outdoor plant pots to reduce potential breeding sites. To support raising public awareness of the risk of Zika virus and to promote behavior change to reduce the spread of the disease, USAID implementing partners such as the Red Cross and CARE told us that they collaborated with communities, local government, and schools to communicate information about Zika. For example, in Trinidad, the Red Cross conducted educational campaigns at schools to improve students awareness. During our fieldwork, we observed a session led by adult volunteers during which children played games and engaged in discussions designed to teach Zika prevention and response methods. Implementing partners told us that the impact of such efforts extends beyond those reached directly; for example, they said the children who learned about Zika risks and prevention also conveyed the knowledge to their families, who in turn may pass it on to friends or others in the community. In Peru, CARE worked with schools to develop written education guides for application in the classroom and conducted communication campaigns. During our fieldwork, we went to schools and observed students delivering oral presentations on Zika risks and prevention. In addition, we witnessed other student activities, such as classroom discussions and art projects focused on Zika, designed to demonstrate understanding, raise awareness, and promote behavior change. <2.1.2. Capacity Building> To support capacity building, the Applying Science to Strengthen and Improve Systems (ASSIST) activity, which USAID funding supported, focused on improving Zika-related health services. Specific efforts included conducting a baseline assessment of the quality of care, improving clinical guidelines, training health care providers, and implementing a quality improvement program. During our fieldwork in Honduras, we visited a hospital and met with ASSIST-supported health workers who told us that they applied new guidance in their practice, and as a result, improved care in areas including counseling, screening, diagnosis, and follow-up of those affected by Zika. We also visited a hospital in Dominican Republic, where health care workers stated that they collaborated with ASSIST in responding to Zika by training staff and producing guidance materials. These activities raised awareness, increased prevention efforts, and improved care, according to health care workers. <2.1.3. 
Research> USAID supported research, training, and innovation activities through its Grand Challenge program as well as its interagency agreement with CDC. USAID launched a series of Grand Challenge efforts, providing $30 million in grants to foster innovation on new methods and technologies to respond to Zika. One grant, for example, supported the World Mosquito Program s research into the feasibility and effectiveness of infecting mosquitoes with bacteria to hinder transmission of the Zika virus. We visited the program s operations in Colombia, met with scientists, and observed the breeding lab. Program scientists told us that initial efforts have been promising and that if more tests prove successful, the potential for reducing Zika transmission could be significant. Another USAID Grand Challenge grant supports research into the possible use of genetically modified yeast to prevent mosquito eggs from hatching. We spoke with scientists, lab technicians, and viewed facilities supported by this grant in Trinidad during our field work. Scientists stated that yeast attracts mosquitoes and is inexpensive, commonly available, and environmentally friendly. Testing is ongoing, but if successful, the approach could help reduce populations of mosquitoes in critical areas, according to the scientists. The USAID CDC interagency agreement identifies a range of activities that involve technical assistance to help strengthen surveillance, emergency operations and management, and epidemiological investigations and research. One CDC activity, for example, focuses on supporting public health surveillance and epidemiological studies to better understand the prevalence and risk factors for severe health outcomes related to Zika. Another activity aims to build laboratory capacity in areas such as Zika diagnostic test production and distribution. In addition, the objectives of CDC s Field Epidemiology Training Program are to train qualified professionals, build sustainable capacity for detecting and responding to health threats, and develop in-country expertise so that disease outbreaks can be detected locally and prevented from spreading. In Dominican Republic, CDC officials told us that this program delivers 3 months of classroom and field project training, and that as of August 2018, four cohorts of approximately 80 students each had completed the training. CDC officials told us that in addition to implementing various activities, CDC s Central America Regional Office in Guatemala played an important role in facilitating U.S. government cooperation with Colombia, which had the second largest outbreak of Zika after Brazil. <2.2. Implementing Partners Reported Various Results from Selected Activities> We reviewed status reports for six USAID activities that received among the highest amounts of funding, and each identified various results. Below, we describe the activities and examples of reported results. For more information, see appendix II. ASSIST: This activity sought to strengthen Zika-related health services and systems in Latin America and the Caribbean with a focus on pregnant women, newborns, and women of reproductive age. ASSIST reported that it conducted virtual and in-person training, courses, and workshops on Zika prevention, diagnosis, and care. 
ASSIST also reported that 8,133 health care workers had been trained as of March 2017, and that its efforts had supported the development of Zika care protocols and guidelines with a new emphasis on clinical care and support for affected infants and families. ASSIST further reported that through March 2018, 75 percent of children affected by Zika in Dominican Republic received specialized care at Hospital Infantil Robert Reid Cabral, an ASSIST- supported hospital in the capital, Santo Domingo. Red Cross: This activity aimed to reduce risks associated with Zika infection through community involvement, sharing lessons learned, and improving practices. The Red Cross reported that its communication efforts reached approximately 3,000 students, 29 communities, and almost 140,000 people via TV, radio, and social media engagement, providing them with information on risk and protection methods. Zika AIRS Project (ZAP): This is a mosquito control activity focused on reducing Zika transmission in Latin America and the Caribbean. Specific activities supported by USAID funding included entomological monitoring, larviciding, source reduction interventions, and indoor residual spraying. ZAP reported that five countries (El Salvador, Guatemala, Haiti, Honduras, and Jamaica) implemented comprehensive mosquito control activities. Population Services International: The purpose of this activity was to improve the capacity and raise awareness of people in countries affected by and at risk of Zika and other vector-borne diseases. Population Services International reported that through March 2018, 35 health providers in Dominican Republic, El Salvador, and Guatemala had been trained in raising awareness about Zika prevention and the use of printed educational materials. In addition, 1,006 pregnant women received counseling on Zika prevention, and 967 received prevention kits containing condoms, mosquito repellent, and printed educational materials. Additionally, 227 pharmacy attendants from 195 pharmacies received information on Zika prevention. Save the Children s Community Action on Zika (CAZ): The goal of this project was to reduce Zika transmission and minimize the risk of Zika-related microcephaly and other neurological disorders. The project focused on helping the most vulnerable through community- based prevention strategies in Colombia, Dominican Republic, and three Central American countries. CAZ reported that it had reached approximately 65,000 students and trained 3,838 community agents and volunteers who supported efforts to strengthen the capacity to prevent Zika in 921 communities. United Nations Children s Fund (UNICEF): This activity focused primarily on four countries: Guatemala, El Salvador, Honduras, and Dominican Republic. UNICEF worked to promote the adoption of prevention behaviors among at-risk populations through actions to raise awareness at multiple levels: individual, interpersonal, community, institutional, and national policy levels. UNICEF reported that these efforts reached more than 5.5 million people with key risk- communication messages and more than 150,000 people through coordinated social mobilization and person-to-person communication. For example, in Guatemala, UNICEF worked with a local partner to train young people and adolescents in schools and social groups to lead prevention activities in their communities. Moreover, around 25,000 pregnant women benefited from counseling sessions on Zika- prevention behaviors. <2.3. 
State Conducted Public Awareness Initiatives and Medical Evacuations> In response to Zika, State conducted public awareness and communication initiatives, medical evacuations for overseas staff, and other activities. According to a State official, State conducted Zika-related public outreach to U.S. citizens abroad through social media and the Smart Traveler Enrollment Program, a service that provides information from U.S. embassies about local safety conditions. According to a State official, State also implemented public diplomacy activities related to Zika awareness and communication. For example, one activity aimed to raise awareness of vector-borne diseases such as Zika and collect information on insect breeding grounds. Another supported the addition of a science envoy who focused specifically on Zika and mosquito-borne diseases. In addition, according to a State official, State conducted Zika-related medical evacuations as part of those normally offered to female staff who became pregnant while serving abroad. State's medical services division also supported overseas posts by purchasing and distributing mosquito repellent. State officials also told us that they coordinated Zika response efforts internally and externally. For example, State participated in a U.S. government interagency group led by CDC to exchange information on Zika and coordinated with other agencies on the response effort. <2.4. USAID Took Steps to Address Sustainability Challenge but Only Partially Mitigated Challenge to Timely Implementation> USAID Implementing Partners Aligned Their Activities with Host Governments and Involved Local Communities to Address Sustainability Challenge. Over the course of our fieldwork, USAID and implementing partner officials identified two key challenges to the implementation of Zika response activities. The first was the long-term sustainability of Zika response activities. The second was the timely implementation of Zika response activities in countries without bilateral USAID health programs. While USAID took steps to address the challenge related to sustainability, it only partially mitigated the challenge to timely implementation of Zika response activities in countries without bilateral USAID health programs. Agency and implementing partner officials identified the sustainability of Zika response efforts as a key challenge. While USAID did not intend to continue U.S. Zika response activities after the one-time emergency funding, sustainability was a consideration and posed a challenge due to the short implementation time frame, according to agency and implementing partner officials. One official further elaborated that Zika funding efforts occurred during the acute phase of the outbreak, which made it difficult to focus on long-term needs. For example, an implementing partner said that Zika-affected children require long-term care that host country governments may not be able to support after U.S. assistance ends. In addition, host country government officials, U.S. government officials, and implementing partners said that some Zika activities may not be sustainable after U.S. assistance is finished due to a lack of funds and limited capacity to continue the work. To address this challenge and support the long-term continuation of Zika response activities, implementing partners aligned their activities with those of host country governments and other organizations.
Implementing partners reported working with governments and other organizations to incorporate Zika activities into their plans and practices so they could continue over the long term. One implementing partner and the Dominican Republic s Ministry of Health, for example, planned mosquito control efforts together, and a Ministry of Health official said they intend to continue those control efforts after the end of Zika funding. Implementing partners in various countries also stated that Zika activities brought broader benefits to mosquito control, disability services, maternal health care, surveillance efforts, and emergency preparedness, which facilitated partners efforts to align their Zika response activities. For example, an implementing partner reported using Zika funding to develop organizational guidelines for treating Zika-affected children, which will be used by the health care system in Dominican Republic to treat children with related disabilities in the long term. According to some implementing partners in countries we visited, they developed Zika protocols and guidelines in response to new scientific information, trained government and other personnel on the protocols, and worked with officials of host country governments and other organizations to encourage adoption of Zika activities. For example, according to an agency official, an implementing partner in Peru developed a curriculum for epidemiologists and trained them on how to detect and contain mosquito-borne diseases, such as Zika. The agency official said that the implementing partner shared the training curriculum and materials with Peru s Ministry of Health so it could continue the trainings after the end of Zika funding. According to implementing partners, they also involved local communities in activities to increase community ownership and address sustainability. For example, an implementing partner official said they trained a cadre of community volunteers in Guatemala and El Salvador on behavior change practices so that they can continue activities after the end of Zika funding. In addition, implementing partner officials said that engaging with communities to learn about needs and resources is important to continued community interest in activities. For example, an implementing partner that works with communities on health priorities developed an approach that includes a toolkit for identifying a community s specific risks for Zika and the efforts best suited to helping the community eradicate mosquito breeding sites. In places affected by violence, some implementing partners engaged with communities to better understand how to prioritize community worker and volunteer safety to enable the continuation of activities. For example, an implementing partner in Guatemala engaged with local communities to understand areas they recommended health workers avoid due to safety concerns. <2.5. USAID Only Partially Mitigated the Challenge to Timely Implementation in Some Countries Where It Did Not Have Health Programs> Agency and implementing partner officials described timely implementation of activities in some countries without bilateral USAID health programs as a second key challenge. Twenty-two out of the 26 countries where USAID implemented its Zika response activities were countries without bilateral USAID health programs. 
USAID officials stated that, as a result, there were no USAID health program officials present in these countries to build on relationships with host country health officials and help facilitate the start of implementing partners activities during the Zika response. USAID officials noted two reasons that working with host country governments took time. First, some U.S. Zika response activities started after a decline in Zika cases, when some host country governments were no longer as focused on countering the disease. Implementing partners responded to this situation by identifying related health service improvements that could stem from implementing a Zika response and were of interest to the host country governments. Second, agency and implementing partner officials said that in some countries without bilateral USAID health programs it also took time to identify the appropriate points of contact and establish relationships preliminary steps needed to obtain approval from the host country government before activities could get underway. According to USAID officials, these relationships are critical to navigating bureaucratic systems and assist in designing activities that meet the needs of host country governments and communities, which are needed for timely implementation. USAID took some steps to address the timely implementation challenge in countries without bilateral health programs. For example, according to USAID officials, USAID worked with multilateral partners that had a health presence in those countries and relied on regional field-based Zika coordinators to build relationships with in-country points of contact. As noted above, however, agency officials indicated that Zika response activities took additional time to deploy in some of the countries without bilateral USAID health programs. Further, implementing partners reported it took additional time to start up activities in those countries because of the time it took to obtain approval for them from the ministries of health. For example, one implementing partner reported that activity startup was postponed for nearly 3 months until it received approval from the host country government. Another implementing partner said it was a challenge to get information on Zika from the host country government or establish dialogue until USAID officials became involved. USAID officials also said that efforts to start and integrate Zika response activities in countries with ongoing USAID health programs did not face a number of the obstacles to timely implementation experienced in countries without bilateral USAID health programs. According to federal internal control standards, agencies should design control activities, such as a plan, to achieve their objectives and address related risks, such as the challenge related to timely implementation. In an effort to enhance its planning for outbreaks, USAID developed an infectious disease response plan in July 2018 during the time frame of our review. However, the plan does not provide specific guidance on how to address the challenge of initiating emergency response activities in countries without bilateral USAID health programs, such as by noting particular practices that implementing partners and other officials can use to address that challenge. 
For example, our fieldwork and interviews with USAID officials indicate that the following may be helpful practices for infectious disease response: Immediately establish an in-country working group that includes implementing partners, host country government officials, and U.S. government officials to help initiate and coordinate outbreak response. Communicate a current list of health ministry and other relevant government officials to implementing partners and other officials so they can quickly identify the appropriate points of contact. According to USAID officials, USAID missions maintain regular contact with host country governments, maintain contact lists, and participate in coordination meetings. However, in the case of overseas Zika response, some implementing partner officials in the field told us that they did not initially know who to contact in the host country government. Likewise, a host country government official told us that a working group on Zika outbreak response was not established until after officials recognized that implementing partner and host country government officials did not have regular channels of communication. By taking steps to improve planning for countries without bilateral USAID health programs such as by adding specific guidance for initiating emergency response activities in such countries to its July 2018 plan USAID would be better positioned to quickly build relationships with health ministry and other key government officials in host countries and thus be better able to provide a timely infectious disease response to future outbreaks. <3. Conclusions> The Zika virus quickly spread to dozens of countries in 2015 and 2016, prompting WHO to declare the virus and associated health risks an international public health emergency. As future infectious disease outbreaks arise, Congress will be called on to fund overseas response efforts, as it did with the Zika outbreak, and USAID is likely once again to play a vital role in those efforts. Because USAID did not provide key decision makers with information on how Zika funding was distributed across the various countries where it conducted response activities, decision makers lack visibility into a key aspect of the overall U.S. Zika response overseas. The ability to compile this information by country when responding to future infectious disease outbreaks would enable USAID to provide key decision makers, including Congress and agency officials, with additional information to better support spending oversight and inform budgetary and planning decisions. Further, while USAID took steps to address the challenge of sustaining Zika response activities over the long term, it did not fully mitigate the challenge of timely implementation of activities in countries without bilateral USAID health programs. As a result, the agency s response to Zika took additional time in some countries without bilateral USAID health programs. Infectious disease response planning that addresses countries without bilateral USAID health programs would better position USAID to quickly respond to infectious disease outbreaks, such as Zika, whenever the need arises. <4. Recommendations for Executive Action> We are making the following two recommendations to USAID: The Administrator of USAID should take steps to ensure that, in responding to future public health emergencies of international concern, the agency is able to compile funding information broken down by country. 
(Recommendation 1) The Administrator of USAID should take steps to improve its infectious disease response planning to address the challenge of initiating response activities in countries without bilateral USAID health programs in a timely manner. (Recommendation 2) <5. Agency Comments and Our Evaluation> We provided a draft of this report to USAID, State, and CDC for review and comment. USAID provided written comments, which we have reproduced in appendix III. In its comments, USAID agreed with our findings and recommendations and identified a number of actions it plans to take in response. Specifically, USAID stated that in responding to future public health emergencies of international concern, it plans to compile and report on funding by country. USAID also outlined the steps it plans to take to develop additional guidance for USAID officials in countries without bilateral health programs. State and CDC did not provide formal responses. CDC provided technical comments, which we incorporated throughout the report, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Administrator of USAID, the Secretaries of State and of Health and Human Services, and to other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov If you or your staff have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology The Zika Response and Preparedness Appropriations Act, 2016, included a provision for us to review the status of U.S. Agency for International Development (USAID) and Department of State (State) actions to respond to Zika. In this report, we examine (1) the status of USAID and State funding for the U.S. Zika response overseas, (2) activities supported by these funds, and (3) challenges, if any, to implementing Zika response activities and actions taken to address any challenges. To examine the status of funding for U.S. Zika response overseas, we reviewed USAID and State s reports to the Senate and House Committees on Appropriations mandated by Section 203 of the Zika Response and Preparedness Appropriations Act, 2016. We reviewed agency reporting submitted to Congress and discussed the reports with agency officials. We also reviewed USAID and State s reports to the Senate and House Committees on Appropriations mandated by the Department of State, Foreign Operations, and Related Programs Appropriations Act, 2015. We obtained additional funding and activity information from USAID covering a period beyond that included in the reports to Congress. We reviewed the interagency agreement between USAID and the Centers for Disease Control and Prevention (CDC), outlining the CDC s Zika response activities supported by $78 million in funds USAID obligated to CDC. We also obtained additional funding data from CDC and interviewed CDC officials to discuss the status of the agencies obligations and disbursements for Zika response activities. We analyzed USAID s and State s obligations and disbursements that the agencies reported as supporting the U.S. Zika response overseas, as of September 30, 2018. We analyzed agency obligations and disbursements across agency bureaus, funding accounts, and activities for the Zika response. 
Additionally, we interviewed officials from USAID and State to discuss the agencies' obligations and disbursements for Zika response activities. We then reviewed the funding data and related documentation and consulted with USAID and State officials on the accuracy and completeness of the data. In the small number of instances where we identified potential issues or inconsistencies in the data, we contacted relevant agency officials and obtained information from them necessary to resolve the discrepancies. We assessed USAID's tracking of funding data against federal internal control standards related to using quality information. We also used information from data reliability assessments for two recent GAO reports that relied on funding data from the same USAID and State systems. We determined that the data we used were sufficiently reliable for our purposes of examining USAID's and State's obligations and disbursements of the funds. To examine activities that USAID and State implemented in response to Zika overseas, we conducted fieldwork, analyzed agency documents, and interviewed officials. We examined the status and progress related to Zika response activities. We conducted a teleconference with officials in Haiti and El Salvador and conducted fieldwork in Barbados, Colombia, Dominican Republic, Guatemala, Honduras, Peru, and Trinidad and Tobago. We selected these countries based on the following criteria: (1) geographic diversity to include the Caribbean, Central America, and South America; (2) coverage of the main lines of effort (mosquito control, public awareness, capacity building, and research); and (3) the presence of activities under way that accounted for a significant portion of funding. During our fieldwork, we interviewed agency officials who played a role in Zika response activities, which included officials from State, USAID, and CDC. We also interviewed host government officials, implementing partners, health care workers, community volunteers, and researchers. In addition, we visited offices, toured facilities, and observed operations. We also attended a conference in Guatemala that addressed topics including status, successes, challenges, and lessons learned related to USAID's Zika response. We reviewed agency documents describing the plans and goals of activities. We also analyzed progress reports of six activities to provide illustrative examples of results. We selected activities from among those with the highest amounts of funding (together representing approximately 33 percent of all USAID funding for Zika response) and to reflect a range of countries, lines of effort, and types of implementing partners (such as nongovernmental organizations and international organizations). The sample is not generalizable to all of USAID's Zika response activities. To examine challenges, if any, to implementing Zika response activities and actions taken to address any challenges, we interviewed U.S. government officials, USAID implementing partners, and host government officials, and we analyzed progress reports from selected USAID-funded Zika response activities. We identified key challenges based on the nature of the description and the degree to which a diversity of interviewees and documents made mention of them. We reviewed USAID policy, USAID's infectious disease response plan, federal internal controls, implementing partner progress reports, and interviews with officials to determine what agencies did to address these challenges.
We assessed USAID s infectious disease response plan against relevant federal internal control standards. We conducted this performance audit from December 2017 to May 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate, evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Illustrative Examples of Results for Selected Zika Response Activities To provide illustrative examples of the results of Zika response activities funded by the U.S. Agency for International Development (USAID), we analyzed implementing partners progress reports for a sample of six activities. We selected activities from those with among the highest amounts of funding and that together represented approximately 33 percent of all USAID funding for Zika response and a range of countries, lines of effort, and types of implementing partners (such as nongovernmental organizations and international organizations). Quantitative figures related to individual indicators listed below reflect the targeted population of the activity. Start dates and funding information provided below reflect the date of the report to Congress in which the activity first appeared and the associated funds obligated. The sample is not generalizable to all USAID s Zika response activities. <6. Applying Science to Strengthen and Improve Systems> Table 1 presents the progress on key indicators as of March 2018 reported to USAID by the Applying Science to Strengthen and Improve Systems activity. The aim of the activity was to strengthen Zika-related health services and systems in Latin America and the Caribbean with a focus on pregnant women, newborns, and women of reproductive age. <7. International Federation of Red Cross and Red Crescent Societies Global Health> Table 2 presents the progress on key indicators as of May 2018 reported to USAID by the International Federation of Red Cross and Red Crescent Societies Global Health activity. The activity aimed to reduce risks associated with Zika infection through promoting community involvement, sharing lessons learned, and improving practices. <8. United Nations International Children s Emergency Fund> Table 3 presents the progress on key indicators as of March 2018 reported to USAID by the United Nations International Children s Emergency Fund activity. The activity aimed to promote the adoption of prevention behaviors among at-risk populations through actions targeting multiple levels of their environment: individual, interpersonal, community, institutional, and national policy levels. <9. Save the Children Community Action on Zika> Table 4 presents the progress on key indicators as of September 2017 reported to USAID by the Save the Children Community Action on Zika project. The goal of the project was to reduce Zika transmission and minimize the risk of Zika-related microcephaly and other neurological disorders among the most vulnerable through community-based prevention strategies. <10. Population Services International> Table 5 presents the progress on an illustrative selection of key indicators, by objective, as of March 2018 reported to USAID by the Population Services International activity. 
The purpose of the activity was to improve the capacity and raise awareness of people in countries affected by and at risk of Zika and other vector-borne diseases. <11. Zika AIRS Project (ZAP)> Table 6 presents illustrative examples of accomplishments as of March 2018 reported to USAID by the Zika AIRS Project (ZAP). This was a mosquito control project focused on reducing Zika transmission in Latin America and the Caribbean. Specific activities included entomological monitoring, larviciding, source reduction interventions, and indoor residual spraying. Appendix III: Comments from the U.S. Agency for International Development Appendix IV: GAO Contact and Staff Acknowledgments <12. GAO Contact> <13. Staff Acknowledgments> In addition to the contact named above, Joyee Dasgupta (Assistant Director), Marc Castellano (Analyst-in-Charge), Diana Blumenfeld, Alana Miller, Fatima Sharif, David Dayton, Francisco Enriquez, Christopher Keblitis, Amber Sinclair, and K. Nicole Willems made key contributions to this report. | Why GAO Did This Study
The World Health Organization (WHO) declared the Zika virus a public health emergency of international concern in February 2016. According to WHO, as of March 2017, 79 countries and territories—including 48 in the Western Hemisphere—reported evidence of ongoing Zika transmission. In April 2016, USAID and State repurposed $215 million for Zika from funds appropriated for Ebola. Subsequently, the Zika Response and Preparedness Appropriations Act, 2016, provided over $175 million in supplemental funding to USAID and State to support Zika response efforts overseas. The act also included a provision for GAO to review the status of USAID and State actions to respond to Zika. In March 2019, the Centers for Disease Control and Prevention downgraded its international travel warning for Zika.
This report examines (1) the status of USAID and State funding for U.S. Zika response overseas, (2) activities supported by these funds, and (3) implementation challenges, if any, and responses to any challenges. GAO reviewed information from U.S. agencies and met with U.S. and host country officials in Washington, D.C. GAO also conducted fieldwork in a nongeneralizable sample of countries in Latin America and the Caribbean where agencies implemented key response activities.
What GAO Found
The U.S. Agency for International Development (USAID) and the Department of State (State) obligated $385 million of the total $390 million available for international Zika response and disbursed $264 million as of September 2018. USAID obligated 95 percent of the total funding. USAID and State provided some country information to Congress but did not provide, or take steps to track, funding on a country basis. According to USAID officials, tracking funding information by country would be helpful in the future. The ability to compile funding by country when responding to future infectious disease outbreaks would enable USAID to provide additional information to key decision makers to better support spending oversight and inform budgetary and planning decisions.
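As a back-of-the-envelope check on the totals above, the short calculation below shows the rates implied by the reported figures ($390 million available, $385 million obligated, $264 million disbursed as of September 2018); it is illustrative arithmetic only.

```python
# Illustrative arithmetic based on the totals reported above (millions of dollars).
available, obligated, disbursed = 390, 385, 264

obligation_rate = obligated / available    # share of available funds obligated
disbursement_rate = disbursed / obligated  # share of obligated funds disbursed

print(f"Obligated: {obligation_rate:.0%} of available funds")    # ~99%
print(f"Disbursed: {disbursement_rate:.0%} of obligated funds")  # ~69%
```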
In response to the Zika outbreak, USAID and State supported a broad range of activities overseas, including mosquito control, research efforts, and medical evacuations. In one activity, USAID implementing partners monitored mosquito populations; in another, they researched methods to reduce Zika virus transmission rates. USAID implementing partners reported various outputs from selected activities. For example, an implementing partner reported that its awareness campaign on Zika prevention reached more than 5 million people.
USAID faced sustainability and timeliness challenges in implementing its Zika response. According to agency and other officials, one-time funding and a short time frame posed a challenge related to sustainability of Zika response activities. In response, USAID worked to align activities with those of host governments and other organizations so they could continue in the long term. However, USAID's emergency response planning did not fully address the challenge of timely implementation of response activities in countries without bilateral USAID health programs. Twenty-two of 26 countries with Zika response activities did not have bilateral USAID health programs when the Zika outbreak began. As a result, response activities took additional time to deploy in some countries where USAID first had to establish relationships with key host country officials. Although USAID developed an infectious disease response plan in 2018, the plan does not provide guidance on how to address the timely implementation challenge in countries without bilateral health programs. By improving its planning, such as by adding such guidance in its 2018 plan, USAID would be better positioned to respond quickly to future disease outbreaks.
What GAO Recommends
USAID should (1) take steps to ensure it is able to compile funding information by country for future infectious disease emergency responses and (2) take steps to improve its infectious disease response planning. USAID concurred with GAO's recommendations. |
<1. Background> Immigration and Nationality Act Provisions for Temporary Protected Status The INA provides for the Secretary of Homeland Security, after consultation with other agencies, to designate a foreign country for TPS if the conditions in that country fall into one or more of three statutory categories. These categories are generally described as consisting of (1) ongoing armed conflict, (2) environmental disaster, and (3) extraordinary and temporary conditions. The Secretary may designate a country for a period of at least 6 months but no more than 18 months. At least 60 days before the end of the designation period, the Secretary is required, after consulting with other appropriate agencies, to undertake a review of the conditions in the foreign country for which a designation is in effect and to determine whether the conditions for such designation continue to be met. The Secretary must subsequently take one of the following actions: Extend the country's TPS designation for a period of 6, 12, or 18 months, if the Secretary determines that country conditions warrant an extension of TPS. This provides TPS beneficiaries with an extended period of protection from removal. Terminate the country's TPS designation, if the Secretary determines that the country no longer meets the statutory criteria. This results in an expiration of the period of protection for foreign nationals who were granted TPS under a country's designation. In addition, the Secretary may exercise his or her discretion, on the basis of this review, to redesignate the country for TPS. With a redesignation, the Secretary allows eligible nationals from the designated foreign country who have arrived in the United States since the initial designation, or another date established by the Secretary, to apply for TPS. TPS provides temporary humanitarian protection to eligible foreign nationals in the United States who, for various reasons, may not otherwise have lawful status and therefore, in the absence of TPS, would be subject to enforcement and removal under the INA. Foreign nationals may be present in the United States without valid status and potentially removable for various reasons, such as having entered without inspection and admission at a port of entry or having remained in the country beyond the expiration of a previous temporary status (e.g., tourist, foreign student). Eligible foreign nationals may also seek TPS when they currently have another lawful status, according to USCIS officials. USCIS officials noted that this may occur, for example, when a foreign national has a temporary nonimmigrant status nearing its end date when TPS is designated for his or her country and applies for TPS before the existing status expires. Under the INA, applicants for TPS must apply during the registration period established by the Secretary of Homeland Security for a particular country designation. To be eligible for TPS, an applicant from a designated country must have been physically present in the United States continuously since the most recent designation's effective date and must have resided in the United States continuously since the date established by the Secretary of Homeland Security.
The INA also specifies that an individual is ineligible for TPS if he or she has been convicted of any felony or of two or more misdemeanors committed in the United States; if any of the statutory bars to asylum apply, such as involvement in persecution of others; or if he or she is reasonably regarded as a danger to the security of the United States, among other bases. In addition to protecting beneficiaries from removal, TPS authorizes them to work in the United States for the designation period. To receive evidence of work authorization, TPS beneficiaries generally apply to USCIS for an employment authorization document, Form I-766. USCIS provides this document as a plastic card that shows proof of the individual's authorization to work in the United States and includes a photograph of the individual. Although USCIS does not require beneficiaries to apply for an employment authorization document, according to USCIS officials, beneficiaries typically apply to obtain these cards as evidence of their authorization to work in the United States. Figure 1 shows an example of an employment authorization document issued by USCIS. <2. Key DHS and State Components That May Be Involved in TPS Reviews> Several key DHS and State components may be involved in the TPS decision process, as table 1 shows. Additionally, other DHS offices and components, as well as agencies such as the Department of Defense or U.S. Agency for International Development, may provide information about country conditions to help inform the Secretary of Homeland Security's decisions. Foreign Nationals from 22 Countries Have Received TPS, Totaling About 430,000 Beneficiaries in Fiscal Years 2000-2018 TPS Has Been Granted to Foreign Nationals from 22 Countries since It Was Established Since TPS was established in 1990, foreign nationals in the United States from 22 countries have been granted TPS. Our review of Federal Register notices published in fiscal years 1990 through 2019 found varying bases for the 22 countries' TPS designations. We also found that designations for 20 of these countries were subsequently extended or the countries were redesignated one or more times. Somalia, first designated for TPS in September 1991, had the longest overall designation period since TPS was established. As of the end of fiscal year 2019, Somalia's designation had been extended 21 times and the country had been redesignated twice; its most recent extension was set to expire in March 2020. Designations for only two countries were terminated without any extensions or redesignations: Kuwait, designated in 1991, and Guinea-Bissau, designated in 1999. Figure 2 shows all effective dates of TPS designations and subsequent decisions, including extensions, terminations, and redesignations, as well as the bases for the designations for each of the 22 countries in fiscal years 1990 through 2019. As figure 2 shows, 26 TPS designations occurred in fiscal years 1990 through 2019, and 22 designations were extended at least once. As of September 30, 2019, the designations for all but four countries had been terminated, and the termination of six countries' designations since fiscal year 2018 had been temporarily halted because of ongoing litigation. Redesignations occurred 20 times. Designations. Of the 26 TPS designations, three were for one country, Liberia, and four were for two countries, El Salvador and Sierra Leone, that were each designated twice. Extensions. The majority of TPS designations (17 of 26 designations) were extended up to eight times.
Designations for five countries (El Salvador, Honduras, Nicaragua, Somalia, and Sudan) were extended more than 10 times each. Three of the 22 countries' designations were not extended before termination. Terminations. The TPS designations for all countries except Somalia, South Sudan, Syria, and Yemen had been terminated as of September 30, 2019. The termination of six countries' designations since fiscal year 2018 had been temporarily halted because of ongoing litigation. Several lawsuits had been filed regarding the Secretary of Homeland Security's decisions to terminate TPS for El Salvador, Haiti, Honduras, Nepal, Nicaragua, and Sudan. In October 2018, a U.S. district judge in California issued a preliminary injunction for one of the lawsuits, temporarily blocking DHS from enforcing the Secretary's TPS termination decisions for El Salvador, Haiti, Nicaragua, and Sudan. The U.S. government filed an appeal in response to the preliminary injunction. According to USCIS officials, DHS has regularly published notices of its continued compliance with the court's injunction and has stated that it will continue to publish such notices pending resolution of the case. In April 2019, a district court judge in New York issued a second preliminary injunction covering Haiti, which the U.S. government appealed in June 2019. Additionally, under an agreement to stay the proceedings in response to a lawsuit filed in California in February 2019, the government stipulated that it would temporarily halt terminations for Honduras and Nepal until the appeal of the October 2018 injunction had been resolved. Redesignations. Of the 20 TPS redesignations, six were for countries that were redesignated once, two were for one country that was redesignated twice, and twelve were for four countries that were each redesignated three times, the largest number of TPS redesignations. <3. About 430,000 Eligible Foreign Nationals Received TPS in Fiscal Years 2000-2018> USCIS data show that applications for TPS were approved for a total of 431,848 foreign nationals in fiscal years 2000 through 2018 and that the number of TPS beneficiaries each year grew from about 70,000 in fiscal year 2000 to about 420,000 in fiscal year 2018. The number of TPS beneficiaries increased most rapidly in fiscal years 2000 through 2005, particularly after the designations of Honduras in 1999 and El Salvador in 2001. According to USCIS officials, because adjudicating all TPS applications can take years, depending on the number of applicants from a country, the number of TPS beneficiaries for a designated country may continue rising after the established registration period for the specific designation. For example, although Honduras was initially designated for TPS in 1999, with an applicant registration period that ended on July 5, 1999, USCIS data show that the number of beneficiaries from Honduras who were granted TPS peaked in 2007 at 85,759 foreign nationals. See appendix II for additional information on the numbers of TPS beneficiaries in fiscal years 2000 through 2018, by country. Data on the number of TPS beneficiaries for fiscal year 2018 (the most recent available) show that the majority of TPS beneficiaries were from three countries (El Salvador, Honduras, and Haiti), as figure 3 shows.
About 98 percent of beneficiaries from six countries (Sudan, Honduras, Nicaragua, El Salvador, Haiti, and Nepal) in fiscal year 2018 (408,773 foreign nationals) held TPS because the termination of their country's TPS designation was temporarily halted because of ongoing litigation. In addition, about 2 percent of beneficiaries from four countries (Somalia, South Sudan, Syria, and Yemen) in fiscal year 2018 (9,019 foreign nationals) held TPS because their country's designation was extended. See appendix II for additional information about beneficiary characteristics in fiscal year 2018, including age, gender, and location. DHS's Approach to Inform the Secretary's TPS Reviews Includes Three Primary Steps Our review of documentation for selected TPS decisions in fiscal years 2014 through 2018 and our interviews with DHS, USCIS, and State officials indicated that DHS's approach for initial or subsequent reviews of countries for TPS consists of three primary steps: 1. The Secretary of Homeland Security initiates a review of a country for TPS. For an initial TPS designation, the Secretary may initiate consideration of a country in response to various triggering factors. Such factors may include, for example, a request from a U.S. government entity or a foreign government for a TPS designation based on the statutory conditions for TPS (i.e., armed conflict, environmental disaster, or extraordinary and temporary conditions). For an existing designation approaching its end date, a statutory deadline requires the Secretary to undertake a review. 2. DHS collects information on country conditions and recommendations from USCIS and State and provides this information to the Secretary of Homeland Security to inform his or her decision regarding an initial or existing TPS designation. Other DHS components and non-DHS entities, including other agencies and nongovernmental organizations, may also provide information to the Secretary or USCIS. 3. The Secretary of Homeland Security receives the information and recommendations and makes a decision about TPS for the country. The Secretary exercises discretion in determining whether to initially designate a country for TPS. For an existing designation, under the INA, the Secretary is required to determine whether country conditions warrant an extension of TPS or whether the country no longer meets the statutory criteria and TPS must be terminated. Also, the Secretary exercises discretion in determining whether to redesignate a country that was previously designated for TPS. Figure 4 illustrates these three steps.
USCIS officials added that, under the INA, the Secretary of Homeland Security has the sole authority to determine whether and when to consider a country for an initial TPS designation. Further, they noted that a request does not automatically result in a formal review of a country for TPS even if the country has experienced country conditions specified in one or more of the statutory categories, such as an armed conflict or environmental disaster. For subsequent reviews of existing TPS designations, at least 60 days before the end of the designation period, the Secretary is required, after consulting with other appropriate agencies, to undertake a review of the conditions in the foreign country for which a designation is in effect. <5. DHS Collects Country Conditions Reports and Recommendations to Inform the Secretary s TPS Decision> DHS collects similar information for each review of a country for TPS, according to DHS officials and our review of selected decisions. DHS officials identified four primary sources of information that the department collects to inform the Secretary of Homeland Security s TPS decisions: country conditions reports compiled by USCIS and State and recommendations from USCIS and State leadership. According to DHS and State officials, DHS generally consults with State on TPS decisions, although it is not specifically required to do so under the statute. Our review of 26 TPS decisions for the eight selected countries found that DHS collected the following documents to inform each decision: 1. a country conditions report compiled by USCIS, 2. a memo with a recommendation from the USCIS Director to the Secretary of Homeland Security, 3. a country conditions report compiled by State, and 4. a letter with a recommendation from the Secretary of State to the Secretary of Homeland Security. USCIS manages and coordinates the TPS information-gathering process for the Secretary of Homeland Security. While State formally provides its input through the Secretary of State s letter and recommendation to the Secretary of Homeland Security, USCIS officials said that USCIS generally incorporates the input from State into USCIS s country conditions report and recommendation on TPS. DHS officials noted that other internal DHS components, government agencies, and other entities may also provide information about country conditions or other factors to inform the Secretary of Homeland Security s decisions. Figure 6 shows the information collected to support the Secretary of Homeland Security s TPS reviews. USCIS officials indicated that the time frames for conducting TPS reviews may vary. They noted that a review for an initial designation may have a shorter time frame than a review for an existing designation, depending on the situation. In addition, the officials noted that USCIS generally starts the review process for an existing TPS designation about 6 months to a year before the end date of the country s current designation. They added that they generally start the review process within this timeframe, given the INA requirement that the Secretary of Homeland Security either undertake a review and make a determination regarding country conditions at least 60 days in advance of the prior designation s end date or automatically extend the designation for 6 months. 
According to USCIS officials, at the start of a review for an initial or existing designation, USCIS s Office of Policy & Strategy generally reaches out to USCIS s Refugee, Asylum and International Operations Directorate (RAIO) to request input on country conditions. USCIS officials also said that the office coordinates with State s Bureau of Population, Refugees, and Migration regarding the target time frame for receiving State s input. In general, once USCIS receives the input from RAIO and State, USCIS finalizes its country conditions report and recommendation memo for the Secretary of Homeland Security. Our review of documentation for the eight countries in our nongeneralizable sample of 26 TPS decisions found variation in the time frames for USCIS s recommendation memos and for State s recommendation letters. For the 24 reviews of existing TPS designations, USCIS provided recommendation memos to the Secretary of Homeland Security about 2 to 7 months before the end date of the prior designations. Most of State s 26 recommendation letters were dated about 2 days to 6 months before the USCIS recommendation memos. RAIO officials noted that they use an internal template as informal guidance for the draft country conditions reports that they compile for USCIS s Office of Policy & Strategy for reviews for initial or existing TPS designations. We reviewed the RAIO template and found, for example, that for reporting on a country being considered for a TPS designation on the basis of an environmental disaster, the template includes sections (e.g., several paragraphs) about the population harmed, damage to infrastructure, disruption in services, and status of disaster response and reconstruction. Officials added that country conditions reports may deviate from the template, because its use is not required; instead, it serves as general, informal guidance. RAIO officials also noted that information in the country conditions reports they compile is generally based on publicly available information or data related to country conditions. According to the officials, sources for such information may include U.S. agencies, foreign governments, international organizations, nongovernmental organizations, and news articles. According to State officials, after State initiates its internal process for compiling information for the Secretary of Homeland Security s TPS review, the Bureau of Population, Refugees, and Migration generally requests input internally from the relevant regional bureau and post before compiling information for the Secretary of State s consideration. See the text box for more details of State s internal process for developing country conditions reports and recommendation letters to inform the Secretary of Homeland Security s TPS reviews. State Department s Internal Process for Compiling Information for the Secretary of Homeland Security s Temporary Protected Status Reviews The Department of State s (State) internal process for developing input for the Secretary of Homeland Security s Temporary Protected Status (TPS) reviews generally includes compiling information on country conditions as well as proposed recommendations from the relevant regional bureau and overseas post, according to documentation for selected TPS decisions in fiscal years 2014 through 2018 and our interviews with DHS, USCIS, and State officials. 
State s Bureau of Population, Refugees and Migration (PRM) facilitates and coordinates State s internal process for developing this input, according to informal guidance, which State officials said the bureau has used at the working level since 2012, as well as our interviews with State officials. After DHS initiates a TPS review, PRM generally directs the relevant regional bureau to reach out to overseas posts for information about country conditions, according to State officials. State officials noted that in some cases, the regional bureau s country desk officer takes the lead in drafting the country conditions report, depending on the country context. Officials stated that the regional bureau generally uses a questionnaire on country conditions to request information from the post for a TPS review and that the post generally also provides a recommendation, in addition to the questionnaire responses, via cable or email to the regional bureau. For example, for a country that had an existing TPS designation based on ongoing armed conflict in the country, a country conditions cable provided, among other things, information about the status of the armed conflict, an assessment of whether the return of foreign nationals would pose a serious threat to their personal safety and whether the country was unable to handle the return of nationals, and information about the impact of the conflict on economic and humanitarian conditions. State and U.S. Agency for International Development (USAID) officials noted that other agencies represented at the overseas posts, such as USAID, may provide information for a post s input on country conditions, including information gathered on the ground as well as from publicly available sources. Once the regional bureau receives any input from post, the bureau desk officer prepares a draft country conditions report and recommendation, and the regional bureau works with PRM to compile a joint action memo. PRM generally provides the joint action memo, which includes a country conditions report, to the Secretary of State, according to State officials. The memo may include a joint recommendation or varying recommendations (e.g., from PRM and the regional bureau) for the Secretary s consideration. After the Secretary determines what the department will recommend, State provides a final country conditions report and recommendation letter to the Secretary of Homeland Security as well as to U.S. Citizenship and Immigration Services Office of Policy & Strategy. We found that the USCIS and State country conditions reports and recommendation memos or letters that DHS and State provided for our nongeneralizable sample of 26 TPS decisions included information such as background on the cause (or reason for consideration) of the initial TPS designation and a summary of the country s recovery from, or the status of, the situation to date. In addition, documentation provided to us for some of the TPS decisions included other information, such as certain economic indicators or broader country context. Specifically: Cause and recovery or status. USCIS and State documentation for each of the 26 TPS decisions in our review generally included (1) information related to the cause (or reason for consideration) of the initial TPS designation and (2) a summary of the country s recovery from, or the status of, the situation to date. 
For example, documentation for a country designated on the basis of armed conflict described the status of the conflict and ceasefire agreements; provided information about violence against civilians and recruitment of child soldiers; provided an update on civilian casualties since the prior review; and described humanitarian challenges stemming from the conflict, such as the risk of famine. For a country designated on the basis of environmental disaster, documentation described the status of investments in recovery and efforts to rebuild after the disaster, including the number of houses and schools that had been rebuilt or repaired. This documentation also included assessments of disruption in living conditions and the extent to which economic activity and basic services had been restored. Economic indicators. USCIS documentation for 16 TPS decisions and State documentation for 12 TPS decisions in our review included information about economic indicators. Examples of such information included an estimate of damages from an environmental disaster as a percentage of a country s gross domestic product, a summary of growth in a country s gross domestic product in recent years, and data on the increase in food prices as a result of armed conflict in a country. Broader country context. USCIS documentation for 23 TPS decisions in our review and State documentation for 20 TPS decisions provided information about broader country context. For example, documentation for a country designated on the basis of armed conflict included broader context regarding topics such as recent natural disasters and the country s geography. As another example, documentation for a country designated on the basis of environmental disaster provided information about subsequent natural disasters as well as violence, criminal activity, and corruption in the country. In addition to USCIS and State, other DHS offices and components and non-DHS entities may provide information to inform the Secretary s decision. DHS officials noted that such information varies, may be solicited or unsolicited, and may be provided directly to the Secretary of Homeland Security or to USCIS. We reviewed examples of such information for several of the TPS decisions in our nongeneralizable sample. This information included items such as immigration data or intelligence analyses from other DHS offices and components for example, the Office of Immigration Statistics, U.S. Customs and Border Protection, and U.S. Immigration and Customs Enforcement; updates from the Department of Defense on the security situation in a technical input from the Centers for Disease Control and Prevention regarding the status of an epidemic; and input from other entities, including letters from members of Congress, foreign government officials, and nongovernmental organizations. In addition, DHS officials stated that the Secretary of Homeland Security may hold briefings or meetings on TPS reviews both internally and with external entities, such as White House officials, foreign government officials, and nongovernmental organizations or advocacy groups. According to DHS officials, after USCIS and State compile their country conditions reports and recommendations for the Secretary of Homeland Security s consideration, other DHS components including the Office of Strategy, Policy, and Plans; the Office of the General Counsel; and the Management Directorate review the documents as part of the standard departmental clearance process before providing them to the Secretary. 
Officials from these DHS components noted that the purpose of their review is generally to provide relevant technical comments and ensure that complete information has been gathered for the Secretary s review. <6. Secretary of Homeland Security Makes a TPS Decision> According to USCIS officials, after receiving the information and recommendations from USCIS and State, as well as information from any other sources, the Secretary of Homeland Security makes a decision regarding a country s initial or existing TPS designation. USCIS officials indicated that the Secretary s decisions may not always follow the recommendations of the USCIS Director or the Secretary of State. For example, among the 26 TPS decisions from 2014 through 2018 that we reviewed, the Secretary of Homeland Security s decision was the same as State s recommendation in 21 cases and differed from State s recommendation in five cases. Initial designation. USCIS officials stated that if the Secretary of Homeland Security determines a country meets the statutory criteria for designation, the Secretary may then exercise discretion in deciding whether to initially designate the country for TPS. Existing designation. According to USCIS officials, the Secretary of Homeland Security exercises discretion in determining whether the conditions in a country satisfy statutory conditions for retaining an existing designation. However, the officials indicated that if the Secretary determines that the conditions for TPS designation continue to be met, the Secretary is required under the INA to extend the designation. Additionally, USCIS officials stated that if the Secretary determines a country no longer meets conditions for TPS designation, the Secretary is required under the INA to terminate the designation. Finally, USCIS officials stated that the Secretary may exercise discretion in deciding to redesignate a country with an existing designation and that factors such as a significant deterioration in country conditions may weigh in favor of a redesignation. Once the Secretary of Homeland Security decides whether to designate a country or to extend or terminate TPS, the decision may be documented through a signed memorandum or communicated orally to USCIS, according to USCIS officials. DHS provided memorandums or notices documenting the Secretary s TPS decisions for all 26 decisions in our nongeneralizable sample. After the Secretary makes a TPS decision, DHS typically communicates the decision to State before announcing it to the general public. Either DHS or State then communicates the decision to the foreign embassy in Washington, D.C., and State may communicate it to the foreign government overseas. Finally, under INA provisions related to TPS, the Secretary s decision is published in the Federal Register (see fig. 7). DHS Has Communicated TPS Decisions through Required Federal Register Notices but Provided Inconsistent Guidance on Employment Authorizations DHS Has Communicated TPS Decisions to the Public through Required Federal Register Notices and Other Mechanisms Since 1990, all TPS decisions have been communicated to the public through statutorily required notices in the Federal Register. DHS has also used other mechanisms, including press releases and its website, to help disseminate TPS-related information to the public. We found that a Federal Register notice was published for all TPS decisions, as required under the INA, from November 1990 to September 2019. 
In addition, DHS frequently used Federal Register notices as a mechanism for communicating other related information, such as effective dates for TPS designation periods, applicant registration periods, TPS beneficiary eligibility requirements, and information about employment authorization for beneficiaries. For example, the Federal Register notice extending the TPS designation of El Salvador, published on July 8, 2016, included the following: summary information about the extension, such as the period of extension and the start and end date of the extension; procedures and eligibility information for beneficiaries to register or reregister for TPS and to apply for renewal of employment authorization documents, including required forms and fees to register or reregister; directions for obtaining additional information and help with questions by accessing the USCIS website or by contacting an identified USCIS official or a USCIS customer contact center; and general information about TPS as well as information about El Salvador s initial TPS designation and about the Secretary s authority and reason for extending TPS for El Salvador. For a Federal Register notice of a TPS decision, according to USCIS officials, USCIS generally takes about 2 weeks to draft the notice. DHS then completes an internal review before submitting the notice to the Office of Management and Budget (OMB) for interagency review, according to officials. OMB s Office of Information and Regulatory Affairs coordinates the notice review process, including gathering comments or proposed revisions from relevant executive branch agencies. For example, we reviewed examples of technical comments from the Centers for Disease Control and Prevention regarding draft notices of TPS decisions for the Ebola-affected countries that included information and data on the status of the epidemic and an assessment of health care infrastructure. According to USCIS officials, OMB comments are returned to DHS without identifying the agency that made each comment, and additional interagency review and comment may occur before DHS publishes the notice in the Federal Register. USCIS officials also noted that, under regulation, OMB can take up to 90 days to complete the interagency review, although the officials added that OMB aims to complete the process in a timely manner for TPS notices and generally takes about a month. According to USCIS officials, to help raise awareness of TPS decisions, USCIS has generally also issued press releases announcing all TPS decisions and published them on its website in addition to publishing Federal Register notices. Table 2 summarizes information from DHS s publication of a press release and Federal Register notice for a 2016 TPS decision. USCIS has also taken other steps to communicate TPS decisions and related information to the public. USCIS has updated its TPS country- specific webpages with alerts about the latest TPS decisions and registration periods, among other information. Further, according to USCIS officials, the Office of Public Affairs hosted periodic national TPS teleconferences for stakeholders and conducted outreach meetings to respond to questions and discuss TPS information in communities where there might be a large number of TPS beneficiaries. 
For example, a teleconference invitation from USCIS to stakeholders to discuss the extension of Haiti's TPS designation in May 2017 indicated that USCIS officials would share information about the TPS reregistration period and procedures for eligible Haitian nationals and would respond to stakeholder questions. Officials from USCIS's Office of Public Affairs also stated that the office has drafted guidance for communicating most TPS decisions. We reviewed examples of the guidance, which included planned time lines for publishing the press releases and information to USCIS's website as well as for conducting outreach to Congress, stakeholder groups, and TPS beneficiaries. <7. DHS Published Most Federal Register Notices of Decisions on Existing TPS Designations before Previous Designations' End Date> USCIS officials noted that once the Secretary of Homeland Security makes a TPS decision, time frames for publishing the Federal Register notice may vary. USCIS officials stated that, in an effort to ensure public awareness of the decisions as soon as possible, USCIS has in some cases published a press release before the Federal Register notice of a decision was finalized and published. In reviewing TPS decisions for existing designations (i.e., extensions, terminations, and redesignations) in fiscal years 1990 through 2019, we found the following: About two-thirds of Federal Register notices announcing TPS decisions for these existing designations were published at least 30 days before the end date of the previous designation period (100 of 158 total notices). In fiscal years 1990 through 2005, 21 Federal Register notices announcing TPS decisions for existing designations were published after the end of the previous designation period. In fiscal years 2006 through 2019, all 71 Federal Register notices announcing TPS decisions for existing designations were published 4 to 159 days before the end date of the previous designation period. See figure 8 for more details. <8. USCIS Published Guidance Has Not Consistently Identified All Mechanisms Used to Communicate Automatic Extensions of TPS Employment Authorization Documents> Since 1990, two mechanisms (Federal Register notices and individually mailed notifications, which TPS beneficiaries may use as evidence of their eligibility for employment) have been used to communicate automatic extensions of employment authorization documents. However, USCIS's published guidance has not consistently identified each of these as official mechanisms to verify eligibility, resulting in confusion among employers about TPS beneficiaries' employment eligibility. The INA states that DHS shall provide TPS beneficiaries with an "employment authorized" endorsement or other appropriate work permit but does not specify the mechanisms that DHS should use to communicate TPS employment authorization. To receive documentation of work authorization, TPS beneficiaries generally apply for an employment authorization document after an initial TPS designation and also after any subsequent extensions or redesignations of TPS. See the text box for a description of the process that TPS beneficiaries and employers must follow to verify beneficiaries' employment eligibility. According to USCIS officials, USCIS aims to adjudicate both initial employment authorization applications and renewal applications within 90 days after receiving an application.
When it is unable to process the adjudications in this time frame, USCIS issues automatic extensions of expiring employment authorization documents for TPS beneficiaries from a specific country, to allow time for USCIS to process the volume of applications associated with a TPS reregistration period. In some instances, USCIS may issue additional automatic extensions of employment authorization documents for specific countries if it has been unable to process all pending applications within the initial automatic extension period, according to USCIS officials. When employment authorization documents are automatically extended for eligible TPS beneficiaries, the documents may appear to have expired even though they remain valid. According to USCIS officials, DHS has used the Federal Register notices announcing TPS decisions to communicate most automatic extensions of TPS employment authorization documents. For example, on January 17, 2017, DHS published a Federal Register notice extending the TPS designation of Somalia for 18 months and, in the same notice, automatically extended for 6 months the validity of employment authorization documents issued under Somalia s TPS designation. DHS has also communicated automatic extensions of TPS employment authorization documents through Federal Register notices independent of a TPS decision. Generally, Federal Register notices announcing automatic extensions of TPS employment authorization documents include instructions for employers for completing the Form I-9, among other things. Additionally, some notices state that, to reduce employer confusion regarding automatic extensions of TPS employment authorization documents, beneficiaries should explain the extension to their employer and may also provide their employer with a copy of the relevant Federal Register notice. In five cases, beginning in fiscal year 2018, USCIS mailed notifications of automatic extensions of employment authorization documents to thousands of TPS beneficiaries from Haiti, El Salvador, Syria, and Honduras as an alternative or a supplement to posting the information in Federal Register notices. USCIS officials told us that in these cases, they mailed individual notifications of the automatic extensions to ensure that the beneficiaries would not experience any gaps in employment authorization. According to the officials, they began this practice because of the large number of affected beneficiaries. Our examination of USCIS documents found that in four of these five cases, USCIS mailed individual notifications to the TPS beneficiaries without also posting a Federal Register notice communicating the automatic extension. In all five cases, USCIS published guidance on its website to inform TPS beneficiaries and employers about the use of individually mailed notifications to communicate employment authorization document extensions. USCIS s website states that TPS beneficiaries may present the Federal Register notice or individually mailed notification to their employer along with their expired employment authorization documents to show proof of continued employment authorization. The individual notifications also state that beneficiaries may show the notifications, along with the expired employment authorization document, to any U.S. employer as proof of continued employment authorization. However, a USCIS handbook for employers and related guidance do not specifically identify the individually mailed notifications as an official means of communicating these extensions. 
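To make the extension mechanics above concrete, the following sketch shows, under assumed dates, how an EAD whose printed expiration date has passed can still be treated as unexpired through the end of an automatic extension, with the extended date being what the employee would enter on Form I-9. The function and dates are hypothetical illustrations, not USCIS guidance or an actual verification tool.

```python
# Illustrative sketch (not USCIS guidance): an automatically extended TPS EAD
# remains acceptable through the extension end date announced in a Federal
# Register notice or individually mailed notification, even if the card's
# printed expiration date has passed.
from datetime import date

def ead_valid_through(card_expires, auto_extension_end=None):
    """Return the date through which the EAD may be treated as unexpired."""
    if auto_extension_end is not None and auto_extension_end > card_expires:
        return auto_extension_end  # this later date is what goes on Form I-9
    return card_expires

# Hypothetical example: the card shows an expired date, but a notice extends it.
card_expires = date(2018, 1, 5)
auto_extension_end = date(2018, 7, 4)   # assumed end date taken from a notice
today = date(2018, 3, 1)

valid_through = ead_valid_through(card_expires, auto_extension_end)
print("Acceptable today:", today <= valid_through)   # True
print("Date to record on Form I-9:", valid_through)  # 2018-07-04
```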
USCIS s Handbook for Employers: Guidance for Completing Form I-9 (M-274) provides guidance for employers on how to properly complete Form I-9, which helps employers verify that individuals are authorized to work in the United States. The handbook contains a section about automatic employment authorization document extensions for TPS beneficiaries that references USCIS s use of Federal Register notices to inform the public of these extensions. However, the handbook for employers does not mention USCIS s use of individually mailed notifications to communicate the automatic extensions. USCIS s Instructions for Form I-9, Employment Eligibility Verification notes that certain employees, including TPS beneficiaries, may present an expired employment authorization document, which may be considered unexpired if the document has been extended by USCIS. The guidance also notes that employees should enter the expiration date of an automatic extension on Form I-9. However, the instructions for Form I-9 do not detail USCIS s mechanisms for communicating these extensions, including its use of individually mailed notifications. Some employers have reportedly refused to accept expired employment authorization documents as proof of work authorization when the documents had been automatically extended. For example, the Department of Justice s Civil Rights Division telephone interventions website indicates that on approximately 50 occasions from September 2017 through May 2019, the Immigrant and Employee Rights Section intervened to deter employers or medical licensing boards from rejecting valid work authorization documents and, in some cases, from terminating employment for TPS beneficiaries whose employment authorization documents had been automatically extended. Also, a letter to USCIS signed by 70 law professors and scholars states that some legal service providers have reported instances of employers terminating TPS beneficiaries employment because the employer did not understand or accept the individually mailed notifications. Further, USCIS has received feedback from certain stakeholders concerned that beneficiaries might not be receiving the individual notifications in time to avoid any potential gaps in work authorization, according to USCIS officials. USCIS officials told us that the Federal Register process may be beneficial for communicating employment authorization in some cases but that they may also continue to use the individually mailed notifications as a mechanism to communicate future extensions, depending on the circumstances. USCIS has acknowledged the potential benefits of updating external guidance regarding automatic extensions of TPS employment authorization documents. However, as of December 2019, USCIS had not taken action to do so. Replying to a letter of concern from an advocacy group, USCIS stated that it could consider updating the handbook for employers to add additional guidance regarding individually mailed notifications. Effective information and communication are vital for an entity to achieve its objectives. According to Standards for Internal Control in the Federal Government, management should document policies in the appropriate level of detail and externally communicate the necessary quality information to achieve an entity s objectives. 
Updating external guidance, such as the employer handbook, to clearly identify each of the official mechanisms that USCIS may use to communicate automatic extensions of TPS employment authorization documents could help USCIS ensure that employers understand and accept each of its official mechanisms for communicating these automatic extensions. This, in turn, would help to reduce the risk of employers terminating beneficiaries from their jobs as a result of confusion caused by unclear or inconsistent guidance. Conclusions The Secretary of Homeland Security has granted TPS, providing work authorization and protection from removal, to foreign nationals from 22 countries since TPS was established in 1990. DHS has generally communicated information about employment authorization for TPS beneficiaries in a Federal Register notice, although in some cases USCIS used individually mailed notifications to communicate automatic extensions of employment authorization documents. However, USCIS s published guidance has not consistently identified individually mailed notifications as a mechanism that may be used, leading to confusion about beneficiaries employment eligibility and reportedly resulting in termination of some beneficiaries employment. Consistent published guidance that clearly identifies each of the mechanisms used to communicate automatic extensions of TPS employment authorization documents could help USCIS ensure that employers understand and accept the evidence USCIS provides for employment authorization, reducing the risk of erroneous termination of beneficiaries employment. Recommendation for Executive Action The Director of USCIS should update published guidance, such as Handbook for Employers: Guidance for Completing Form I-9 (M-274), to consistently identify each of the official mechanisms that USCIS may use to communicate automatic extensions of TPS employment authorization documents. (Recommendation 1) Agency Comments We provided a draft of this report to DHS, State, the Department of Defense, the Department of Health and Human Services, and the U.S. Agency for International Development for review and comment. In its written comments, reproduced in appendix III, DHS agreed with our recommendation and noted planned actions to implement it, including updating guidance in DHS s M-274 handbook. DHS s planned actions will address the intent of our recommendation if they include updating guidance regarding each of the official mechanisms that USCIS may use to communicate automatic extensions of TPS employment authorization documents, including the use of individually mailed notifications. The U.S. Agency for International Development also provided written comments, which are reproduced in appendix IV. In addition, DHS and State provided technical comments that we incorporated as appropriate. The Department of Defense and the Department of Health and Human Services did not provide comments. We are sending copies of this report to the appropriate congressional committees, and the Acting Secretary of Homeland Security and Secretary of State, as well as the Secretary of Defense, the Secretary of Health and Human Services, the Director of the Centers for Disease Control and Prevention, and the Administrator of the U.S. Agency for International Development. If you or your staff have any questions about this report, please contact Chelsa Gurkin at (202) 512-2964 or GurkinC@gao.gov, or Rebecca Gambler at (202) 512-6912 or GamblerR@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this reports are listed in appendix V. Appendix I: Objectives, Scope, and Methodology Our objectives were to (1) describe Temporary Protected Status (TPS) determinations and numbers of beneficiaries since TPS was established in 1990; (2) describe the approach that the Department of Homeland Security (DHS), in consultation with the Department of State (State) and other relevant agencies, takes to inform the Secretary of Homeland Security s TPS reviews; and (3) examine DHS s public communication regarding TPS decisions and related information, including work authorization. To describe TPS determinations since TPS was established in 1990, we reviewed information and data in Federal Register notices for all TPS designations in fiscal years 1990 through 2019. Specifically, we reviewed the designation time frames and bases (i.e., ongoing armed conflict, environmental disaster, or extraordinary and temporary conditions) for each designation since TPS was established. We also analyzed U.S. Citizenship and Immigration Services (USCIS) data on numbers of TPS beneficiaries for fiscal years 1990 through 2018. In addition, we analyzed USCIS data on TPS beneficiaries characteristics, such as numbers, location, age, and gender of foreign nationals granted TPS, for fiscal year 2018. To assess the reliability of USCIS data on TPS beneficiaries, we reviewed documentation and interviewed USCIS officials to identify and rectify any missing or erroneous data. According to USCIS officials, USCIS removes from its data on TPS beneficiaries any who become U.S. citizens or whose status is withdrawn, either because they no longer meet eligibility requirements or because they requested that USCIS withdraw their status. However, according to officials, the data may include foreign nationals who have since died, moved out of the country, or have an additional immigration status. Additionally, because the data comprise information provided by TPS applicants, the data may include a small number of applicant errors, according to officials. We determined that the data for fiscal years 2000 through 2018 were sufficiently reliable to provide general information about the size and characteristics of TPS beneficiaries. USCIS was not able to provide reliable data on numbers of TPS beneficiaries before fiscal year 2000 because, according to USCIS officials, these data were not consistently entered electronically in USCIS information systems. To describe the approach that DHS, in consultation with State and other relevant agencies, takes to inform the Secretary of Homeland Security s TPS reviews, we reviewed provisions in the Immigration and Nationality Act (INA) related to TPS as well as DHS and State documentation, such as informal guidance documents used since fiscal year 2014 or earlier regarding steps taken for a TPS review. We also conducted interviews with DHS and State officials related to the processes they have used to collect information for TPS reviews since fiscal year 2014. Specifically, we interviewed DHS officials from U.S. Customs and Border Protection; the U.S. Coast Guard; U.S. 
Immigration and Customs Enforcement; the Management Directorate; the Office of the Executive Secretary; the Office of Intelligence and Analysis; the Office of Legislative Affairs; the Office of Partnership and Engagement; the Office of Public Affairs; the Office of Strategy, Policy, and Plans, including the Office of Immigration Statistics; and USCIS in particular, USCIS s Office of Policy and Strategy and USCIS s Refugees, Asylum, and International Operations Directorate. We interviewed State officials from the Bureau of Population, Refugees, and Migration and several regional bureaus, including desk officers from the Bureaus of African Affairs, Near Eastern Affairs, South and Central Asian Affairs, and Western Hemisphere Affairs. We also interviewed State officials from overseas posts for countries that we selected for our review, including El Salvador, Haiti, Honduras, Nepal, Sudan, and Yemen. We reviewed documentation that DHS and State provided for a judgmental, nongeneralizable sample of eight countries for which DHS rendered TPS decisions in fiscal years 2014 through 2018 (El Salvador, Haiti, Honduras, Nepal, Nicaragua, Sudan, Syria, and Yemen); the TPS decisions for these eight countries represented 26 of a total of 42 TPS decisions for 13 countries in that period. We selected this sample to represent a range of decision types and designation reasons, among other factors. While this sample cannot be generalized to the countries or decisions we did not review, it provided valuable information about the approach that DHS uses for TPS reviews. The primary documents that we reviewed for each decision included information about country conditions that USCIS and State had compiled and recommendations that USCIS and State leadership had provided to the Secretary of Homeland Security. Some of the documents that we received had been redacted because of ongoing litigation related to TPS. Table 3 provides additional details of the decisions in our judgmental sample. In addition, we reviewed examples of other information that may be provided for a TPS review, including examples of input from other DHS components, other U.S. agencies, the White House, members of Congress, foreign governments, and nongovernmental organizations. Specifically, we received examples of this type of information for each of the eight countries in our judgmental, nongeneralizable sample, representing 15 of the 26 TPS decisions. For example, this information included immigration data and internal intelligence analyses compiled by DHS s Office of Immigration Statistics, U.S. Customs and Border Protection, U.S. Immigration and Customs Enforcement, and Office of Intelligence and Analysis. We also reviewed examples of updates provided by senior Department of Defense officials for the Secretary of Homeland Security regarding the security situation in a country; technical input from the Department of Health and Human Services Centers for Disease Control and Prevention about the status of an epidemic in a country; and information from the U.S. Agency for International Development about country conditions on the ground. In addition, we interviewed officials from these three agencies regarding the types of information that they may provide for TPS reviews. Further, we reviewed examples of letters from members of Congress, foreign government officials, and nongovernmental organizations related to TPS reviews. 
Moreover, we reviewed examples of briefing or meeting agendas and related materials for internal and external briefings, including external briefings with White House officials, foreign government officials, and nongovernmental organizations. To examine DHS s public communication regarding TPS decisions and related information, including work authorization, we reviewed DHS s public communications related to TPS, including Federal Register notices, press releases, and USCIS s website, among other information. We analyzed information in Federal Register notices published from November 29, 1990, through October 1, 2019 (the most recent available at the time of our review), to determine the timing of notices for TPS decisions and the types of information included in the notices. We reviewed examples of USCIS s Office of Public Affairs guidance for public communication of TPS decisions. We also interviewed USCIS officials regarding the mechanisms that DHS used to communicate TPS decisions and related information, including DHS s process for drafting and publishing Federal Register notices. Further, we examined DHS s guidance and procedures as of fiscal year 2019 for communicating TPS employment authorization, including automatic extensions of employment authorization. We reviewed USCIS s public communications related to automatic extensions of TPS employment authorization for both beneficiaries and employers in Federal Register notices, individually mailed notifications, an employer handbook, and information published on USCIS s website. We interviewed USCIS officials regarding USCIS s approach to communicating TPS employment authorization, including automatic extensions. We also reviewed information from the Department of Justice Civil Rights Division s website related to confusion over automatic extensions of employment authorization documents for TPS beneficiaries. Additionally, we reviewed a letter to USCIS signed by 70 law professors and scholars related to instances of employers terminating TPS beneficiaries. Finally, we compared DHS s guidance and procedures with federal internal control standards related to documenting policies and externally communicating information. We conducted this performance audit from September 2018 to March 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Numbers and Characteristics of Temporary Protected Status Beneficiaries, Fiscal Years 2000-2018 Table 4 lists the numbers of TPS beneficiaries, by country of citizenship, in fiscal years 2000 through 2018. During this period, the country with the largest number of TPS beneficiaries in any given fiscal year was El Salvador, with 262,262 in fiscal year 2010; followed by Honduras, with 85,759 in fiscal year 2007; and Haiti, with 58,294 in fiscal year 2014. In contrast, during the same period, Montserrat had the smallest maximum number of TPS beneficiaries in any given fiscal year, with a maximum of 21 in fiscal year 2004; followed by Angola, with a maximum of 47 in fiscal year 2002; and Burundi, with a maximum of 50 in fiscal year 2007. Appendix III: Comments from Department of Homeland Security Appendix IV: Comments from U.S. 
Agency for International Development Appendix V: GAO Contacts and Staff Acknowledgments <9. GAO Contacts> <10. Staff Acknowledgments> In addition to the contacts named above, Miriam Carroll Fenton and Taylor Matheson (Assistant Directors), Elisabeth Helmer, Cristina Norland, Ben DeYoung, Martin De Alteriis, Neil Doherty, Jenny Grover, Reid Lowe, Mary Moutsos, Jan Montgomery, Jon Najmi, Nicole Willems, and Bailey Wong made key contributions to this report. Alana Miller and Danielle Rudstein provided technical assistance.
Why GAO Did This Study
The INA includes provisions for eligible foreign nationals residing in the United States to obtain temporary humanitarian protection from removal, as well as work authorization, when their country of origin is designated for TPS. Since 1990, nationals of 22 countries have received TPS. The Secretary of Homeland Security may designate a country for TPS after consulting with other agencies and determining that the country meets statutory criteria related to armed conflict, environmental disaster, or extraordinary or temporary conditions that prevent its nationals from returning in safety. The Secretary may designate a country for TPS for periods of 6 to 18 months and can extend a TPS designation if deemed appropriate.
GAO was asked to review the TPS decision process. This report, among other things, (1) describes the approach DHS takes to inform the Secretary of Homeland Security's TPS reviews and (2) examines DHS's communication to the public regarding TPS decisions and related information, including employment authorization. GAO reviewed documentation and data related to TPS decisions, including a nongeneralizable sample of 26 decisions for eight countries in fiscal years 2014 through 2018. GAO selected the countries to reflect various types of TPS decisions, among other factors. GAO also interviewed agency officials.
What GAO Found
The Department of Homeland Security's (DHS) reviews of countries for Temporary Protected Status (TPS) include three main steps, according to DHS and other agencies' documents and officials. First, the Secretary of Homeland Security may initiate a review of a country for TPS designation in response to various triggering factors, such as a request from a foreign government, on the basis of one or more statutory conditions. The Immigration and Nationality Act (INA) requires subsequent reviews after an initial designation. Second, U.S. Citizenship and Immigration Services (USCIS)—which manages and coordinates the TPS review process for DHS—and the Department of State (State) compile country conditions reports and recommendations to inform the Secretary's decision. Although the INA does not prescribe the other agencies that must be consulted for a TPS review,State generally has a role in providing input for the Secretary of Homeland Security's consideration. GAO found DHS collected country conditions reports and recommendations from USCIS and State for all eight of the countries GAO selected for its review. Other DHS components and non-DHS entities may also provide information. Third, under the INA,the Secretary of Homeland Security exercises discretion in deciding whether to initially designate a country for TPS. For an existing designation, the Secretary determines whether country conditions warrant an extension or termination of TPS. DHS provides official notice of decisions in the Federal Register.
DHS has communicated TPS decisions to the public through required Federal Register notices as well as other mechanisms. However, DHS has not provided consistent guidance regarding mechanisms it uses to communicate automatic extensions of TPS employment authorization documents. USCIS officials stated that the agency has typically communicated these extensions of documents for TPS beneficiaries through Federal Register notices. However, for five recent automatic extensions, USCIS instead mailed individual notifications to thousands of beneficiaries. USCIS guidance on its website identifies the individual notifications as a mechanism for communicating automatic extensions, but an employers' handbook and related guidance do not. As a result, some employers reportedly terminated TPS beneficiaries' employment because the employers did not understand or accept the notifications as proof of employment authorization. Consistent guidance about the mechanisms USCIS uses could help reduce the risk that TPS beneficiaries will lose their jobs because of confusion about their authorization to work in the United States.
What GAO Recommends
GAO recommends USCIS consistently identify in published guidance the mechanisms used to communicate automatic extensions of TPS employment authorization documents. DHS concurred with GAO's recommendation.
<1. Background> <1.1. Federal Onshore Oil, Gas, and Coal Lease Terms and Conditions> BLM leases federal lands to private entities for oil and gas development generally through auctions. In the auctions, if BLM receives any bids that are at or above the minimum acceptable bid amount of $2 an acre (called bonus bids), the lease is awarded to the highest bidder (leases obtained in this way are called competitive leases). Tracts of land that do not receive a bid at the auction are made available noncompetitively for a period of 2 years on a first-come, first-served basis (leases obtained in this way are called noncompetitive leases). The government collects revenues from oil and gas leases under terms and conditions that are specified in the lease, including rental fees and royalties. Annual rental fees are fixed fees paid by lessees until production begins on the leased land or, when no production occurs, until the end of the period specified in the lease. For federal oil and gas leases, generally the rental rate is $1.50 per acre for the first 5 years, and $2 per acre each year thereafter. Once production of the resource starts, the lessees pay the federal government royalties of at least 12.5 percent of the value of production. Oil and gas parcels are generally leased for a primary term of 10 years, but lease terms may be extended if, for example, oil or gas is produced in paying quantities. A productive lease remains in effect until the lease is no longer capable of producing in paying quantities. The fiscal system refers to the terms and conditions under which the federal government collects revenues from production on leases, including from payments specified in the lease (e.g., royalties and rental payments). We reported in December 2013 that, since 1990, all federal coal leasing has taken place through a lease-by-application process, where coal companies propose tracts of federal lands to be put up for lease by BLM. BLM is required to announce forthcoming lease sales, and the announcement notes where interested stakeholders can view lease sale details, including bidding instructions and the terms and conditions of the lease. BLM leases a tract to the highest qualified bidder, as long as its bonus bid meets or exceeds $100 per acre and BLM's confidential estimate of fair market value. Annual rental fees are at least $3 an acre, and royalties are 8 percent of the sale price for coal produced from underground mines and at least 12.5 percent of the sale price for coal produced from surface mines. Tracts are leased for an initial 20-year period, as long as the lessee produces coal in commercial quantities within a 10-year period and meets the condition of continued operations. <1.2. Oil, Gas, and Coal Bonding> Bonds can help ensure lands affected by energy development are properly reclaimed, that is, according to BLM, restored to as close to their original natural states as possible. Bonds provide funds that can be used by the relevant regulatory authority to reclaim such lands if the operator or other liable party does not do so. For oil and gas developed on federal lands, BLM requires operators to provide a bond before certain drilling operations begin. Wells are considered orphaned and fall to BLM to reclaim if they are not reclaimed by their operators, there are no other responsible or liable parties to do so, and their bonds are too low to cover reclamation costs.
For surface coal mining, the Surface Mining Control and Reclamation Act of 1977 (SMCRA) requires operators to submit a bond to either Interior s Office of Surface Mining Reclamation and Enforcement (OSMRE) or an approved state regulatory authority before mining operations begin for development on federal or nonfederal lands. Among other bonding options, coal operators may choose to self-bond, whereby the operator promises to pay reclamation costs. <1.3. Federal Oil and Gas Royalty Compliance> Royalties that companies pay on the sale of oil and natural gas extracted from leased federal lands and waters constitute a significant source of revenue for the federal government. The Federal Oil and Gas Royalty Management Act of 1982 requires, among other things, that Interior establish a comprehensive inspection, collection, and fiscal and production accounting and auditing system for these revenues. In particular, the act requires Interior to establish such a system to provide the capability of accurately determining oil and gas royalties, among other moneys owed, and to collect and account for such amounts in a timely manner. To accomplish this, Interior tasks its Office of Natural Resources Revenue (ONRR) with collecting and verifying the accuracy of royalties paid by companies that produce oil and gas from over 26,000 federal leases. Each month, these oil and gas companies are to self-report data to ONRR on the amount of oil and gas they produced and sold, the value of this production, and the amount of royalties that they owe to the federal government. To ensure that the data provided to ONRR are accurate and all royalties are being paid, ONRR relies on its compliance program. Under this program, ONRR initiates compliance activities by selecting companies and properties for review to assess the accuracy of their royalty data and their compliance with all relevant laws and regulations. <1.4. Natural Gas Emissions on Federal Lands> Under the Minerals Leasing Act of 1920, Interior is authorized to collect royalties on oil and gas produced on federal lands, and BLM is required to ensure that operators producing oil and gas take all reasonable precautions to prevent the waste of oil or gas developed on these lands. While most of the natural gas produced on leased federal lands and waters is sold and therefore royalties are paid on it, some is lost during production for various reasons, such as leaks or intentional releases for ongoing operational or safety procedures. Natural gas that is released for operational or safety procedures is released directly into the atmosphere (vented) or burned (flared). In addition to gas that is lost during production, some natural gas may be used to operate equipment on the lease (lease use). We use the term natural gas emissions to refer to vented, flared, and lease use gas collectively. Interior has generally exempted operators from paying royalties on reported natural gas emissions, and so such emissions represent a loss of royalty revenues for the federal government. Venting and flaring natural gas also has environmental implications as it adds greenhouse gases to the atmosphere primarily methane and carbon dioxide. Natural gas consists primarily of methane, and methane (which is released through venting) is 34 times more potent by weight than carbon dioxide (which is released through flaring) in its ability to warm the atmosphere over a 100-year period, and 86 times more potent over a 20-year period, according to the Intergovernmental Panel on Climate Change. 
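As context for the royalty reporting process described above, the following sketch shows the basic arithmetic of a royalty obligation: the reported value of production multiplied by the lease royalty rate. The lease reports and dollar amounts are hypothetical, and the sketch is not a representation of ONRR's actual systems or report formats.

```python
from typing import Dict, List

def royalty_due(sales_value_dollars: float, royalty_rate: float = 0.125) -> float:
    """Royalty owed: reported value of production times the lease royalty rate."""
    return sales_value_dollars * royalty_rate

# Hypothetical monthly reports from two leases at the 12.5 percent minimum rate.
monthly_reports: List[Dict[str, float]] = [
    {"sales_value": 400_000.0, "rate": 0.125},
    {"sales_value": 250_000.0, "rate": 0.125},
]
total = sum(royalty_due(r["sales_value"], r["rate"]) for r in monthly_reports)
print(f"Total royalties owed for the month: ${total:,.0f}")  # $81,250
```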
<2. Key Terms and Conditions for Federal Oil, Gas, and Coal Leases Are the Same as They Were Decades Ago, though Market Conditions Have Changed> Key federal lease terms are the same as they were decades ago, and Interior has not adjusted lease terms for inflation or other factors, such as changes in market conditions, which may affect the government s fair return. In addition, preliminary observations from our ongoing work indicate that federal oil and gas lease terms and practices differ from those of selected states, with selected state governments generally charging higher royalty rates on production on state lands than the federal government charges for production on federal lands. We have previously recommended that Interior should establish procedures for determining when to conduct periodic assessments of the oil and gas fiscal system, including how the federal government s share of revenues compares with those of other resource owners. Interior has established procedures for determining when to conduct periodic assessments of the oil and gas fiscal system, and according to its policy, BLM plans to complete the next assessment in late 2019. <2.1. Key Federal Lease Terms Are the Same as They Were Decades Ago though Market Conditions Have Changed> Key federal lease terms are the same as statutory minimums established decades ago. For onshore oil and gas leases, the minimum royalty rate of 12.5 percent has been in place since 1920, and minimum bonus bids and rental rates are currently set at the statutory minimums established in 1987. For coal, the royalty rate for surface mining is set at the statutory minimum set in the Mineral Leasing Act. We previously found that royalty rates for oil and gas leases have not been adjusted to account for changes in market conditions, and our preliminary analysis for our ongoing work suggests that adjusting rental rates for inflation could generate increased federal revenues. We reported in December 2013 that Interior offers onshore leases with lease terms terms lasting the life of the lease that have not been adjusted in response to changing market conditions, potentially foregoing a considerable amount of revenue. Energy markets have also changed since federal oil and gas lease terms were established. For example, we reported in June 2017 that, according to the U.S. Energy Information Administration, almost all of the recent increase in overall oil and gas production had centered on oil and gas located in shale and other tight rock geologic formations, spurred by advances in production technologies such as horizontal drilling and hydraulic fracturing. In addition, we estimate that, based on preliminary observations, the rental rate would be $2.91 per acre if it were adjusted for inflation, which would have generated about $3.6 million for the first year for new leases issued in fiscal year 2018, or an additional $1.8 million. In June 2017, we reported that raising federal royalty rates for onshore oil, gas, and coal resources could decrease oil and gas production on federal lands by either a small amount or not at all but could increase overall federal revenue, according to studies we reviewed and stakeholders we interviewed. The two oil and gas studies we reviewed for that report modeled the effects of different policy scenarios on oil and gas production on federal lands and estimated that raising the federal royalty rate could increase net federal revenue from $5 million to $38 million per year. 
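A minimal sketch of the inflation-adjustment arithmetic behind the preliminary rental estimate above follows. The acreage figure is backed out from GAO's approximately $3.6 million first-year estimate and is therefore only approximate; the sketch simply reproduces the comparison between the current $1.50-per-acre rate and the inflation-adjusted $2.91-per-acre rate, so its outputs round to the figures cited above rather than matching them exactly.

```python
CURRENT_RATE = 1.50    # dollars per acre, first 5 years of a federal lease
ADJUSTED_RATE = 2.91   # GAO's preliminary inflation-adjusted estimate

# Approximate acreage backed out from GAO's ~$3.6 million first-year figure.
approx_acres_fy2018 = 3_600_000 / ADJUSTED_RATE

rent_at_current_rate = approx_acres_fy2018 * CURRENT_RATE
rent_at_adjusted_rate = approx_acres_fy2018 * ADJUSTED_RATE
print(f"First-year rent at $1.50/acre: ${rent_at_current_rate:,.0f}")
print(f"First-year rent at $2.91/acre: ${rent_at_adjusted_rate:,.0f}")
print(f"Additional revenue:            ${rent_at_adjusted_rate - rent_at_current_rate:,.0f}")
```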
One of the studies stated that net federal revenue would increase under three scenarios that modeled raising the royalty rate from the current 12.5 percent to 16.67 percent, 18.75 percent, or 22.5 percent. The other study noted that the effect on federal revenue would initially be small but would increase over time. The two coal studies we reviewed for our June 2017 report analyzed the effects of different policy scenarios on coal production on federal lands, and both studies suggested that a higher royalty rate could lead to an increase in federal revenues. Specifically, one study suggested that raising the royalty rate to 17 percent or 29 percent might increase federal revenue by up to $365 million per year after 2025. The other study suggested that increasing the effective rate could bring in an additional $141 million per year in royalty revenue. However, we reported that the extent of these effects was uncertain and depended, according to stakeholders, on several other factors, such as market conditions and prices. <2.2. Federal Onshore Lease Terms Differ from Those of Selected States> Based on preliminary observations from our ongoing work, federal onshore lease terms and practices for oil and gas development differ from those of selected states (see table 1). For example, selected state governments tend to charge higher royalty rates for oil and gas development on state lands than the federal government charges for production on federal lands. For coal production, we reported in June 2017 that royalty rates charged by selected states were generally the same as federal rates. Royalty rates for the six states representing over 90 percent of total federal oil, gas, and coal production in fiscal year 2015 ranged from 8 to 12.5 percent for surface coal and from 8 to 10 percent for underground coal. Other factors influence the competitiveness of the development of oil and gas resources on federal land versus nonfederal land. We also reported in June 2017 that some stakeholders we spoke with stated that there was already a higher regulatory burden for oil and gas companies to develop resources on federal lands than on nonfederal lands. For coal, BLM officials stated that assuming the royalty rate was the same the main difference between federal and nonfederal coal was the additional regulatory burden of producing on federal lands. In our ongoing work examining the oil and gas lease permitting process, our preliminary interviews indicate that drilling permit fees are higher for federal lands than for the states we reviewed. However, operators we interviewed said that the filing fee was not an important or major factor in their decisions to apply for federal drilling permits. In addition to regulatory differences, in June 2017 we reported that a few stakeholders told us that competitiveness of federal lands for development depends on the location of the best resources such as areas with low exploration and production costs. We also reported in June 2017 that most areas with major U.S. tight oil and shale gas plays areas of known oil and gas sharing similar properties and major U.S. coal basins do not overlap with federal lands. <2.3. 
Interior Has Taken Steps to Assess Its Oil and Gas Lease Terms and Conditions> We have reported on steps Interior has taken to assess its oil and gas fiscal system the terms and conditions under which the federal government collects revenues from production on leases and have made recommendations intended to help ensure that the federal government receives a fair return on its oil and gas resources. For example, in September 2008, we found that Interior had not evaluated the federal oil and gas fiscal system for over 25 years and recommended that a periodic assessment was needed. In response to our September 2008 report, Interior contracted for a study that was completed in October 2011 and compared the federal oil and gas fiscal systems of selected federal oil and gas regions to that of other resource owners. However, in December 2013, we reported that Interior officials said that the study was not adequate to determine next steps for onshore lease terms. Interior has considered making changes to improve its management of federal oil and gas resources. For example, in April 2015, BLM sought comments on a number of potential reforms to the oil and gas leasing process, including changing royalty rates, but took no further action. In November 2016, BLM did issue the Methane and Waste Prevention Rule, which incorporated flexibility for the bureau to make changes to onshore royalty rates, as we recommended in December 2013. Officials told us in October 2018 that they were not aware of BLM issuing any recent competitive leases with a royalty rate higher than 12.5 percent. In addition, in March 2017, the Secretary of the Interior established the Royalty Policy Committee (committee), which was to be comprised of stakeholders representing federal agencies, states, Indian tribes, mining and energy, academia, and public interest groups. The purpose of the committee was to advise the Secretary on the fair market value of mineral resources developed on federal lands, among other issues. The committee met four times over the 2 years it was in effect and approved recommendations related to Interior s oversight of its oil and gas programs. This included two recommendations to conduct studies that compare the U.S. oil and gas fiscal system to certain other countries fiscal systems. However, a U.S. District Court found that the establishment of the committee violated the law and prohibited Interior from relying on any of the committee s recommendations. Interior has established procedures for assessing the oil and gas fiscal system. In December 2013, we found that Interior did not have documented procedures for determining when to conduct additional periodic assessments of the oil and gas fiscal system, and we recommended that Interior put such procedures in place. Further, we reported that documented procedures could help Interior ensure that its evaluations take relevant factors into consideration. These factors may change over time as the market for oil and gas, the technologies used to explore and produce oil and gas, or the broader economic climate changes. In August 2016, in response to our recommendation, Interior reported that it had developed documented procedures for conducting assessments of the oil and gas fiscal system, fully implementing our recommendation. To meet this recommendation, BLM established a fiscal assessment policy that describes actions it will take every 3 years and every 10 years. Based on this policy, the next assessment is expected to be completed in late 2019. 
According to the policy, every 3 years BLM plans to conduct a review of the oil and gas fiscal systems of the states with significant oil and gas leasing activity where there is also significant federal onshore leasing activity. The policy states that every 10 years depending on available appropriations Interior plans to co-sponsor with the Bureau of Ocean Energy Management an independent study of government take from lease and development of federal oil and gas resources. In February 2019, as part of our ongoing work examining oil and gas leases, BLM officials told us that the bureau had contracted for an external fiscal assessment in 2018 and that the report would be completed in mid-2019. According to Interior officials, the study is undergoing final review. <3. Weaknesses in Coal, Oil, and Gas Bonding Present Financial Risks to the Federal Government> We have reported that weaknesses with bonds for coal mining and for oil and gas development pose a financial risk to the federal government as laws, regulations, or agency practices have not been adjusted to reflect current economic circumstances. We have also reported that BLM has no mechanism to pay for reclaiming well sites that operators have not reclaimed. <3.1. Coal Self-Bonding Presents a Financial Risk to the Government> We reported in March 2018 that self-bonding for coal mining creates a financial risk for the federal government. If specific conditions are met, SMCRA allows states to let an operator guarantee the cost for reclaiming a mine on the basis of its own finances a practice known as self- bonding rather than by securing a bond through another company or providing collateral, such as cash, letters of credit, or real property. We reported that as of 2017, eight states held coal self-bonds worth over $1.1 billion. In the event a self-bonded operator becomes bankrupt and the regulatory authority is not able to collect sufficient funds to complete the reclamation plan, the burden could fall on taxpayers to fund reclamation. According to stakeholders we interviewed for our March 2018 report, self- bonding for coal mining presents a financial risk to the federal government for several reasons. It is difficult to (1) ascertain the financial health of an operator, in part, because greater financial expertise is often now needed to evaluate the complex financial structures of large coal companies as compared to when self-bonding regulations were first approved in 1983; (2) determine whether an operator qualifies for self- bonding; and (3) secure a replacement for existing self-bonds when an operator no longer qualifies. For example, some stakeholders we interviewed told us that the risk from self-bonding is greater now than when OSMRE first approved its self- bonding regulations in 1983; at that time, the office noted there were companies financially sound enough that the probability of bankruptcy was small. However, according to an August 2016 OSMRE policy advisory, three of the largest coal companies in the United States declared bankruptcy in 2015 and 2016, and these companies held approximately $2 billion in self-bonds at the time. Because SMCRA explicitly allows states to decide whether to accept self-bonds, eliminating the risk that self-bonds pose to the federal government and states would require SMCRA to be amended. In our March 2018 report, we recommended that Congress consider amending SMCRA to eliminate self-bonding. Interior did not provide written comments on the report. <3.2. 
Oil and Gas Bonds Do Not Provide Sufficient Financial Assurance to Prevent Orphaned Wells> We reported in September 2019 that bonds held by BLM have not provided sufficient financial assurance to prevent orphaned oil and gas wells on federal lands. Specifically, we reported that BLM identified 89 new orphaned wells from July 2017 through April 2019, and 13 BLM field offices identified about $46 million in estimated potential reclamation costs associated with orphaned wells and inactive wells that officials deemed to be at risk of becoming orphaned in 2018. Although BLM does not estimate reclamation costs for all wells, it has estimated reclamation costs for thousands of wells whose operators have filed for bankruptcy. Based on our analysis of these estimates, we identified two cost scenarios: low-cost wells typically cost about $20,000 to reclaim, and high-cost wells typically cost about $145,000 to reclaim. In our September 2019 report, based on our cost scenarios described above, we found that most bonds (84 percent) that we were able to link to wells in BLM data are likely too low to fund reclamation costs for all the wells they cover. Bonds generally do not reflect reclamation costs because most bonds are set at regulatory minimum values, and these minimums have not been adjusted to account for inflation since they were first set in the 1950s and 1960s, as shown in figure 1. In addition, these minimums do not account for variables, such as the number of wells they cover, or other characteristics that affect reclamation costs, such as increasing well depth. In addition to the wells identified by BLM as orphaned over the last decade, in our September 2019 report we identified inactive wells at increased risk of becoming orphaned and found their bonds are often not sufficient to reclaim the wells. Our analysis of BLM bond value data as of May 2018 and ONRR production data as of June 2017 revealed that a significant number of inactive wells remain unplugged and could be at increased risk of becoming orphaned. Specifically, we identified 2,294 wells that may be at increased risk of becoming orphaned because they have not produced since June 2008 and have not been reclaimed. Since these at-risk wells are unlikely to produce again, an operator bankruptcy could lead to orphaned wells unless bonds are adequate to reclaim them. In our September 2019 report, we stated that if the number of at-risk wells is multiplied by our low-cost reclamation scenario of $20,000, it implies a cost of about $46 million to reclaim these wells. If the number of these wells is multiplied by our high-cost reclamation scenario of $145,000, it implies a cost of about $333 million. When we further analyzed the available bonds for these at-risk wells, we found that most of these wells (about 77 percent) had bonds that would be too low to fully reclaim the at-risk wells under our low-cost scenario. More than 97 percent of these at-risk wells have bonds that would not fully reclaim the wells under our high-cost scenario. Without taking steps to adjust bond levels to more closely reflect expected reclamation costs, BLM faces ongoing risks that not all wells will be completely and timely reclaimed, as required by law. We recommended in our September 2019 report that BLM take steps to adjust bond levels to more closely reflect expected reclamation costs. BLM concurred with our recommendation. 
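The scenario arithmetic above can be reproduced directly. In the sketch below, the at-risk well count and per-well cost scenarios come from this report, while the bond value and wells-covered figures in the sufficiency check are hypothetical examples rather than actual BLM bond data.

```python
AT_RISK_WELLS = 2_294
LOW_COST_PER_WELL = 20_000     # low-cost reclamation scenario
HIGH_COST_PER_WELL = 145_000   # high-cost reclamation scenario

low_total = AT_RISK_WELLS * LOW_COST_PER_WELL     # roughly $46 million
high_total = AT_RISK_WELLS * HIGH_COST_PER_WELL   # roughly $333 million
print(f"Implied cost, low scenario:  ${low_total:,}")
print(f"Implied cost, high scenario: ${high_total:,}")

def bond_is_sufficient(bond_value: float, wells_covered: int,
                       cost_per_well: float) -> bool:
    """Whether a bond could fund reclamation of every well it covers."""
    return bond_value >= wells_covered * cost_per_well

# Hypothetical example: a $25,000 bond covering 10 inactive wells falls
# short even under the low-cost scenario.
print(bond_is_sufficient(25_000, 10, LOW_COST_PER_WELL))
```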
However, while BLM stated it had updated its bond review policy, it is unclear whether the updated policy will improve BLM s ability to secure bond increases. <3.3. BLM Does Not Currently Assess User Fees to Fund Orphaned Well Reclamation> In addition to fulfilling its responsibility to prevent new orphaned wells, it falls to BLM to reclaim wells that are currently orphaned, and BLM has not always been able to do so quickly. For example, we reported in September 2019 that there were 51 wells that BLM identified as orphaned in 2009, and that they had not been reclaimed as of April 2019. As noted above, BLM faces significant estimated potential reclamation costs associated with orphaned wells and inactive wells. The Energy Policy Act of 2005 directs Interior to establish a program that, among other things, provides for the identification and recovery of reclamation costs from persons or other entities currently providing a bond or other financial assurance for an oil or gas well that is orphaned, abandoned, or idled. In our September 2019 report we described one way in which BLM may be able to accomplish this is through the imposition of user fees, such as at the time an operator submits an application for permit to drill or as an annual fee for inactive wells. Some states, such as Wyoming, have dedicated funds for reclaiming orphaned wells. According to one official we interviewed with the Wyoming Oil and Gas Conservation Commission, the Commission has reclaimed approximately 2,215 wells since 2014 under its Orphan Well Program, which is funded through a conservation tax assessed on the sale of oil and natural gas produced in the state. Developing a mechanism to obtain funds from operators for such costs could help ensure that BLM can reclaim wells completely and timely. In commenting on a draft of our September 2019 report, BLM stated that it does not have the authority to seek or collect fees from lease operators to reclaim orphaned wells. We continue to believe a mechanism for BLM to obtain funds from oil and gas operators to cover the costs of reclamation of orphaned wells could help ensure BLM can completely and timely reclaim these wells, some of which have been orphaned for at least 10 years. Accordingly, in our September 2019 report, we recommended that Congress consider giving BLM the authority to obtain funds from operators to reclaim orphaned wells and requiring BLM to implement a mechanism to obtain sufficient funds from operators for reclaiming orphaned wells. <4. ONRR Compliance Goals May Not Align with the Agency Mission to Account for Royalty Payments, Despite Agency Efforts to Improve Operations> In May 2019, we found that ONRR had begun implementing several initiatives to help the agency operate more effectively, according to ONRR officials. For example, in March 2017, ONRR initiated Boldly Go, an effort to assess its organizational structure and identify and implement potential improvements. ONRR was also in the process of implementing a new electronic compliance case management and work paper tool referred to as the Operations and Management Tool. According to ONRR documents, this tool was to combine multiple systems into one and was intended to serve a variety of functions. ONRR documents stated that the tool is designed to be a single, standardized system that reduces manual data entry, creates a single system of record for ONRR case data, offers checks to eliminate data entry errors, and provides greater transparency for outside auditors. 
The agency also introduced a new auditor training curriculum in April 2018. In our May 2019 report, we also found that ONRR reported generally meeting its annual royalty compliance goals for fiscal years 2010 through 2017. However, we found that while ONRR s fiscal year 2017 compliance goals could be useful for assessing certain aspects of ONRR s performance, they may not have been effectively aligned with the agency s statutory requirements or its mission to account for all royalty payments. For example, ONRR s fiscal year 2017 compliance goals did not sufficiently address its mission to collect, account for, and verify revenues, in part, because its goals did not address accuracy, such as a coverage goal (e.g., identifying the number of companies or percentage of royalties subject to compliance activities over a set period). We stated that by establishing a coverage goal that aligns with the agency s mission, ONRR could have additional assurance that its compliance program was assessing the extent to which oil and gas royalty payments were accurate. Overall, we made seven recommendations, including that ONRR establish an accuracy goal that addresses coverage that aligns with its mission. Interior concurred with our recommendations. <5. Limitations Exist in Interior s Accounting and Management of Natural Gas Emissions> We issued reports in October 2010 and July 2016 that included several recommendations regarding steps Interior should take to better account for and manage natural gas emissions associated with oil and gas development. In October 2010, we reported that data collected by Interior to track venting and flaring on federal leases likely underestimated venting and flaring because they do not account for all sources of lost gas. For onshore federal leases, operators reported to Interior that about 0.13 percent of produced gas was vented or flared. Estimates from the Environmental Protection Agency and the Western Regional Air Partnership showed volumes as high as 30 times higher. We reported that economically capturing onshore vented and flared natural gas with then-available control technologies could increase federal royalty payments by $23 million annually. We also found limitations in how Interior was overseeing venting and flaring on federal leases, and made five recommendations geared toward ensuring that Interior had a complete picture of venting and flaring and took steps to reduce this lost gas where economic to do so. Interior generally concurred with our recommendations. In July 2016, we found that limitations in Interior s guidance for oil and gas operators regarding their reporting requirements could hinder the extent to which the agency can account for natural gas emissions on federal lands. Without such data, Interior could not ensure that operators were minimizing waste and that BLM was collecting all royalties that were owed to the federal government. We recommended, among other things, that BLM provide additional guidance for operators on how to estimate natural gas emissions from oil and gas produced on federal leases. BLM concurred with the recommendation. Interior has taken steps to implement our past recommendations regarding the control of natural gas. Accounting for natural gas is important for ensuring that the federal government receives all royalties it is due and because methane which comprises approximately 80 percent of natural gas emissions is a potent greenhouse gas that has the ability to warm the atmosphere. 
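The following sketch illustrates how reported versus estimated loss rates translate into forgone royalties when vented and flared gas is exempt from royalties. The production volume and gas price are hypothetical placeholders; only the 0.13 percent reported loss rate, the roughly 30-times-higher estimate, and the 12.5 percent royalty rate come from the discussion above.

```python
def forgone_royalties(production_mcf: float, lost_fraction: float,
                      price_per_mcf: float, royalty_rate: float = 0.125) -> float:
    """Royalty value of gas that is vented or flared instead of sold."""
    lost_mcf = production_mcf * lost_fraction
    return lost_mcf * price_per_mcf * royalty_rate

# Hypothetical production and price, compared at the operator-reported loss
# rate (0.13 percent) and at an estimate 30 times higher.
production = 3_000_000_000   # mcf of gas produced, illustrative only
price = 3.00                 # dollars per mcf, illustrative only

print(forgone_royalties(production, 0.0013, price))        # reported loss rate
print(forgone_royalties(production, 0.0013 * 30, price))   # higher estimate
```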
In addition, we reported in July 2016 that increased oil production in recent years has resulted in an increase in flared gas in certain regions where there is limited infrastructure to transport or process gas associated with oil production. In November 2016, Interior issued regulations intended to reduce wasteful emissions from onshore oil and gas production that were consistent with our recommendations. In June 2017, however, Interior postponed the compliance dates for relevant sections of the new regulations and then suspended certain requirements in December 2017. Interior subsequently issued revised regulations in September 2018 that are not consistent with the findings and recommendations in our prior work. In our prior work and preliminary observations in our ongoing work, we have found that some states have requirements that are more stringent than BLM s regarding accounting for and managing natural gas emissions. For example, we reported in July 2016 that North Dakota targeted the amount of gas flared from two geologic formations in the state by imposing restrictions on the amount of gas operators may flare from existing and new sources. We also reported that North Dakota requires operators to include a gas capture plan when they apply to drill a new oil well. According to state officials we interviewed for our report, gas capture plans help facilitate discussions between oil producers and firms that process and transport gas and have improved the speed at which new wells are connected to gas gathering infrastructure. In the course of our ongoing work, we obtained documents indicating that per its regulations, North Dakota requires all gas produced and used on a lease for fuel purposes or that is flared must be measured or estimated and reported monthly, and that all vented gas be burned and the volume reported. In addition, based on preliminary observations in our ongoing work, Colorado and Texas both charge royalties on vented and flared gas volumes. In the course of our ongoing work, we obtained documents indicating that the Colorado Oil and Gas Conservation Commission, which regulates oil and gas activity in the state, addresses both venting and flaring as well as leaks. Colorado officials we interviewed with the State Land Board told us in September 2019 that, since 2018, the state charges royalties on all vented and flared gas volumes, with certain exceptions. These officials told us that prior to 2018, vented and flared gas could be exempt from royalties, but that it was uncommon. In addition, in Texas, a state official we interviewed told us that vented or flared volumes must be reported monthly and that charging royalties on these volumes increases revenues. Chairman Lowenthal, Ranking Member Gosar, and Members of the Subcommittee, this completes my prepared testimony. I would be pleased to respond to any questions you may have at this time. <6. GAO Contacts and Staff Acknowledgments> If you or your staff have any questions about this testimony, please contact Frank Rusco, Director, Natural Resources and Environment at (202) 512-3841 or RuscoF@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. GAO staff who made key contributions to this testimony are Quindi Franco (Assistant Director), Marie Bancroft (Analyst-In- Charge), Antoinette Capaccio, John Delicath, Jonathan Dent, Elizabeth Erdmann, Glenn C. Fischer, Emily Gamelin, William Gerard, Cindy Gilbert, Holly Halifax, Richard P. 
Johnson, Christine Kehr, Michael Kendix, Greg Marchand, Jon Muchin, Marietta Mayfield Revesz, Dan Royer, and Kiki Theodoropoulos. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study
Interior oversees energy production on federal lands and waters and is responsible for ensuring taxpayers receive a fair return for access to federal energy resources. Oil, gas, and coal on federal lands provide an important source of energy for the United States; they create jobs; and they generate billions of dollars in revenues that are shared between federal, state, and tribal governments. However, when not managed properly, energy production on federal lands can create risks to public health and the environment, such as contaminated surface water. In February 2011, GAO designated Interior's management of federal oil and gas resources as a program at high risk for fraud, waste, abuse, and mismanagement or the need for transformation. This testimony discusses GAO's work related to ensuring a fair return on resources from federal lands. To do this work, GAO drew on reports issued from May 2007 through September 2019 and preliminary observations from ongoing work. GAO reviewed relevant federal and state laws, regulations, and policies; analyzed federal data; and interviewed federal, state, and industry officials, among others.
What GAO Found
GAO's prior and ongoing work found challenges related to ensuring a fair return for oil, gas, and coal developed on federal lands in areas, including the following:
Oil, Gas, and Coal Lease Terms and Conditions. Key federal lease terms are the same as they were decades ago, and Interior has not adjusted them for inflation or other factors that may affect the federal government's fair return. In June 2017, GAO reported that raising federal royalty rates—a lease term that defines a percentage of the value of production paid to the government—for onshore oil, gas, and coal resources could decrease production on federal lands by a small amount or not at all but could increase overall federal revenue. Also, preliminary observations from GAO's ongoing work indicate that selected states charge royalty rates for oil and gas produced on state lands at a higher rate than the federal government charges for production on federal lands.
Oil, Gas, and Coal Bonding. GAO found in September 2019 that oil and gas bonds do not provide sufficient financial assurance because, among other things, most individual, statewide, and nationwide lease bonds are set at regulatory minimum values that have not been adjusted for inflation since the 1950s and 1960s (see figure). Further, GAO reported in March 2018 that coal self-bonding (where an operator promises to pay reclamation costs without providing collateral) poses financial risks to the federal government. Bonds provide funds that can be used to reclaim lands—restore them as close to their original natural states as possible—if an operator or other liable party does not do so.
Natural Gas Emissions. In October 2010, GAO reported that data collected by Interior likely underestimated venting and flaring because they did not account for all sources of lost gas. GAO reported that economically capturing vented and flared natural gas could increase federal royalty payments by $23 million annually and made recommendations to help Interior better account for and manage emissions. In November 2016, Interior issued regulations consistent with GAO's recommendations, but Interior has since issued revised regulations, which are inconsistent with GAO's recommendations.
What GAO Recommends
For the reports discussed in this testimony, GAO has made 20 recommendations and three matters for congressional consideration. Interior has taken steps to implement a number of these recommendations, but 10 recommendations and two matters for congressional consideration remain unimplemented, presenting opportunities to continue to improve management of energy resources on federal lands.
<1. Background> The Medicaid Drug Rebate Program was established through the Omnibus Budget Reconciliation Act of 1990 and requires drug manufacturers to pay rebates to states on outpatient drugs as a condition of having their drugs covered by Medicaid. The 340B Program, named for the statutory provision authorizing it in the Public Health Service Act, was created in 1992 following the enactment of the Medicaid Drug Rebate Program and allows covered entities to purchase outpatient drugs at discounted prices. HRSA and CMS both have roles in overseeing compliance with the prohibition on duplicate discounts. <1.1. The Medicaid Drug Rebate Program> The Medicaid Drug Rebate Program helps to offset the federal and state costs of most outpatient prescription drugs dispensed to Medicaid beneficiaries. Under the rebate program, drug manufacturers pay rebates to states as a condition for the federal contribution to Medicaid spending for the manufacturers' outpatient drugs. State Medicaid programs generally must cover all of the drugs of manufacturers that participate in the rebate program. Originally, rebates were available only for drugs paid for by the state on a FFS basis, but the Patient Protection and Affordable Care Act extended the program to outpatient drugs paid for under Medicaid managed care; there are more Medicaid enrollees, prescriptions, and spending for drugs under managed care than FFS. The rebates received for both FFS and managed care are shared by the federal government and states. The amount of Medicaid rebates for a drug is based on a statutory formula. Using that formula, CMS calculates a unit rebate amount for each drug and provides that amount to states so they can determine the amount of rebates to request. Every quarter, each state multiplies the number of units of each drug it either paid for on a FFS basis or provided through its managed care plans by the CMS-provided unit rebate amount. For drugs provided under FFS, the state calculates the number of units based on drug claims it reimbursed, while states use drug utilization data provided by managed care plans to determine the number of units of each drug that were provided by the plans to Medicaid beneficiaries. Each state then sends rebate requests to each manufacturer reflecting the total quarterly amount of rebates owed for each of the manufacturer's drugs. States are to exclude claims for 340B drugs from their rebate requests. <1.2. 340B Program> Participation in the 340B Program is voluntary for both covered entities and drug manufacturers, but there are strong incentives for both to do so. Covered entities can realize substantial savings through the program's price discounts. In addition, covered entities can generate revenue to the extent that they can purchase 340B drugs for eligible patients whose insurance reimbursement exceeds the price paid. Incentives for participation by drug manufacturers are strong because they must participate in the 340B Program to receive Medicaid reimbursement for their drugs. Covered entities generally become eligible for the 340B Program by qualifying as certain federal grantees or as one of six specified types of hospitals. Eligible federal grantees include federally qualified health centers, which provide comprehensive community-based primary and preventive care services to medically underserved populations, as well as certain other federal grantees, such as family planning clinics and Ryan White HIV/AIDS program grantees, among others.
Eligible hospitals include critical access hospitals (small, rural hospitals with no more than 25 inpatient beds); disproportionate share hospitals (general acute care hospitals that serve a disproportionate number of low-income patients); and four other types of hospitals. To participate in the 340B Program, covered entities must register with HRSA and annually recertify their continuing eligibility. Once their eligibility is approved by HRSA, covered entities can begin purchasing drugs from manufacturers at the 340B discounted prices. Covered entities may provide drugs, including 340B drugs, to patients through one or more dispensing methods. Specifically, covered entities may dispense these drugs through pharmacies: either through in-house pharmacies they own; through the use of contract pharmacy arrangements, in which they contract with outside pharmacies and pay them to dispense drugs on their behalf; or both. In addition, providers who work at covered entities, such as doctors and nurses, may administer 340B drugs to patients directly, such as during office visits. These are known as provider-administered drugs. As a condition of participating in the 340B Program, covered entities must follow certain requirements. For example, they are prohibited from diverting a 340B drug to an individual who is not a patient of the covered entity. Covered entities are also prohibited from subjecting manufacturers to duplicate discounts. <1.3. Preventing Duplicate Discounts and Forgone Rebates> Both states and covered entities play key roles in preventing duplicate discounts and forgone rebates. States must know whether covered entities provided 340B drugs to Medicaid beneficiaries in order to exclude those drugs from the rebate requests they submit to manufacturers. When covered entities provide 340B drugs to Medicaid beneficiaries, it is known as carving in; if covered entities do not dispense these drugs to Medicaid beneficiaries, it is known as carving out. As shown in figure 1, if a state is not aware that a covered entity provided 340B drugs to Medicaid beneficiaries, it would not know to exclude those drugs from its rebate requests, which could lead to duplicate discounts. In contrast, if a state mistakenly believes the entity used 340B drugs when it did not, it might exclude those drugs from its rebate requests and would forgo eligible rebates. To help prevent duplicate discounts, in 1993, HRSA and CMS collaborated to establish the Medicaid Exclusion File (MEF) as a mechanism to assist in the identification of 340B drugs provided to Medicaid FFS beneficiaries. The MEF lists the covered entities that reported to HRSA that they choose to use or carve in 340B drugs for their Medicaid FFS patients. Specifically, HRSA requires that covered entities that decide to carve in these drugs for Medicaid provide the agency with the provider number or numbers that the entities use to bill the state for those drugs. The entity and the provider number or numbers it specifies are then listed on the MEF. HRSA guidance specifies that all drugs billed with the provider numbers listed on the MEF should be 340B drugs so a state that chooses to use the MEF knows the drugs should be excluded from rebate requests; there is no requirement for states to use the MEF to identify 340B drugs.
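To make the rebate arithmetic and the role of the MEF concrete, the sketch below shows, in simplified form, how a state might total a quarterly FFS rebate request while excluding claims billed under MEF-listed provider numbers. It is an illustration only, not any state's or CMS's actual system; the data structures, field names, and dollar values are assumptions made for the example.

    from collections import defaultdict

    def build_ffs_rebate_request(claims, unit_rebate_amounts, mef_provider_numbers):
        """Total quarterly rebate request per drug (by national drug code),
        excluding claims treated as 340B purchases.

        claims: iterable of dicts with 'ndc', 'units', and 'provider_number'
            (hypothetical claim fields for this sketch).
        unit_rebate_amounts: dict mapping NDC -> CMS-provided unit rebate amount.
        mef_provider_numbers: provider numbers listed on the Medicaid Exclusion
            File; drugs billed under these numbers should be 340B drugs and are
            left out of the request to avoid a duplicate discount.
        """
        request = defaultdict(float)
        for claim in claims:
            if claim["provider_number"] in mef_provider_numbers:
                continue  # treated as a 340B drug; no rebate is requested
            ura = unit_rebate_amounts.get(claim["ndc"])
            if ura is None:
                continue  # no rebate agreement on file for this drug
            request[claim["ndc"]] += claim["units"] * ura
        return dict(request)

    # Illustration with made-up values: 30 units from a non-MEF provider and
    # 60 units from an MEF-listed provider ("B200").
    claims = [
        {"ndc": "00000-0000-01", "units": 30, "provider_number": "A100"},
        {"ndc": "00000-0000-01", "units": 60, "provider_number": "B200"},
    ]
    print(build_ffs_rebate_request(claims, {"00000-0000-01": 2.50}, {"B200"}))
    # {'00000-0000-01': 75.0} -- only the non-340B claim generates a rebate request

The same accumulation applies to managed care utilization data, except that, as discussed below, the MEF is not designed to identify 340B drugs provided under managed care.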
If a covered entity wants its contract pharmacy to dispense 340B drugs to patients covered under Medicaid FFS, HRSA guidance requires the covered entity, the contract pharmacy, and the state Medicaid program to have an arrangement to prevent duplicate discounts; any such arrangement must be reported to HRSA. When the MEF was created, Medicaid drug rebates were only required for drugs provided under FFS. As such, in a 2014 policy release, HRSA clarified that the MEF is only intended for use for Medicaid FFS, that is, only covered entities that elect to carve in 340B drugs for Medicaid FFS are required to provide the provider numbers used for billing Medicaid FFS for inclusion on the MEF. The MEF is not intended to capture whether covered entities have decided to carve in 340B drugs for Medicaid managed care and, if so, what provider numbers they use for billing for those drugs. HRSA has not created a mechanism for covered entities to use to identify 340B drugs provided to Medicaid managed care beneficiaries, but encourages covered entities to work with states to develop strategies to prevent duplicate discounts for drugs reimbursed through managed care. While HRSA requires covered entities to use the MEF, there is no similar requirement for state Medicaid programs. CMS provides states the flexibility to determine procedures for identifying and excluding 340B drugs from their Medicaid rebate requests. Under a May 2016 final rule, states contracts with Medicaid managed care plans that provide coverage of outpatient drugs must require the plans to provide the states with drug utilization data that is necessary for the states to claim Medicaid rebates. In addition, the contracts must require the plans to establish procedures for excluding 340B drugs from the drug utilization data provided to states for purposes of rebate collection. <1.4. Federal Oversight> To oversee covered entities compliance with 340B Program requirements, in fiscal year 2012, HRSA implemented a systematic approach to conducting audits of a small sample of covered entities, and began conducting audits of 200 entities per year in fiscal year 2015. HRSA audits include covered entities that are randomly selected based on risk-based criteria (approximately 90 percent of all audits conducted each year), or targeted based on information from stakeholders such as drug manufacturers about potential noncompliance (10 percent of the audits conducted). HRSA s criteria for risk-based audits include a covered entity s volume of 340B drug purchases, number of contract pharmacies, time in the program, and complexity of its program. Among other things, HRSA s audits include reviews of each covered entity s policies and procedures, an assessment of the entity s compliance with respect to 340B Program requirements, including the prevention of duplicate discounts in Medicaid FFS, and reviews of a sample of prescriptions filled during a 6-month period to identify any instances of noncompliance. Under HRSA s audit procedures, a covered entity with audit findings is required to 1) submit a corrective action plan to HRSA that indicates it will determine the full scope of any noncompliance (beyond the sample of prescriptions reviewed during an audit) and 2) outline the steps it plans to take to correct findings of noncompliance, including any necessary repayments to manufacturers, among other things. 
If the HRSA audit shows that duplicate discounts may have occurred, the covered entity must, as part of its corrective action plan, contact the state Medicaid program to determine whether duplicate discounts actually occurred (namely, whether the state requested a rebate on the claims in question) and, if so, contact the drug manufacturer to offer repayment. HRSA closes the audit when a covered entity submits a letter attesting that its corrective action plan, including its assessment of the full scope of noncompliance, has been implemented and any necessary repayments to manufacturers have been resolved. In addition, HRSA may re-audit a covered entity (i.e., subject it to a targeted audit) to determine whether it has implemented its corrective action plan. To oversee the Medicaid Drug Rebate Program, CMS receives copies of states' Medicaid rebate requests each quarter. States are required to submit this data to manufacturers for FFS and managed care drugs, which should not include drugs purchased through the 340B Program, within 60 days of the end of the quarterly rebate period. Specifically, states provide drug utilization data that includes the drug name, national drug code (a unique identifier for each drug), the unit rebate amount, the number of units reimbursed, the rebate amount claimed, and the number of prescriptions, among other things. CMS has a system that reviews this information for errors, such as the inclusion of drugs from manufacturers that no longer participate in the Medicaid Drug Rebate Program, and generates a discrepancy report for the state. CMS also has a system in place to identify, for state review, cases in which the utilization data reflect a substantial increase or decrease in the number of FFS records submitted compared to prior quarters; such a review is not currently performed for managed care. In addition, CMS reviews state Medicaid programs' contracts with managed care plans using a checklist to ensure that the contracts include elements required by statute or regulation. <2. State Medicaid Programs' Policies on the Use and Identification of 340B Drugs Vary, Are Not Always Documented, and May Not Prevent Duplicate Discounts> <State Medicaid Programs' Policies for Use and Identification of 340B Drugs Vary> State Medicaid programs' policies varied in whether they allowed covered entities to use 340B Program drugs for Medicaid beneficiaries. Most states allowed covered entities to decide whether to use, or carve in, 340B drugs for Medicaid beneficiaries at their in-house pharmacies and for provider-administered drugs. Fewer states allowed covered entities to dispense these drugs to Medicaid beneficiaries at contract pharmacies, particularly beneficiaries whose drugs were covered under FFS. Table 1 below summarizes states' policies on covered entities' use of 340B drugs for Medicaid beneficiaries for both FFS and managed care by dispensing method. In addition to varying by state, policies on the use of 340B drugs sometimes varied within a state; that is, some states had different policies depending on whether the drugs were provided to Medicaid FFS or managed care beneficiaries, the dispensing method used, or both. For example, Oregon allowed covered entities to decide whether to dispense 340B drugs at contract pharmacies to Medicaid managed care beneficiaries, but required covered entities to carve out (not use) these drugs at contract pharmacies under Medicaid FFS.
Illinois required covered entities to carve in 340B provider-administered drugs and those dispensed at in-house pharmacies for Medicaid beneficiaries in both FFS and managed care, but prohibited their use for Medicaid beneficiaries at contract pharmacies. See appendix II for information on each state Medicaid program's policies regarding covered entities' use of 340B drugs. The states that allowed or required covered entities to carve in 340B drugs for Medicaid beneficiaries used several different procedures to identify and exclude those drugs from Medicaid rebate requests. These procedures included relying on the MEF, requiring covered entities to use a 340B claim identifier (a code on the claim that indicates that the drug used was purchased at the 340B discounted price), or using other state-developed procedures to identify and exclude 340B drugs from rebate requests. The procedures states used varied between Medicaid FFS and managed care, and among dispensing methods. For example, states were more likely to use HRSA's MEF to identify and exclude provider-administered drugs in both Medicaid FFS and Medicaid managed care and to use a 340B claim identifier to identify and exclude drugs dispensed at in-house pharmacies. Some states used a combination of procedures or created their own state-specific procedures. For example, 11 states required that covered entities inform them of their decisions to carve in 340B drugs for Medicaid beneficiaries. The states then maintained a list of these covered entities or their providers, which they used to exclude 340B drugs from rebate requests. Oregon required covered entities to provide the state with a list of each 340B drug dispensed to a Medicaid managed care beneficiary at a contract pharmacy so that the state could exclude those drugs from its rebate requests. Vermont required covered entities, on a monthly basis, to send the state a file listing each 340B drug provided to a Medicaid beneficiary; the state used this information to exclude those drugs from rebate requests. See table 2 for a summary of the procedures used by states to identify 340B drugs provided to Medicaid beneficiaries, and appendix III for a listing of the procedures by state. <2.1. State Medicaid Programs' Policies on the Use and Identification of 340B Drugs Are Not Always Documented and May Not Prevent Duplicate Discounts> State Medicaid programs' policies related to 340B drugs were not always documented and some states' policies may not prevent duplicate discounts. Some states had written policies for the use of 340B drugs, and procedures to identify them, for some dispensing methods, but not for others, such as states that had documented policies for in-house pharmacies but not contract pharmacies. Without written policies, covered entities in those states may not be aware of requirements for dispensing and identifying 340B drugs, increasing the risk of duplicate discounts. Specifically, we found that nine states did not have written policies or procedures on the use or identification of 340B drugs for all dispensing methods. Seven of the nine states had policies or procedures regarding the use and identification of 340B drugs that were used in practice, but these policies and procedures were not always documented.
For example: Connecticut did not have documented policies on the use and identification of 340B drugs, but officials from the state reported that it allowed covered entities to provide these drugs to Medicaid beneficiaries and relied on the MEF to identify and exclude them from rebate requests. While Pennsylvania and Ohio had written policies regarding the use of 340B drugs in Medicaid FFS and for some dispensing methods under managed care, the states' policies requiring covered entities to carve out these drugs for Medicaid managed care beneficiaries at contract pharmacies were not documented. The remaining two states did not have policies or procedures, documented or otherwise, for all dispensing methods: Officials from Washington, D.C. reported that D.C. did not have a policy regarding the use of provider-administered 340B drugs nor did it have procedures to identify and exclude those drugs from its Medicaid drug rebate requests. A Rhode Island Medicaid official told us that the state did not have written policies regarding the identification of 340B drugs dispensed to Medicaid FFS beneficiaries at in-house pharmacies, and that the state did not have procedures, written or otherwise, by which to exclude such drugs from rebate requests. Additionally, while the state had a written policy for identifying and excluding 340B drugs administered by providers at hospitals, officials told us that they had no policy or exclusion procedures for drugs administered by providers at other types of covered entities. In addition, we found that states' policies may not prevent duplicate discounts. For example, some states used the MEF to identify and exclude 340B drugs from their rebate requests in a manner contrary to the MEF's purpose as set forth by HRSA. As noted previously, HRSA guidance specifies that the MEF is not intended to be used to identify and exclude 340B drugs provided to Medicaid managed care beneficiaries from Medicaid drug rebate requests. Covered entities are only required to be listed on the MEF if they carve in 340B drugs for Medicaid FFS. Since the MEF may not accurately reflect covered entities' use of 340B drugs for Medicaid managed care, states' use of the MEF in this instance may increase the risk of duplicate discounts or forgone rebates unless states require covered entities to make the same decisions on the use of 340B drugs for FFS and managed care. For example, as shown in figure 2, a state's use of the MEF for managed care would likely result in a duplicate discount if covered entities carve out 340B drugs for Medicaid FFS, but carve in these drugs for managed care, as those entities would not be listed on the MEF. Consequently, the state would not know to exclude drugs provided by those entities from the managed care plans' utilization data that are used for requesting rebates. If covered entities did the opposite (carved in for FFS and carved out for Medicaid managed care), then the state would likely forgo Medicaid rebates as it would exclude drugs from its rebate request that were not purchased through the 340B Program. Seven of the 13 states that used the MEF exclusively to identify and exclude Medicaid managed care drugs from rebate requests for at least one dispensing method did not require covered entities to make the same carve-in decisions for both FFS and managed care.
Additionally, while the six remaining states required covered entities to make the same decision regarding use of 340B drugs in FFS and managed care, that requirement was not always clearly explained in the states' policies. For example, an official from Arkansas, which used the MEF for identifying and excluding 340B drugs from rebate requests, told us that covered entities are required to make the same carve-in decisions for both Medicaid FFS and managed care. However, it is unclear how covered entities would be aware of that requirement, as it was not documented in the state's policy manuals at the time of our information request. Finally, states that rely on the MEF or state-developed lists of providers carving in 340B drugs for Medicaid beneficiaries may not be able to identify instances where covered entities are unable to purchase drugs at the 340B Program discounted price, and instead need to purchase drugs outside of the 340B Program. For example, orphan drugs are excluded from the discounted 340B Program price for some covered entities. In these situations, states that rely on the MEF or other state-developed lists of providers may be forgoing rebates. For example, if covered entities do not have a separate provider number for billing Medicaid for these non-340B drugs, the states would be excluding both 340B and non-340B drugs from their rebate requests. State Medicaid officials in Oregon and Pennsylvania acknowledged that their states were likely forgoing rebates when covered entities listed on the MEF were unable to purchase drugs at the 340B Program price. While these state officials indicated that they did not consider the lost rebates financially significant, the loss of these rebates would also increase federal Medicaid expenditures, since rebates are shared between the state and the federal government. <2.2. Limitations in HHS Oversight Increase the Risk of Duplicate Discounts> <CMS Oversight of State Medicaid Programs' Efforts to Prevent Duplicate Discounts Is Limited> CMS oversight of state Medicaid programs' efforts to prevent duplicate discounts is limited. States have the flexibility to select the procedures used for identifying and excluding 340B drugs from rebate requests. Although CMS collaborated with HRSA to establish the MEF as a tool for identifying 340B drugs in Medicaid FFS, CMS does not require states to use the MEF in their duplicate discount prevention efforts. Instead, CMS has provided states with options of procedures they could consider for identifying and excluding 340B drugs from rebate requests. For example, CMS's February 2016 final rule on covered outpatient drugs, which detailed requirements for Medicaid reimbursement of covered outpatient drugs, included in its preamble examples of procedures that states could use to identify and exclude 340B drugs in FFS without prescribing any specific required procedure. Additionally, as noted earlier, the final rule CMS issued in May 2016 on Medicaid managed care included a provision relating to duplicate discounts for Medicaid managed care drugs. Specifically, it mandated that state Medicaid programs' contracts with managed care plans that provide outpatient drugs require the plans to establish procedures for excluding 340B drugs from utilization data provided to states for use in seeking rebates, but did not specify what procedures plans should use. Most recently, in January 2020, CMS released a bulletin to state Medicaid programs on best practices for preventing duplicate discounts.
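The mismatch risk described above (figure 2) when a state applies the MEF to managed care claims can be reduced to a simple decision rule. The function below is a hypothetical sketch written for illustration; the inputs are a covered entity's carve-in choices, supplied by the caller as assumptions rather than drawn from any actual system.

    def mef_based_managed_care_outcome(carves_in_ffs, carves_in_managed_care):
        """Outcome when a state relies on the MEF alone to exclude managed care claims.

        An entity appears on the MEF only if it carves in 340B drugs for Medicaid
        FFS, so using the MEF for managed care misfires whenever the entity's FFS
        and managed care decisions differ.
        """
        on_mef = carves_in_ffs       # MEF listing reflects the FFS choice only
        mc_claims_excluded = on_mef  # state excludes MC claims from MEF-listed providers

        if carves_in_managed_care and not mc_claims_excluded:
            return "duplicate discount risk: 340B managed care claims stay in the rebate request"
        if not carves_in_managed_care and mc_claims_excluded:
            return "forgone rebates: non-340B managed care claims are excluded from the request"
        return "no mismatch for this entity"

    for ffs, mc in [(False, True), (True, False), (True, True), (False, False)]:
        print(f"carve in FFS={ffs}, managed care={mc}: {mef_based_managed_care_outcome(ffs, mc)}")

Requiring identical carve-in decisions for FFS and managed care, as the six states noted above did, removes the two mismatch branches; it does not address drugs an entity must buy outside the 340B Program, such as orphan drugs.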
CMS has some visibility into state Medicaid programs 340B-related policies and procedures through its oversight activities, but these activities are not intended to, and do not enable CMS to, assess compliance with the duplicate discount prohibition. For example, CMS has a system in place that reviews copies of states quarterly Medicaid drug rebate requests; however, CMS officials told us that these requests do not contain detailed, claim-level information that could be used to determine if specific drugs purchased through the 340B Program were incorrectly included. Additionally, CMS reviews states contracts with Medicaid managed care plans to ensure that they include language requiring the plans to have procedures to exclude 340B drugs from Medicaid rebate data provided to states, but CMS officials told us that the contract language does not have to specify or describe those mechanisms, limiting the information available regarding duplicate discount prevention efforts. CMS also required states to submit their plans for reimbursing covered entities for 340B drugs provided under Medicaid FFS to ensure that the states payment methodologies complied with federal requirements, but these reviews were not focused on ensuring that such drugs were excluded from rebate requests. CMS officials told us that they do not track which procedures states use to prevent duplicate discounts; review states policies or procedures for identifying and excluding 340B drugs from rebate requests for deficiencies or to ensure effectiveness; or audit states compliance with the prohibition on duplicate discounts. This is problematic because, as noted previously, we found that not all state Medicaid programs have written policies and procedures that specify the extent to which covered entities can use 340B drugs for Medicaid beneficiaries, or how they are to identify these drugs so the state can exclude them from Medicaid rebate requests. If states do not have written policies, covered entities may not be aware of whether, or under what circumstances, they are permitted to provide 340B drugs to Medicaid beneficiaries or how to properly inform the state of their use, which could result in errors that lead to duplicate discounts and forgone rebates. We found some evidence of confusion from covered entities about state policies. For example, officials from Apexus, which manages HRSA s 340B Prime Vendor Program, told us that Apexus s call center, which fields questions from covered entities and other stakeholders about the 340B Program, most frequently receives questions related to clarifying states duplicate discount-related policies. These inquiries about state requirements indicate that there is currently confusion among covered entities. CMS s limited oversight of state Medicaid programs efforts to prevent duplicate discounts is also problematic because we found that states policies and procedures were not always effective at preventing duplicate discounts, or in line with federal guidance. For example, the MEF is only intended to be used for Medicaid FFS. CMS officials told us that, while the agency was not aware of any states using the MEF for Medicaid managed care, such use would be concerning because it is not an accurate tool for that purpose. 
However, as previously shown in table 2, we found that eight states relied on the MEF to identify and exclude Medicaid managed care drugs dispensed at in-house pharmacies from rebate requests and 13 states used the MEF to identify and exclude managed care drugs administered by providers. The lack of CMS oversight of state Medicaid programs policies and procedures related to duplicate discount prevention is inconsistent with federal standards for internal control for information and communication, which state that management should obtain relevant data from reliable internal and external sources in a timely manner based on the identified information requirements so that data can be used for effective monitoring. Without reviewing states policies and procedures, CMS does not have the information needed to effectively oversee states compliance with the Medicaid drug rebate statute, which exempts 340B drugs from Medicaid rebate requirements, and ensure that states have effective policies and procedures for preventing duplicate discounts. The lack of oversight of states policies and procedures also results in CMS not having reasonable assurance that states are seeking rebates for all eligible drugs, and since Medicaid rebates are shared by the states and the federal government, forgoing rebates increases Medicaid costs for both states and the federal government. <2.3. Oversight Weaknesses Impede HRSA s Ability to Ensure That Duplicate Discounts Are Prevented or Remedied> We identified several areas of weaknesses in HRSA s oversight processes that impede its ability to ensure that duplicate discounts are prevented or remedied: Covered entities compliance with state policies and procedures is not assessed. HRSA s auditors are instructed to look for the potential for duplicate discounts in Medicaid FFS by assessing whether the covered entity s information on the MEF is correct; whether the entity is following its policies and procedures to prevent duplicate discounts; and whether a sample of claims reveals any noncompliance. Auditors are also instructed to use information provided by the covered entity to determine if the covered entity is following state policies. However, HRSA officials told us that its auditors are not expected to independently identify or verify state Medicaid programs policies to determine whether the covered entity is actually following what the state requires. Instead, HRSA officials stated that it is a best practice for covered entities to include a description of state Medicaid programs policies related to the 340B Program, such as how relevant drugs are to be identified, in their policy and procedure manuals. In addition, HRSA told us that its auditors interview covered entity staff about the controls in place to prevent duplicate discounts, and may discuss state requirements during these interviews. The auditor is then required to use this information to determine whether the covered entity is following state policy. For example, if the covered entity says that the state requires a 340B claim identifier, the auditor is to look to see if the covered entity used that identifier in the sample of claims that are reviewed. However, the auditor is not expected to determine if the state actually requires a claim identifier, or allows covered entities to use 340B drugs. 
The fact that HRSA does not assess whether covered entities are actually following state policies and procedures regarding the use and identification of 340B drugs for Medicaid beneficiaries is inconsistent with federal standards for internal control related to information and communication. Those standards state that management should obtain relevant data from reliable internal and external sources in a timely manner based on the identified information requirements and evaluate both internal and external sources of data for reliability so that it can be used for effective monitoring. This lack of HRSA oversight is especially concerning because we found that the covered entities we interviewed did not always have a correct understanding of their states policies. For example, officials from two of the four Pennsylvania covered entities we spoke with told us they were dispensing 340B drugs to Medicaid managed care beneficiaries at contract pharmacies, despite state officials telling us the state does not allow that practice. As a result of this confusion, duplicate discounts may have occurred as the state was not excluding drugs dispensed by contract pharmacies from its Medicaid rebate requests. Additionally, of the 13 covered entity policy and procedure manuals we reviewed, only four had descriptions of their states policies and two of those descriptions were incorrect. If HRSA were to audit the majority of those 13 covered entities, its auditors would likely be unable to appropriately assess the entities compliance with state requirements. Without fully assessing compliance with state policy, HRSA s audits do not provide the agency with reasonable assurance that covered entities are taking the necessary steps to prevent duplicate discounts. As a result, drug manufacturers are at risk of being required to erroneously provide duplicate discounts for Medicaid drugs. Not all identified duplicate discounts are repaid. HRSA officials told us that covered entities obligations for preventing duplicate discounts are the same for Medicaid FFS and managed care. However, as we reported in 2018, HRSA audits do not assess for the potential for duplicate discounts in Medicaid managed care despite the fact that the potential for duplicate discounts related to Medicaid managed care has existed since 2010, when manufacturers were required to begin paying Medicaid rebates under managed care in addition to FFS. As we noted in 2018, HRSA indicated that it does not audit for duplicate discounts in managed care because the agency has not issued guidance on how covered entities should prevent this. As a result, we recommended that HRSA issue guidance to covered entities on the prevention of duplicate discounts under Medicaid managed care and incorporate into its audit process an assessment of covered entities compliance with the prohibition on duplicate discounts as it relates to Medicaid managed care claims. HHS concurred with these recommendations and, as of October 2019, HRSA reported that it was working to determine next steps related to these recommendations. However, HRSA has noted that the agency lacks explicit general regulatory authority to issue regulations on most aspects of the 340B Program, and also told us, in October 2019, that guidance does not provide the agency with appropriate enforcement capability. 
As a result, HRSA requested authority in the President s budget request for fiscal year 2020 to issue regulations on all aspects of the 340B Program, as the agency believes that binding and enforceable regulations would provide it with the ability to more clearly define and enforce policy. In addition, the agency is not pursuing additional guidance under the 340B Program at this time. We note, however, that the law prohibits the payment of duplicate discounts and requires HRSA to issue guidance to covered entities describing methodologies and options for avoiding duplicate discounts. In the absence of federal guidance, HRSA instructs covered entities to work with their states on duplicate discount prevention. HRSA requires covered entities to work with affected drug manufacturers regarding the repayment of duplicate discounts in FFS that are identified through HRSA or manufacturer audits. However, HRSA officials told us that the agency does not require covered entities to take the same actions to address duplicate discounts for managed care claims that HRSA learns about through its audits or other means. For example, HRSA officials told us that they did not follow up on a letter from a state that confirmed a duplicate discount occurred on a Medicaid managed care claim, because the agency did not yet have guidance for covered entities related to Medicaid managed care claims. Additionally, HRSA officials told us they would not require a covered entity to develop a corrective action plan or make offers of repayment to a manufacturer if a drug manufacturer s audit of that covered entity identified a duplicate discount in managed care. Although HRSA officials told us that they expect covered entities to work in good faith with all parties involved to resolve potential duplicate discounts in managed care, HRSA does not require these actions if a duplicate discount is identified in managed care, as it does in FFS. This is particularly problematic as the majority of Medicaid enrollees, prescriptions, and spending for drugs are in managed care, and the drug manufacturers we contacted believe that duplicate discounts are more prevalent in Medicaid managed care than FFS. HRSA expecting but not requiring covered entities to address identified duplicate discounts related to Medicaid managed care is contrary to federal law, which provides that covered entities are liable to drug manufacturers for duplicate discounts that are identified through HRSA or manufacturer audits. It is also inconsistent with federal internal control standards related to monitoring, which state that management should oversee the prompt remediation of deficiencies and the audit resolution process, which begins when the results of an audit or other review are reported to management, and is completed only after action has been taken that corrects identified deficiencies. Without HRSA requiring covered entities to address identified duplicate discounts in Medicaid managed care as they would duplicate discounts in FFS, drug manufacturers may erroneously provide both 340B discounts and Medicaid rebates on the same drug claim. <3. Conclusions> The prevention of duplicate discounts in the 340B and Medicaid Drug Rebate Programs requires extensive coordination between state Medicaid programs and covered entities, and among agencies within HHS. 
Similar levels of coordination are required to ensure that states are not forgoing rebates on drugs not purchased at the 340B price, which would result in increased costs for both state and federal governments. Limitations in federal oversight impede CMS s and HRSA s ability to ensure compliance with the prohibition on duplicate discounts. CMS does not assess whether states have 340B policies and procedures and, if so, whether they are documented, effective, and accessible to stakeholders. As a result, it is unable to proactively identify and correct problematic policies and procedures, and prevent duplicate discounts and forgone rebates. Additionally, without knowing state Medicaid programs 340B policies, HRSA is unable to perform a comprehensive review of whether covered entities are taking the necessary actions to prevent duplicate discounts. In addition, HRSA s audits are not assessing compliance with the prohibition against duplicate discounts in managed care because the agency has yet to put forth guidance on this issue. While HRSA is not currently pursuing 340B-related guidance, the agency continues to work on determining next steps to respond to our 2018 recommendations on the issue. In the meantime, however, HRSA still must ensure that covered entities are complying with 340B Program requirements, including the prohibition on duplicate discounts in managed care. Failure to do so not only puts drug manufacturers at risk of providing duplicate discounts, but also compromises the integrity of the 340B Program. <4. Recommendations for Executive Action> We are making a total of three recommendations, including one to CMS and two to HRSA. Specifically: The Administrator of CMS should ensure that state Medicaid programs have written policies and procedures that specify the extent to which covered entities can use 340B drugs for Medicaid beneficiaries, are designed to effectively identify if 340B drugs were used, and if so, how they should be excluded from Medicaid rebate requests. The policies and procedures should be made publically available and cover FFS, managed care, and all of the dispensing methods for outpatient drugs. (Recommendation 1) The Administrator of HRSA should incorporate assessments of covered entities compliance with state Medicaid programs policies and procedures regarding the use and identification of 340B drugs into its audit process, working with CMS as needed to obtain states policies and procedures. (Recommendation 2) The Administrator of HRSA should require covered entities to work with affected drug manufacturers regarding repayment of identified duplicate discounts in Medicaid managed care. (Recommendation 3) <5. Agency Comments and Our Evaluation> HHS provided written comments, which are reproduced in app. IV, and technical comments, which we have incorporated as appropriate. In its written comments, HHS concurred with one of our three recommendations and did not concur with the remaining two recommendations. HHS concurred with our recommendation that CMS ensure that state Medicaid programs have written policies and procedures for identifying 340B drugs and excluding them from Medicaid rebate requests and stated that it will work with states to strengthen policies and procedures related to 340B drugs for Medicaid beneficiaries. HHS did not concur with our recommendation that HRSA incorporate assessments of covered entities compliance with state Medicaid programs policies and procedures into its audit process. 
HHS stated that HRSA does not have authority to determine whether state Medicaid policies and procedures are accurate and appropriate. We agree that HRSA is not the appropriate party for reviewing and assessing state Medicaid programs policies and procedures, which is why we recommended that CMS, not HRSA, strengthen its oversight of states 340B-related policies and procedures, a recommendation with which HHS concurred. We recommended that HRSA update its 340B Program audits to include assessments of whether covered entities are following state Medicaid programs policies and procedures regarding the use and identification of 340B drugs. HHS stated that HRSA does not have authority to enforce covered entities compliance with state Medicaid programs policies and procedures and that doing so would be beyond the scope of the 340B Program and would require additional training for HRSA auditors, who currently do not have this level of expertise. While we understand that HRSA does not have authority to enforce compliance with state Medicaid programs policies and procedures, covered entities compliance with state Medicaid programs policies and procedures is fundamental to preventing duplicate discounts and assessing compliance with state policies and procedures is essential to ensuring covered entities compliance with the 340B Program s prohibition on duplicate discounts. Further, HRSA already audits for compliance with certain aspects of states 340B-related Medicaid policies for preventing duplicate discounts. Specifically, HHS states that covered entities are expected to include a description of state policy in their policy and procedure manuals. If such descriptions exist, HRSA auditors are required to review those descriptions and determine if covered entities are following them. Thus, HRSA auditors already interpret state Medicaid policies and procedures when performing audits and the agency already enforces compliance with state policies by issuing audit findings when covered entities are not following them. However, as noted in our report, HRSA does not require its auditors to review state Medicaid programs actual policies and procedures. Instead, the auditors currently rely on covered entities descriptions of those policies and procedures, which we found were not always accurate. Additionally, knowledge of state policies would allow HRSA to incorporate an assessment of compliance into all audits as opposed to only those of covered entities that have such descriptions in their manuals. Finally, without considering states actual policies and procedures and ensuring that covered entities are following them, HRSA s audits cannot effectively identify the potential for duplicate discounts. For example, simply checking covered entities actions against information on the MEF does not provide useful information if the covered entities are in one of the many states that do not use the MEF and instead direct entities to identify 340B drugs dispensed to Medicaid beneficiaries via a different mechanism, such as 340B identifiers. HHS states that implementing this recommendation would be burdensome and difficult to operationalize because HRSA would need to be notified of any changes to states policies and procedures. We understand that the lack of knowledge of state Medicaid programs policies related to duplicate discount prevention at the federal level complicates the ability of HRSA and its auditors to determine what state- level requirements exist and to apply them to audits. 
This is, in part, why we recommended that CMS ensure that state Medicaid programs' policies are publicly available (a recommendation that, as noted above, HHS concurred with) and that HRSA work with CMS to obtain these policies as needed. Though we understand that this creates an additional step in HRSA's audit process, we continue to believe that including an assessment of covered entities' compliance with state Medicaid programs' policies and procedures related to 340B drugs is necessary to identify potential duplicate discounts and to ensure covered entities' compliance with 340B Program requirements. HHS also did not concur with our recommendation that HRSA should require covered entities to work with affected drug manufacturers regarding repayment of identified duplicate discounts in Medicaid managed care. In its response, HHS noted that because HRSA does not have guidance related to preventing duplicate discounts in Medicaid managed care, it is difficult to assess compliance in this area. However, our recommendation is not asking HRSA to assess compliance related to duplicate discounts in Medicaid managed care; instead, we are recommending that, when actual duplicate discounts have been identified, HRSA require covered entities to remedy those duplicate discounts. As noted in the report, actual duplicate discounts may be identified and confirmed by state Medicaid agencies through audits or other means. Given that HRSA officials told us that covered entities' obligations for preventing duplicate discounts are the same for Medicaid FFS and managed care, the steps for addressing identified noncompliance should be similar, and thus, the agency should require, and not just encourage, covered entities to work with manufacturers to remedy any duplicate discounts related to managed care as they do for those related to FFS. Additionally, the potential for duplicate discounts related to Medicaid managed care has existed since 2010, when manufacturers were required to begin paying Medicaid rebates under managed care in addition to FFS. Ten years later, HRSA still has not issued guidance on how covered entities should prevent duplicate discounts in Medicaid managed care and has indicated that it is not pursuing new guidance at this time. This inaction continues to leave the 340B Program vulnerable to noncompliance with federal law. HHS concurred with our 2018 recommendations that HRSA issue guidance to covered entities on the prevention of duplicate discounts under Medicaid managed care and incorporate into its audit process an assessment of covered entities' compliance with the prohibition on duplicate discounts as it relates to Medicaid managed care claims. Until these recommendations are implemented, HRSA must, at a minimum, ensure that covered entities work with manufacturers regarding any identified duplicate discounts in managed care to help ensure compliance with 340B Program requirements. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of HRSA, the Administrator of CMS, and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at DraperD@gao.gov.
Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix V. Appendix I: Drug Manufacturers Efforts to Prevent and Detect Duplicate Discounts Officials from all three drug manufacturers and the organizations that work on their behalf that we contacted reported challenges preventing and detecting duplicate discounts due to a lack of information. For example, officials from drug manufacturers told us that state Medicaid programs do not always provide data on the individual claims for which they were requesting rebates. Specifically, to obtain rebates, states submit requests to participating manufacturers for all drug purchases made that quarter; these requests contain the total quarterly amount owed for each of the manufacturers drugs, but not information detailing each claim for which rebates are being sought. Although the Centers for Medicare & Medicaid Services (CMS) encourages states to respond to reasonable manufacturer requests for claim-level data, the provision of such data is not required. Without this claim-level data, manufacturers reported that it is difficult to determine if rebate requests include claims for drugs purchased at the 340B discounted price. Additionally, manufacturers lack complete information on the extent to which covered entities use 340B drugs for Medicaid beneficiaries. This is because the Medicaid Exclusion File (MEF), a list maintained by the Health Resources and Services Administration (HRSA) to assist in the prevention of duplicate discounts, is only required to reflect the provider numbers used by covered entities that choose to use (carve in) 340B drugs provided directly by the covered entity to Medicaid fee-for-service (FFS) beneficiaries. The MEF does not include information on whether covered entities are using 340B drugs for Medicaid managed care beneficiaries and may not include information on contract pharmacies that are dispensing these drugs to Medicaid beneficiaries on covered entities behalf. Despite these limitations, the drug manufacturers we contacted reported that when claim-level data is available they review that data to detect potential duplicate discounts before they issue rebate payments. For example, officials from one drug manufacturer told us that they compare the provider numbers on the claim-level data obtained from states with the information on the MEF and dispute rebate requests for any claims from a provider number listed on the MEF. However, officials from some drug manufacturers told us that this approach is ineffective for preventing duplicate discounts for drugs dispensed at contract pharmacies because, as noted above, the MEF may not include information on contract pharmacies, and the claim-level data may only list the provider number for the dispensing pharmacy, not the prescribing covered entity. The drug manufacturers we contacted also reported trying to identify duplicate discounts after rebates have been paid by looking at 340B purchasing patterns. For example, officials from one drug manufacturer told us they look at covered entities purchases and assess whether the proportion of 340B purchases is consistent with their carve-in status. Specifically, these officials explained that if a covered entity is not listed on the MEF, then the entity should not be using 340B drugs for Medicaid FFS patients. 
Therefore, if all or nearly all of the purchases made by that covered entity were at the discounted price, it could indicate the presence of duplicate discounts. While the MEF is only intended to indicate covered entities that are using 340B drugs for Medicaid FFS beneficiaries, officials reported that drug manufacturers also rely on the MEF as a proxy for covered entities' carve-in practices for Medicaid managed care since there is no equivalent data source. If there are concerns that duplicate discounts occurred, officials from the drug manufacturers we contacted indicated that they may conduct what is referred to as a good faith inquiry, in which the manufacturer, or a consultant working on the manufacturer's behalf, requests data from covered entities on a specific set of drug claims for which they have paid rebates to determine if those claims involved 340B drugs. If drug manufacturers confirm that a duplicate discount did occur, officials reported that they may work to negotiate a repayment from the state or covered entity, depending on which party was responsible for the error. Additionally, one official who works on behalf of manufacturers told us that manufacturers also will work with covered entities to remedy the cause of the duplicate discount to prevent future occurrences. Drug manufacturers told us that it is not always clear whether states or covered entities are responsible for duplicate discounts, and thus, which party should be contacted regarding repayment. Additionally, drug manufacturers reported that some states refer them directly to covered entities to resolve all inquiries. Medicaid program officials in Michigan and Texas, for example, said that their states refer manufacturers to the covered entities because they believe that the covered entities would most likely be responsible for any duplicate discounts that occurred due to a failure to correctly apply the required claim identifiers. If drug manufacturers need assistance resolving their concerns or obtaining repayment for duplicate discounts, they can access options made available by HRSA and CMS. Specifically, drug manufacturers can request approval from HRSA to audit a covered entity to investigate suspicions of duplicate discounts in both Medicaid FFS and managed care. To receive approval from HRSA to conduct an audit, a drug manufacturer must document reasonable cause and provide an audit plan. In addition, HRSA requires the drug manufacturer to use an independent auditor who follows government auditing standards. According to HRSA, from October 2011 through August 2019, 45 audits were requested by drug manufacturers and 26 requests were approved. Of the 26 audits approved by HRSA, the agency received 13 final audit reports, six of which had duplicate discount-related findings. However, while audits can be a tool for identifying duplicate discounts and obtaining repayment, some drug manufacturers we spoke with indicated that the cost of audits may outweigh the benefits received in the form of repayments. Additionally, as noted previously, HRSA does not require covered entities to repay manufacturers for duplicate discounts that occur in managed care. Drug manufacturers also may use the state hearing process or pursue a dispute resolution in conjunction with states through CMS if their issues with state Medicaid programs cannot be resolved through inquiries.
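The two screening approaches manufacturers described (comparing claim-level data against the MEF before paying rebates, and reviewing 340B purchasing patterns afterward) can be sketched as follows. The code is a hypothetical illustration; the field names and the 90 percent threshold are assumptions made for the example, not figures reported by the manufacturers.

    def claims_to_dispute(claim_lines, mef_provider_numbers):
        """Pre-payment screen: flag rebate claim lines billed under MEF-listed
        provider numbers, since those lines should reflect 340B purchases and a
        rebate requested on them may be a duplicate discount."""
        return [line for line in claim_lines
                if line.get("provider_number") in mef_provider_numbers]

    def purchasing_pattern_flag(total_units, units_at_340b_price, on_mef, threshold=0.90):
        """Post-payment screen: an entity that is not on the MEF (i.e., reports
        carving out for FFS) but buys nearly everything at the 340B price may be
        generating duplicate discounts."""
        if total_units == 0:
            return False
        return (not on_mef) and (units_at_340b_price / total_units) >= threshold

    # Illustration with made-up numbers.
    lines = [{"provider_number": "A100"}, {"provider_number": "B200"}]
    print(claims_to_dispute(lines, {"B200"}))                 # [{'provider_number': 'B200'}]
    print(purchasing_pattern_flag(1000, 980, on_mef=False))   # True

As the discussion above notes, these screens are weakest for contract pharmacy claims, where the claim may carry only the dispensing pharmacy's identifier, and for managed care, where the MEF serves only as a proxy.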
According to CMS officials, through the dispute resolution process, the agency provides drug manufacturers and states with guidance to assist in determining responsibilities and identifying next steps to work through conflicts. CMS officials said that, in general, they have received five to 10 Medicaid drug rebate disputes per year, about half of which are related to 340B duplicate discount issues. Appendix II: State Medicaid Programs' Policies on Covered Entities' Use of 340B Drugs, by Dispensing Method [State-by-state table not reproduced; the notes to the table follow.] The term 340B drugs refers to drugs purchased by covered entities at a discounted price through the 340B Program. Carve out means that the state did not allow covered entities to provide 340B drugs to Medicaid beneficiaries. Carve in means that the state required covered entities to provide 340B drugs to eligible Medicaid beneficiaries. California allows covered entities to dispense 340B drugs at contract pharmacies if there is an approved arrangement between the state, the covered entity, and the contract pharmacy. At the time of our information request, California officials indicated that they only had approved arrangements for certain hemophilia centers and had no approved arrangements with other types of covered entities. New Hampshire allows covered entities to provide 340B drugs to Medicaid beneficiaries, but generally does not allow them to bill Medicaid for these drugs. The one exception is that the state does allow covered entities that are approved family planning clinics to bill Medicaid for 340B drugs administered by providers to Medicaid beneficiaries. In some states, managed care plans do not cover outpatient drugs dispensed at pharmacies; they only cover provider-administered drugs. Appendix III: State Medicaid Programs' Procedures for Identifying 340B Drugs, by Dispensing Method [State-by-state table not reproduced; the notes to the table follow.] Where a state does not allow covered entities to use 340B drugs for Medicaid fee-for-service or managed care beneficiaries for a given dispensing method, it does not need a procedure to identify these drugs. Massachusetts requires contract pharmacies to include the covered entities' National Provider Identifier on claims using 340B drugs, which the state then uses to exclude those claims from its rebate request. New Hampshire allows covered entities to provide 340B drugs through some dispensing methods, but does not allow them to bill Medicaid for these drugs. Rhode Island uses a 340B claim identifier to identify and exclude associated drugs administered by providers at hospitals, but does not have any procedures to identify these drugs administered by providers at other types of covered entities.
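The claim-level identification procedures summarized in appendix III can also be illustrated with a short sketch. The rule set, field names, and code values below are hypothetical; in particular, the claim identifier value shown is only an assumed placeholder for whatever code a state specifies.

    def is_340b_claim(claim, state_rules):
        """Apply a state's identification procedure to a single drug claim.

        state_rules maps each dispensing method to the procedure the state uses
        ('claim_identifier', 'billing_npi_list', or None) and carries the list of
        carve-in NPIs where that procedure applies. All values are illustrative.
        """
        procedure = state_rules["procedures"].get(claim["dispensing_method"])
        if procedure == "claim_identifier":
            # a claim-level code flagging that the drug was bought at the 340B price
            return claim.get("claim_identifier") == state_rules["identifier_value"]
        if procedure == "billing_npi_list":
            # e.g., Massachusetts-style: covered entity NPI reported on the claim
            return claim.get("covered_entity_npi") in state_rules["carve_in_npis"]
        return False  # no procedure defined for this dispensing method

    rules = {
        "procedures": {"contract_pharmacy": "billing_npi_list",
                       "in_house_pharmacy": "claim_identifier"},
        "identifier_value": "340B",
        "carve_in_npis": {"1234567890"},
    }
    claim = {"dispensing_method": "contract_pharmacy", "covered_entity_npi": "1234567890"}
    print(is_340b_claim(claim, rules))  # True: the claim would be excluded from the rebate request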
Appendix IV: Comments from the Department of Health and Human Services Appendix V: GAO Contact and Staff Acknowledgments <8. GAO Contact> <9. Staff Acknowledgments> In addition to the contact named above, Michelle Rosenberg (Assistant Director), David Lichtenfeld (Analyst-in-Charge), Amanda Cherrin, and Sarah Tempel made key contributions to this report. Also contributing were Jennie Apter, Ethiene Salgado-Rodriguez, and Jennifer Whitworth.
Why GAO Did This Study
Covered entities can receive substantial discounts on outpatient drugs through the 340B Program, an estimated 25 to 50 percent of the cost of the drugs, according to HRSA. Additionally, Medicaid drug rebates are an important source of savings for states and the federal government, saving more than $36 billion in fiscal year 2018. However, ensuring that manufacturers are not subject to both discounts requires coordination within HHS, and between covered entities and states. GAO was asked to provide information on the prevention of duplicate discounts. Among other things, this report examines HHS's efforts to ensure compliance with the prohibition on duplicate discounts. GAO reviewed documentation, including federal policies and those from all 50 states and Washington, D.C. on preventing duplicate discounts. GAO also interviewed officials from CMS, HRSA, and 16 covered entities from four states selected to obtain variation in the types of entities and other factors.
What GAO Found
The 340B Drug Pricing Program (340B Program) and the Medicaid Drug Rebate Program require manufacturers to provide discounts on outpatient drugs in order to have their drugs covered by Medicaid. These discounts take the form of reduced sales prices for covered entities participating in the 340B Program—eligible hospitals and federal grantees—and rebates on drugs dispensed to Medicaid beneficiaries, shared by states and the federal government. However, federal law prohibits subjecting manufacturers to “duplicate discounts” in which drugs provided to Medicaid beneficiaries are subject to both 340B Program discounted prices (i.e., are 340B drugs) and Medicaid rebates. To prevent duplicate discounts, state Medicaid programs must know when covered entities dispense 340B drugs to Medicaid beneficiaries, so the state programs can exclude those drugs from their Medicaid rebate requests.
GAO found that limitations in the Department of Health and Human Services's (HHS) oversight of the 340B and Medicaid Drug Rebate Programs may increase the risk that duplicate discounts occur.
HHS's Centers for Medicare & Medicaid Services (CMS) conducts limited oversight of state Medicaid programs' efforts to prevent duplicate discounts. CMS does not track or review states' policies or procedures for preventing duplicate discounts, and GAO found that the procedures states used to exclude 340B drugs are not always documented or effective at identifying these drugs. As a result, CMS does not have the information needed to effectively ensure that states exclude 340B drugs from Medicaid rebate requests. CMS also does not have a reasonable assurance that states are seeking rebates for all eligible drugs, potentially increasing costs to state and federal governments due to forgone rebates.
HHS's Health Resources and Services Administration's (HRSA) audits of covered entities do not include reviews of states' policies and procedures for the use and identification of 340B drugs. As a result, the audits are unable to determine whether covered entities are following state requirements, and taking the necessary steps to comply with the prohibition on subjecting manufacturers to duplicate discounts.
GAO reported in 2018 that HRSA had not issued guidance on, and did not audit for, duplicate discounts in Medicaid managed care, and recommended that the agency do so because the majority of Medicaid enrollees, prescriptions, and drug spending are in managed care. HRSA is working to determine next steps to address these recommendations. In this report, GAO found that, unlike in Medicaid fee-for-service, when duplicate discounts in Medicaid managed care claims are identified, HRSA does not require covered entities to address them or work with manufacturers to repay them. As a result, manufacturers may be subject to duplicate discounts for drugs provided under managed care.
Given these limitations in federal oversight, HHS does not have reasonable assurance that states and covered entities are complying with the prohibition on duplicate discounts.
What GAO Recommends
GAO is making three recommendations: (1) that CMS ensure that state Medicaid programs have written policies and procedures designed to prevent duplicate discounts and forgone rebates; (2) that HRSA incorporate covered entities' compliance with state policies into its audits; and (3) that HRSA require covered entities to work with manufacturers regarding repayment of identified duplicate discounts in managed care. HHS agreed with the recommendation to CMS but disagreed with those to HRSA. GAO continues to believe these recommendations are needed to improve oversight and the integrity of the 340B Program, as explained in the report.
<1. Background>

<1.1. The Consumer Reporting Process>

Information on consumers is exchanged through a consumer reporting process that includes consumers, CRAs, furnishers of consumer information, and users of consumer reports (see fig. 1).

Consumers are individuals whose information is collected by CRAs and shared by CRAs with users of consumer reports to make decisions about eligibility, such as for credit, insurance, or employment. Information about consumers is generated through their participation in markets for goods and services, such as the use of banking or insurance products.

CRAs are companies that regularly assemble or evaluate consumer information for the purpose of providing consumer reports to third parties. CRAs obtain data from a wide variety of sources, including data furnishers, such as banks and mortgage lenders, and public records. They can generate revenue by selling consumer reports to third parties. In 2012, CFPB estimated that the consumer reporting market consisted of more than 400 CRAs. CFPB estimated in 2015 that the three nationwide CRAs, which also are the three largest CRAs, held information on about 208 million Americans.

Data furnishers are companies that report consumer information to CRAs. Examples of furnishers include banks, payday lenders, mortgage lenders, collection agencies, automobile-finance lenders, and credit card issuers. The information provided by furnishers may include personally identifiable information such as names, addresses, Social Security numbers, and employment data, as well as account status and credit histories. A furnisher may provide CRAs with consumer information on multiple types of products. For example, a financial institution may provide account information on student loans as well as bank deposits. Furnishing of information to CRAs is generally voluntary; therefore, a furnisher is not required to submit all of the records about a consumer's activity on an account to CRAs. Some accounts may only be reported when the payment history turns negative, such as when the debt is transferred to debt collection.

Users of consumer reports include banks, credit card companies, landlords, employers, and other entities that use consumer reports to determine individual consumers' eligibility for housing, employment, or products and services such as credit and insurance. Companies use consumer information compiled in consumer reports to screen for consumer risks and behaviors. For example, banks and credit unions may rely on consumer reports to assess the risk of opening new accounts. Some companies may act as both furnishers and consumer report users.

During the consumer reporting process, a consumer does not necessarily interact with the CRA; however, if consumers discover inaccurate or incomplete information on their consumer reports as a result of, for example, being denied credit, they can file a dispute with the CRA, the furnisher, or both. Consumers may also request copies of their consumer reports from CRAs directly, and CRAs may provide consumers with certain disclosures about how their information is being shared.

Different types of CRAs compile different types of reports using the data they obtain. The three nationwide CRAs produce credit reports and credit scores that can be used to qualify consumers for credit. Credit reports generally contain personally identifiable information, employment information, account status and credit histories, and inquiries made about consumers' credit histories (see fig. 2).
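As a rough illustration of the credit report contents just described, the sketch below models a credit file as a simple data structure with identifying information, account (tradeline) histories, and inquiries. The field names are illustrative assumptions and do not reflect the format any particular CRA uses.

```python
from dataclasses import dataclass, field

@dataclass
class Tradeline:
    furnisher: str                      # e.g., a bank, credit card issuer, or debt collector
    account_type: str                   # mortgage, credit card, student loan, etc.
    status: str                         # current, delinquent, in collection, ...
    payment_history: list[str] = field(default_factory=list)  # month-by-month status codes

@dataclass
class Inquiry:
    requester: str                      # entity that requested the report
    purpose: str                        # e.g., credit application
    date: str                           # YYYY-MM-DD

@dataclass
class CreditFile:
    name: str
    addresses: list[str]
    employer: str
    ssn_last4: str                      # illustrative only; files are keyed on fuller identifiers
    tradelines: list[Tradeline] = field(default_factory=list)
    inquiries: list[Inquiry] = field(default_factory=list)
```

Each tradeline originates with a furnisher, which is why accuracy depends both on the furnished data and on how CRAs attach that data to the right file.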
Other CRAs, called specialty CRAs, provide a variety of specialized reports used for making decisions on employment, rental housing, or other purposes. For example, reports from a specialty background-screening CRA may include some of the same information as a credit report but may also contain criminal history, education verification, and employment history. <1.2. Laws and Regulations Governing Consumer Reporting> Several federal laws govern the consumer reporting industry, including the accuracy, security, use, and sharing of consumer report information. The Fair Credit Reporting Act (FCRA) is the primary federal law governing the collection, assembly, and use of consumer reports. FCRA was enacted to improve the accuracy and integrity of consumer reports, and promote the consumer reporting agencies adoption of reasonable procedures regarding the confidentiality, accuracy, relevancy, and proper use of consumer information. FCRA has been amended several times since it was enacted in 1970. When FCRA was originally enacted, FCRA imposed certain requirements on CRAs and users of consumer reports. Amendments to FCRA, pursuant to the Consumer Credit Reporting Reform Act of 1996 and the Fair and Accurate Credit Transactions Act of 2003, expanded the duties of CRAs, including requirements for dispute investigations, and imposed duties on data furnishers, such as requirements related to data accuracy and dispute investigations. <1.2.1. FCRA Provisions Governing Consumer Report Accuracy> FCRA requires CRAs and furnishers to take steps regarding the accuracy of the information contained in consumer reports. In addition, FCRA s implementing regulation Regulation V as well as FTC s Furnisher Rule more specifically outline furnishers responsibilities regarding accuracy. FCRA requires CRAs to follow reasonable procedures to assure maximum possible accuracy of the information concerning the individual to whom the report relates when preparing consumer reports. FCRA prohibits furnishers from reporting information that they know or have reasonable cause to believe is inaccurate, unless the furnisher has clearly and conspicuously specified to consumers an address whereby consumers can notify the furnisher that specific information is inaccurate. Regulation V and FTC s Furnisher Rule require furnishers to have reasonable written policies and procedures in place regarding the accuracy and integrity of the information they provide to a CRA, where accuracy means that the information is for the right person and reflects the terms of the account and the consumer s performance on the account. They also require furnishers to consider and incorporate, as appropriate, guidelines such as internal controls for accuracy and integrity of furnished information. FCRA requires CRAs and furnishers to address disputes consumers submit to them about the completeness or accuracy of information in their consumer reports. FCRA requires CRAs and Regulation V and FTC s Furnisher Rule require furnishers to conduct reasonable investigations of a consumer s dispute to determine the accuracy of the disputed information. As part of the process, CRAs and furnishers are required to consider all relevant information, including information provided by the consumer. <1.2.2. 
Laws Governing Security of Consumer Data> The Gramm-Leach-Bliley Act (GLBA), provisions in the Federal Trade Commission Act, and provisions of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act), among other laws, govern the security of consumer data. Congress enacted GLBA in part to protect the privacy and security of nonpublic personal information that individuals provide to financial institutions. Many financial institutions furnish consumer data to CRAs. In a prior report, FTC staff told us that CRAs themselves might be considered financial institutions under GLBA if they collect, maintain, and report on consumer information. GLBA includes a provision directing FTC and certain federal regulators including the Federal Reserve, FDIC, and OCC to establish standards relating to administrative, technical, and physical safety for customer records. Specifically, GLBA directs these federal agencies to establish appropriate standards for financial institutions under their jurisdiction to ensure the security and confidentiality of customer records and information; protect against any anticipated threats or hazards to the security or integrity of such records; and protect against unauthorized access to or use of such records or information that could result in substantial harm or inconvenience to any customer. Provisions in the Federal Trade Commission Act prohibiting unfair or deceptive acts or practices and provisions in the Dodd-Frank Act prohibiting unfair, deceptive, or abusive acts or practices also may apply to CRAs protection of consumer data. Specifically, section 5 of the Federal Trade Commission Act prohibits unfair or deceptive acts or practices in or affecting commerce. In the context of privacy and security, these provisions require companies to represent practices to consumers in a truthful manner. For example, we reported previously that FTC has found companies that alleged they were following certain security protections, but did not in fact have such security features, to have engaged in unfair or deceptive practices. Similarly, the Dodd- Frank Act prohibits providers of consumer financial products or services from engaging in unfair, deceptive, or abusive acts or practices. For example, we reported previously that CFPB has alleged that claims to consumers that transactions are safe and secure while simultaneously lacking basic security practices can constitute unfair, deceptive, or abusive acts or practices. <1.2.3. Laws Governing Use and Sharing of Consumer Information> FCRA, GLBA, and the Economic Growth, Regulatory Relief, and Consumer Protection Act govern how consumer information may be used and shared. However, as we have previously reported, consumers have limited legal rights to control what personal information is collected and how it is maintained, used, and shared. For example, consumers generally cannot exercise choice in the consumer reporting market such as by choosing which CRAs maintain their information and do not have legal rights to delete their records with CRAs. FCRA permits CRAs to provide users of consumer reports the report only if the user has a permissible purpose, such as to process a credit application, screen a job applicant, or underwrite an insurance policy, subject to additional limitations where the credit or insurance transaction is not initiated by the consumer. 
FCRA also prohibits a person from using or obtaining a consumer report for any purpose other than that specified to the CRA when the user obtained the report. Further, FCRA requires that CRAs take steps to validate the legitimacy of users and their requests for consumer report information. FCRA and Regulation V also allow consumers to opt out of allowing CRAs to share their information with third parties for prescreened offers and limits the ability of affiliated companies to market products or services to consumers using shared consumer data. GLBA contains provisions regarding the use and sharing of consumer information that apply to CRAs. GLBA restricts the sharing of nonpublic personal information collected by or acquired from financial institutions. In particular, generally a nonaffiliated third party that receives nonpublic personal information from a financial institution faces restrictions on how it may further share or use the information. For example, a third party that receives nonpublic personal information from a financial institution to process consumer account transactions may not use the information for marketing purposes or sell it to another entity for marketing purposes. Consumers can prevent certain users from accessing their consumer reports by placing a security freeze on their consumer reports, which generally prevents the opening of new lines of credit in the consumer s name (provided the creditor checks the consumer s credit). Consumers may place a credit freeze at the three nationwide CRAs free of charge. <2. Oversight of CRAs Is Shared among CFPB and Other Federal and State Agencies> Federal and state agencies share oversight of CRAs and furnishers. At the federal level, CFPB has supervisory authority over certain CRAs and shares enforcement and rulemaking authority with FTC for certain statutes applicable to all CRAs. At the state level, state Attorney General offices have enforcement authority to oversee CRAs, and some state agencies have limited supervisory authority under state laws. Federal agencies that have oversight authority for data furnishers are CFPB, FTC, and prudential regulators the Federal Reserve, FDIC, NCUA, and OCC. Their oversight authority depends on the size as well as the type of the furnisher, such as if the furnisher is a nonbank institution, depository institution, or credit union. <2.1. CFPB Has Supervisory Authority over Certain CRAs and Shares Enforcement Authority with FTC for All CRAs> CFPB is the only federal agency with supervisory authority over CRAs, but it generally shares enforcement authority over CRAs with FTC as well as rulemaking authority for certain statutes applicable to all CRAs (see table 1). CFPB s supervisory authority includes the authority to perform examinations to assess compliance with FCRA and other Federal consumer financial laws and to detect and assess risk to consumers and markets. CFPB may issue matters requiring attention (MRA) based on its examinations. MRAs identify corrective actions that result from examination findings that require the attention of the supervised institution s board of directors or principals, including violations of Federal consumer financial laws. According to CFPB, MRAs are not legally enforceable, but CFPB can use them to determine future supervisory work or the need for potential enforcement actions. CFPB s supervisory authority is generally limited to CRAs that qualify as larger participants in the consumer reporting market. 
In 2012, CFPB defined larger participants of the consumer reporting market to include CRAs with more than $7 million in annual receipts resulting from consumer reporting activities. CFPB s authority does not extend to CRAs that do not participate in activities involving the use of consumer information to make decisions regarding financial products or services. For example, a specialty CRA that only provides consumer reports regarding a consumer s employment history may not be considered a larger participant for the purposes of CFPB supervision, even if the CRA s annual receipts from this activity are more than $7 million. In the preamble to its 2012 rule defining larger participants, CFPB stated that the threshold of more than $7 million is consistent with the objective of supervising market participants that have a significant impact on consumers and is appropriate in light of the highly concentrated nature of the consumer reporting market. In particular, CFPB estimated that out of about 410 CRAs, 30 CRAs met the threshold. Of those 30 CRAs, CFPB estimated that the six largest CRAs generated approximately 85 percent of industry receipts. While CFPB generally has supervisory authority over only larger- participant CRAs, CFPB and FTC generally share enforcement authority over CRAs. For example, they both enforce CRA compliance with most provisions of FCRA and provisions in other laws related to unfair or deceptive acts or practices. Both agencies have similar enforcement tools, including investigation, civil penalties, monetary relief for consumers, and requirements for a company to conduct or refrain from conducting certain acts. CFPB and FTC entered into a memorandum of understanding to coordinate their enforcement efforts, and staff from both agencies told us they take additional actions to coordinate their enforcement activities. For example, FTC staff said that CFPB and FTC maintain a log of each agency s investigations to avoid duplication. Additionally, CFPB and FTC staff said they hold periodic coordination meetings to discuss their enforcement activities. FTC staff told us that because CFPB possesses supervisory authority over the three largest CRAs, FTC has focused its FCRA enforcement efforts on other CRAs. However, FTC staff said that to the extent that the largest CRAs offer nonfinancial products or services, such as employment or tenant background screening, FTC will also investigate these activities. CFPB and FTC each have certain rulemaking authority in connection with statutes that may apply to CRA activities, but generally CFPB has broader authority than FTC. Generally, CFPB has broad authority to issue regulations for Federal consumer financial laws, including most provisions of FCRA, which are applied to all CRAs. FTC has specific rulemaking authority that may apply to CRAs under FCRA, the Federal Trade Commission Act, and GLBA. For example, FTC s rule related to safeguarding the security and confidentiality of customer records under GLBA applies to CRAs. <2.2. State Agencies Have Enforcement Authority over CRAs, and Some State Laws Provide Limited Supervisory Authority> State agencies, such as state Attorney General offices, have enforcement authority to oversee CRAs, and some state agencies have limited supervisory authority under state laws. Federal laws establish enforcement authority for state agencies over CRAs. 
Under FCRA and the Dodd-Frank Act s provisions prohibiting unfair, deceptive, or abusive acts and practices, state Attorney General offices (or another official or agency designated by the state) have certain enforcement authority over some companies, including certain CRAs. However, states are required to coordinate enforcement actions with CFPB and FTC. In addition to enforcement authority under federal laws, state agencies may have enforcement authority under their state laws that apply to CRAs. Staff from state agencies in four selected states Ohio, New York, Maine, and Maryland told us that their states Attorney General offices have enforcement authority over CRAs under state laws prohibiting unfair or deceptive acts or practices. In addition, according to the National Consumer Law Center, every state has a consumer protection law that prohibits deceptive acts or practices and many states prohibit unfair acts or practices, and the enforcement of such state laws typically is the responsibility of a state enforcement agency, such as the state Attorney General offices. Some state Attorney General offices have used their enforcement authority under FCRA and state laws prohibiting unfair or deceptive acts or practices to investigate and take enforcement actions against CRAs. For example, the three nationwide CRAs entered into two separate settlements with 30 state Attorney General offices in 2015 in which the CRAs agreed to implement a number of specific reforms, including reforms related to consumer report accuracy and dispute processes. Under these settlements, the state Attorney General offices claimed the CRAs violated FCRA and the states laws prohibiting unfair or deceptive acts or practices. Additionally, representatives of several states Attorney General offices told us in connection with a prior report that they launched a joint investigation into whether a nationwide CRA violated state laws in a 2017 data breach, including state laws prohibiting unfair or deceptive practices. In addition to the enforcement authority state Attorney General offices have under state laws prohibiting unfair or deceptive acts or practices, some state laws provide state agencies, such as financial regulators and consumer protection bureaus, with oversight authority over CRAs. Our interviews with staff from four selected states agencies Ohio, New York, Maine, and Maryland indicated that CRA oversight authority given to state agencies under state laws varies. Staff from Ohio s Office of the Attorney General told us that Ohio does not have specific laws that provide Ohio state regulators with supervisory, rulemaking, or enforcement authority over CRAs, apart from Ohio laws prohibiting unfair or deceptive acts or practices that provide the Office of the Attorney General with enforcement authority. New York s financial regulator told us that state laws provide the agency with supervisory, enforcement, and rulemaking authority over institutions that provide financial products and services, including certain CRAs. The agency issued a rule in 2018 requiring CRAs reporting on consumers within the state to register with the agency annually and provide information as required by the agency. Staff from Maine s consumer protection agency told us that under Maine law, the agency has supervisory and enforcement authority over CRAs operating within the state. 
Agency staff told us that the agency examines certain CRAs every 2 years for compliance with Maine s consumer reporting laws, such as by reviewing records and documents provided by CRAs. Maryland s financial regulator told us that Maryland s laws provide the agency with enforcement and rulemaking authority over CRAs but not supervisory authority. The agency can adopt regulations in order to administer provisions of Maryland statutes, such as procedures for ensuring accuracy in consumer reports. Additionally, agency staff said that the agency can investigate CRAs using its enforcement authority but cannot conduct supervisory examinations of CRAs. Representatives from several CRAs we interviewed told us that their supervision by state regulators has been limited. Representatives from two CRAs told us that a state agency has examined them. Representatives from three other CRAs we interviewed said they had limited encounters with state-level agencies. However, as previously stated, CFPB, FTC, and state agencies generally have enforcement authority over CRAs regarding consumer financial protection. <2.3. CFPB, FTC, and Prudential Regulators Share Oversight of Data Furnishers> CFPB, FTC, and the prudential regulators the Federal Reserve, FDIC, NCUA, and OCC share federal oversight of data furnishers for compliance with FCRA, among other Federal consumer financial laws. These furnishers include insured depository institutions and credit unions and nondepository institutions, such as student and mortgage loan servicers. Federal agencies generally split oversight of furnishers based on their charter type and asset size. Oversight of furnishers that are depository institutions or credit unions. CFPB and the prudential regulators have supervisory and enforcement authority over insured depository institutions and credit unions for compliance with FCRA and other federal consumer financial laws (see table 2). The Dodd-Frank Act generally divided authority between CFPB and the prudential regulators based on an institution s charter type and the value of an institution s total assets. Assets of more than $10 billion. In general, CFPB has enforcement and supervisory authority for insured depository institutions and credit unions (as well as their affiliates) that have more than $10 billion in total assets for compliance with many Federal consumer financial laws.However, a prudential regulator that is authorized to enforce a Federal consumer financial law may recommend that CFPB initiate an enforcement action, and if CFPB does not, the prudential regulator may initiate an enforcement action. Assets of $10 billion or less. In general, the four prudential regulators have enforcement and supervisory authority over insured depository institutions or credit unions with total assets of $10 billion or less. If, however, CFPB believes that an institution in this category has violated a Federal consumer financial law, it must notify the appropriate prudential regulator in writing and recommend action. Additionally, regardless of an institution s asset size, CFPB generally has rulemaking authority for many Federal consumer financial laws that apply to insured depository institutions and insured credit unions. However, prudential regulators have limited rulemaking authority as related to furnishing activities for certain provisions specifically retained pursuant to the Dodd-Frank Act and FCRA. 
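The general split described above can be summarized as simple decision logic. The sketch below is a simplification under stated assumptions: it covers only insured depository institutions and credit unions, applies the $10 billion total-asset line, maps charters to prudential regulators in the conventional way, and ignores the exceptions and referral mechanisms noted above.

```python
def primary_supervisor(charter: str, total_assets_usd: float) -> str:
    """Greatly simplified: which agency generally supervises an insured depository
    institution or credit union for Federal consumer financial laws, based on the
    $10 billion total-asset threshold described above."""
    if total_assets_usd > 10_000_000_000:
        return "CFPB"  # institutions (and their affiliates) above $10 billion
    # At or below $10 billion, the prudential regulator for the charter supervises.
    prudential = {
        "national bank": "OCC",
        "state member bank": "Federal Reserve",
        "state nonmember bank": "FDIC",
        "federally insured credit union": "NCUA",
    }
    return prudential.get(charter, "depends on charter type")

print(primary_supervisor("federally insured credit union", 2_500_000_000))  # NCUA
print(primary_supervisor("national bank", 50_000_000_000))                  # CFPB
```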
CFPB generally has supervisory and enforcement authority over insured depository institutions and insured credit unions, as well as their affiliates, that have more than $10 billion in total assets, for compliance with Federal consumer financial laws as defined under the Dodd-Frank Wall Street Reform and Consumer Protection Act. CFPB has broad rulemaking authority under many Federal consumer financial laws that apply to depository institutions and credit unions, with limited exceptions. The Board of Governors of the Federal Reserve System, Federal Deposit Insurance Corporation, National Credit Union Administration, and Office of the Comptroller of Currency (collectively called the prudential regulators) generally have supervisory and enforcement authority for Federal consumer financial laws (as defined under the Dodd-Frank Wall Street Reform and Consumer Protection Act) for insured depository institutions and credit unions that have $10 billion or less in total assets. The prudential regulators also have limited rulemaking authority related to furnishing activities under certain Federal consumer financial laws, including parts of FCRA. Oversight of furnishers that are nondepository institutions. CFPB and FTC share oversight of nondepository institutions. In general, CFPB has supervisory authority over certain types of nondepository financial institutions for compliance with FCRA and other Federal consumer financial laws (see table 3). Such institutions include certain kinds of mortgage market participants, private student lenders, and payday lenders. CFPB also has supervisory authority over institutions in markets for consumer financial products or services that it defines as larger participants. For example, CFPB has issued rules defining larger participants for automobile-financing and consumer debt-collection markets. CFPB and FTC share enforcement authority for many different types of nondepository institutions, such as mortgage lenders, payday lenders, debt collectors, and telecommunication companies. FTC additionally has enforcement authority over other nondepository institutions for which CFPB does not have enforcement authority, such as automobile dealers. FTC staff told us that, similar to their coordination efforts for CRAs, FTC and CFPB coordinate their enforcement activities with respect to furnishers. CFPB has rulemaking authority for most consumer financial laws, including FCRA, that may apply to furnishers that are nondepository institutions. In comparison to CFPB, FTC has specific rulemaking authority under FCRA, the Federal Trade Commission Act, and GLBA to promulgate rules that may apply to nondepository institution furnishers. CFPB has supervisory authority over certain nondepository institutions. It shares enforcement authority with FTC for certain nondepository institutions and has broad rulemaking authority for Federal consumer financial laws which apply to many institutions, including those for which it has supervisory jurisdiction. <3. CFPB s Oversight Has Prioritized Supervision of CRAs Based on Perceived Risk, but CFPB Has Not Defined Supervisory Expectations> <3.1. CFPB s Supervision Has Prioritized Certain CRAs and Has Focused on Data Accuracy and Dispute Investigations> According to CFPB, in its oversight of the consumer reporting market, CFPB has prioritized CRAs representing the greatest potential risks to consumers. 
Additionally, CFPB has generally focused on certain compliance areas, particularly data accuracy and investigations conducted in response to consumer disputes. On an annual basis, CFPB updates its plans for supervision of CRAs and furnishers for the next 1 to 2 years. According to CFPB, it assesses specific risks in the market and determines entities and compliance areas to examine. In making these determinations, CFPB stated that it considers factors such as market presence, consumer complaints, its prior supervisory examinations and findings, and its resources. <3.1.1. Supervisory Priorities for CRAs and Data Furnishers> According to CFPB, since the start of its supervisory program for the consumer reporting market in 2012, CFPB has prioritized CRAs that pose the greatest risks to consumers and the marketplace for examinations. Specifically, CFPB staff told us that CFPB has prioritized CRAs that represent a significant share of the market and the largest volume of consumer complaints submitted to CFPB s complaint database. CFPB has also examined one or more specialty CRAs. CFPB stated that in determining which specialty CRAs to examine, it considered factors such as the CRAs market share in the particular consumer reporting products they offer. According to CFPB, in setting supervisory priorities, supervision staff also consulted with stakeholders and considered CFPB s resources and findings from prior examinations that may have indicated weaknesses. CFPB staff said that when CFPB began examining CRAs, its supervisory approach was to examine their compliance management systems first before focusing on other compliance areas. The staff said that the compliance management system reviews helped CFPB to learn about how CRAs operate. Based on the compliance management reviews, CFPB determined that it could review data accuracy, dispute investigations, and other compliance areas by examining the mechanisms CRAs use to comply with FCRA. After examining compliance management systems, CFPB prioritized examining other aspects of compliance related to data accuracy (including processes for monitoring furnishers) and dispute investigations, as well as performing follow-up examinations in those areas. CFPB staff stated that they have chosen to focus on data accuracy and dispute investigations because these were the largest problem areas based on CFPB s assessment of complaint data. Additionally, CFPB identified compliance with the FCRA obligations regarding data accuracy and effective and efficient dispute resolution as agency priorities for the consumer reporting market. CFPB has also examined other CRA compliance areas, including procedures related to suppression and reinsertion of information that CRAs found to be inaccurate, unverifiable, or obsolete; procedures for ensuring a permissible purpose for obtaining consumer reports; and compliance management systems related to data security. According to CFPB, when determining compliance areas for examination, the agency considered factors such as its data on complaints, the extent to which it had previously examined the areas, and concerns identified in prior examinations. In February 2019, we found that CFPB s examination process did not routinely include an assessment of CRAs data security risks, and we recommended that CFPB s prioritization specifically account for data security risk. In conducting its examinations, CFPB has focused on assessing CRA procedures for complying with FCRA rather than on the extent of inaccuracy in consumer reports. 
For example, according to a 2017 CFPB report, CFPB directed one or more CRAs to establish quality control programs to regularly assess the accuracy of information included in consumer reports and to develop systems to measure the accuracy of consumer reports and identify patterns and trends in errors. CFPB staff said CFPB has not monitored the extent of inaccuracy in consumer reports produced by the CRAs it examines. They stated that FCRA requires CRAs and furnishers to follow reasonable procedures with regard to accuracy but does not require or identify acceptable thresholds for accuracy. CFPB staff explained that CFPB s supervisory program has therefore focused on evaluating CRAs compliance with FCRA requirements for reasonable procedures with regard to accuracy and identifying weaknesses in such procedures. According to CFPB, in prioritizing examinations of data furnishers, the agency has primarily considered the furnishers market shares, the number of disputes CRAs received about the furnishers, and the number of complaints CFPB received in its complaint database. CFPB has prioritized large furnishers within their respective markets. For example, CFPB identified one or more student loan servicers furnishing data to CRAs that had large shares of the student loan servicing market. CFPB has also prioritized furnishers with high dispute rates relative to other furnishers within their markets. For example, CFPB identified one or more credit card issuers with higher dispute rates compared to their peers, based on CFPB s review of dispute data provided by CRAs. According to CFPB, it has also considered the results of prior CFPB examinations and input from agency stakeholders. As with CRAs, CFPB s examinations of furnisher activities have focused on accuracy and dispute investigations. In its Supervisory Highlights from March 2017, CFPB stated that the accuracy of consumer report information is a CFPB priority and that furnishers play an important role in ensuring the accuracy of consumer report information through the dispute process. For example, CFPB stated that furnishers timely response to consumer disputes may reduce the effect that inaccurate negative information on a consumer report may have on the consumer. <3.1.2. Examination Results for CRAs and Furnishers> From 2013 through 2018, CFPB examined several CRAs. Many of these examinations evaluated CRA compliance with accuracy and dispute investigation obligations under FCRA, such as by assessing data governance systems, quality control programs, and furnisher oversight and data monitoring. Additionally, some examinations evaluated other FCRA compliance areas, including ensuring that users had permissible purposes for requesting consumer reports and preventing reinsertion of previously deleted information. CFPB s examinations related to data accuracy and dispute investigation obligations resulted in supervisory findings that CFPB directed CRAs to take actions to address. CFPB found that one or more CRAs had minimal compliance mechanisms in place to meet requirements for data accuracy and for dispute investigations (see table 4 for examples of CFPB s supervisory findings and directed actions in these areas). For example, CFPB found that one or more CRAs lacked quality control policies and procedures to test compiled consumer reports for accuracy and had insufficient monitoring and oversight of furnishers that provided information used in the reports. 
CFPB also found that one or more CRAs did not review evidence that consumers provided to support their disputes and relied entirely on the furnishers to investigate the disputes. CFPB directed specific changes in some CRAs policies and procedures for ensuring data accuracy and conducting dispute investigations, including increasing oversight of incoming data from furnishers, developing internal processes to monitor furnisher dispute responses to detect those that may present higher risk of inaccurate data, and enforcing the CRAs obligation to investigate consumer disputes, including review of relevant information provided by consumers. In addition, CFPB directed one or more CRAs to establish quality control programs that regularly assess the accuracy and integrity of compiled consumer reports. In follow-up reviews of some of its supervisory findings, CFPB found that one or more CRAs took actions that resulted in improvements in policies and procedures. For example, CFPB has found that one or more CRAs established quality control programs, including developing tests to identify the extent to which consumer reports are produced using information for the wrong consumer. For other findings, CFPB determined that one or more CRAs had not taken actions to address the findings, or CFPB had not yet conducted follow-up examinations to determine if they had been addressed. From 2013 through 2018, CFPB conducted examinations of several data furnishers. These furnishers were involved in various consumer financial markets, such as automobile loan servicing, debt collection, mortgage servicing, and student loan servicing. CFPB staff told us that until 2017, CFPB generally examined furnishers compliance with FCRA as part of its assessment of compliance with other Federal consumer financial laws and regulations. CFPB staff said that in 2017, CFPB began conducting examinations specifically focused on furnishing activities under FCRA and Regulation V. CFPB stated that this change was made because the review of furnishers practices under FCRA and Regulation V was resource-intensive and merited dedicated resources. In a 2017 report, CFPB stated that it had found numerous furnisher violations of FCRA and Regulation V related to data accuracy and dispute investigations and that it directed furnishers to take corrective actions (see table 5 for examples of CFPB s supervisory findings and directed actions). For example, CFPB found that certain furnishers failed to establish, implement, and maintain reasonable written policies and procedures consistent with Regulation V regarding the accuracy and integrity of the information furnished; provided information to CRAs despite having reasonable cause to believe the information was inaccurate; and lacked policies for their employees on how to conduct reasonable investigation of consumer disputes. In some cases, CFPB s furnisher examinations conducted from 2013 through 2018 resulted in findings related to FCRA and Regulation V that CFPB directed the furnishers to take actions to address. For example, CFPB directed furnishers to develop reasonable written policies and procedures regarding accuracy, to promptly update the information provided to CRAs after determining that the information was not complete or accurate, and to update and implement policies and procedures to ensure disputes are handled in accordance with FCRA requirements. 
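The quality control testing described above, such as tests for reports produced using information for the wrong consumer, can be pictured as a simple sampling check. The sketch below is only a conceptual illustration; the comparison fields and the notion of a sampled error rate are assumptions, not specifications that CFPB directed or that any CRA uses.

```python
from dataclasses import dataclass

@dataclass
class ReportedItem:
    ssn: str            # identifiers carried on a furnished item in a compiled report
    last_name: str

@dataclass
class CompiledReport:
    file_ssn: str       # identifiers on the consumer file the report was built from
    file_last_name: str
    items: list[ReportedItem]

def item_belongs_to_file(item: ReportedItem, report: CompiledReport) -> bool:
    """Crude check that an item's identifiers agree with the file's identifiers."""
    return (item.ssn == report.file_ssn
            and item.last_name.lower() == report.file_last_name.lower())

def mixed_file_rate(sample: list[CompiledReport]) -> float:
    """Share of sampled reports containing at least one item that appears to belong
    to a different consumer (a 'mixed file')."""
    if not sample:
        return 0.0
    mixed = sum(
        1
        for report in sample
        if any(not item_belongs_to_file(item, report) for item in report.items)
    )
    return mixed / len(sample)
```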
CFPB staff told us that the agency decides whether to investigate based on consideration of factors such as consumer complaints, extent of effects on consumers, and severity of misconduct. CFPB staff told us that, in many cases, CFPB has chosen to identify and correct FCRA violations and weaknesses in compliance management systems at CRAs through supervisory activity rather than enforcement investigations. However, CFPB has also investigated and used enforcement remedies, such as civil penalties and injunctive relief, against CRAs and furnishers that violated FCRA or Regulation V. From 2012 through 2018, CFPB settled 26 enforcement actions for violations related to FCRA and Regulation V, including four settlements involving CRAs and 16 settlements involving furnishers. Although CFPB found other FCRA violations in its investigations of these companies, such as those related to permissible purpose for obtaining consumer reports and disclosure issues, most of the violations related to data accuracy and dispute investigations. For example, two of the four FCRA-related settlements with CRAs involved dispute investigations or data accuracy procedures. Of the 16 settlements with furnishers for alleged violations related to FCRA and Regulation V, all contained violations related to the furnishers obligations regarding data accuracy or dispute investigations. CFPB s settlements contained findings similar to its supervisory examination findings. For example, CFPB found that a CRA failed to investigate consumer disputes, and another CRA failed to take steps to ensure its consumer reports were accurate. For furnishers, CFPB found violations including furnishers that failed to establish and implement reasonable written policies and procedures regarding the accuracy and integrity of information provided to CRAs, as well as furnishers that provided inaccurate or incomplete information about consumers to CRAs or failed to conduct reasonable investigations of consumer disputes. <3.2. CFPB Has Not Defined Expectations for CRA Accuracy and Dispute Investigation Procedures> CFPB has not defined its expectations including views on appropriate practices for how CRAs can comply with key FCRA requirements. Among other provisions, FCRA requires CRAs to (1) follow reasonable procedures when preparing a consumer report to assure maximum possible accuracy of consumer report information and (2) conduct reasonable investigations of consumer disputes. However, FCRA does not define what would constitute such reasonable policies and procedures or investigations or stipulate the types of procedures or investigations that would meet the requirements for CRAs. While CFPB has not defined its expectations for these two key FCRA requirements for CRAs, it has adopted Regulation V, which, as required by statute, includes information on CFPB s requirements and guidelines in these areas for furnishers. Regulation V includes requirements and guidelines for reasonable policies and procedures concerning the accuracy and integrity of furnished consumer information and requirements for reasonable investigations of consumer disputes filed directly with the furnishers. In its supervision of furnishers, CFPB has examined furnishers for compliance with the requirements of Regulation V for example, it has found in examinations that furnishers violated Regulation V s requirement to establish written policies and procedures regarding the accuracy of consumer information furnished to a CRA. 
Although CFPB has not similarly established guidelines or otherwise provided information on its supervisory expectations for CRAs, CFPB has found specific weaknesses in CRA practices. In particular, CFPB has cited one or more CRAs for specific deficiencies related to determinations of noncompliance with FCRA provisions regarding reasonable procedures for accuracy and dispute investigations. For example, CFPB has directed one or more CRAs to take specific actions to improve their accuracy procedures. In addition, CFPB found one or more CRAs data governance programs to be decentralized and informal, and it directed the CRAs to develop and implement written policies and procedures to formalize the programs. However, CFPB has not issued any information on its supervisory expectations indicating that reasonable procedures include having formal written policies and procedures. CFPB also has identified FCRA violations related to reasonable dispute investigations. For example, CFPB determined that one or more CRAs failed to review and consider documentation attached by consumers to disputes and relied entirely on furnishers to investigate a dispute therefore violating FCRA requirements for reasonable investigations and for reviewing and considering all relevant information submitted by the consumer and directed the CRAs to independently investigate consumer disputes. However, CFPB has not issued any information on its supervisory expectations to help interpret FCRA s requirement for CRAs to conduct a reasonable investigation of disputes and to review and consider all relevant information, including the expectation that CRAs investigate consumer disputes independently. Based on the FCRA requirements alone, it may be unclear to CRAs and others that these FCRA requirements include performing independent investigations. For example, representatives from one large CRA we interviewed stated that the company is not required to conduct an independent investigation. FCRA instructs CFPB to enact regulations that are necessary to carry out the purposes of the act, which could include issuing implementing regulations for CRAs regarding data accuracy and dispute investigations. Additionally, a 2018 policy statement issued by CFPB and the prudential regulators explains that information on supervisory expectations serves to articulate an agency s general views regarding appropriate practices. The policy statement further states that it is important for such information to provide insight to industry, as well as to supervisory staff, in a transparent way that helps to ensure consistency in the supervisory approach. According to CFPB s Supervisory Highlights from March 2017, CFPB s vision for the consumer reporting system is a system in which furnishers provide and CRAs maintain and distribute data that are accurate, supplemented by an effective dispute management and resolution process for consumers. According to the same CFPB publication, this vision is rooted in the obligations and rights set forth in FCRA and Regulation V. One reason why accuracy guidelines exist for furnishers but not CRAs is that the Fair and Accurate Credit Transactions Act of 2003 added a provision to FCRA requiring the prudential regulators and FTC to establish and maintain guidelines for furnishers regarding the accuracy of consumer data furnished to CRAs and to prescribe regulations requiring furnishers to establish reasonable policies and procedures for implementing the guidelines. 
In 2011, CFPB adopted these regulations as part of Regulation V after assuming rulemaking authority from the other agencies. Neither the Fair and Accurate Credit Transactions Act of 2003 nor any other statutory provision within FCRA includes a similar provision for the agencies to establish and maintain guidelines or provide information concerning supervisory expectations regarding the accuracy of consumer data CRAs maintain, and CFPB has not established guidelines or defined supervisory expectations for CRAs. Since 2015, CFPB s long-term rulemaking agenda has stated that it will evaluate possible policy responses to consumer reporting issues, including potential additional rules or amendments to existing rules governing consumer reporting accuracy and dispute processes. However, as of May 2019, CFPB had not conducted any rulemaking related to these topics. CFPB staff said that a substantial body of case law exists to guide CRAs regarding what practices may be considered compliant with FCRA requirements, including with respect to provisions for reasonable procedures for accuracy and performing reasonable dispute investigations. The staff also said that CFPB staff look to relevant case law when assessing CRA compliance with FCRA during examinations, and that supervisory findings serve to communicate to a supervised CRA how CFPB has applied FCRA during an examination. Providing information to CRAs about its supervisory expectations for these two key FCRA requirements and ways in which CRAs could comply could help CFPB to facilitate CRA compliance with FCRA and achieve agency objectives for the consumer reporting system. Without information about its expectations, CFPB s supervision lacks transparency about what practices it considers appropriate or expects CRAs to adopt to comply with key FCRA requirements. Absent such information from CFPB, representatives from four CRAs we interviewed told us that they look to other sources to understand what CFPB will consider to be noncompliant during examinations and to determine if they are meeting FCRA requirements for maintaining reasonable procedures. These sources include publicly available information such as court cases, presentations from industry associations, CFPB publications highlighting supervisory actions, and public enforcement actions. While CFPB can communicate with individual CRAs during examinations and by directing corrective actions, the impact of such interactions is limited to specific CRAs rather than helping to ensure consistency in its supervisory approach by providing transparent insights to the industry. While relevant case law could provide CRAs with some information regarding practices that have been determined to be compliant with FCRA requirements, there may be a lack of clarity about the extent to which all case law fully reflects CFPB s expectations. By communicating information about its expectations and ways in which CRAs could comply, CFPB could help ensure that CRAs receive complete and consistent information about how to interpret key FCRA requirements. Further, defining its expectations regarding how CRAs can meet key FCRA requirements could help CFPB promote consistency in its supervisory approach by providing examiners with information on the agency s interpretation of FCRA provisions. <4. FTC Enforcement Targets Smaller CRAs, and Prudential Regulators Examine Some Furnishers FCRA Compliance> <4.1. 
FTC Enforcement Actions Have Focused on Smaller CRAs Data Accuracy, Dispute Investigation, and Data Security Practices> FTC s enforcement actions since 2010 have targeted smaller CRAs. FTC staff told us that because CFPB has supervisory authority over the larger CRAs, FTC has focused its FCRA enforcement efforts on other CRAs. Additionally, our review of FTC s enforcement actions showed that FTC generally took enforcement actions against specialty CRAs that are smaller than the nationwide CRAs, such as CRAs conducting background screening. However, FTC staff also told us that they do not use a specific size threshold to initiate investigations against CRAs or furnishers and that they conduct their enforcement on a case-by-case basis, focusing on violations or potential violations of applicable laws. Prior to taking an enforcement action against a company, FTC conducts an investigation to determine if the company has violated a law. Using its investigative authority, FTC can compel companies to produce documents, testimony, and other materials to assist in its investigations. To determine whether to initiate investigations, FTC staff said they consider several sources, including leads from consumer advocacy groups and other FTC investigations, tips from whistleblowers, and monitoring of media reports. FTC staff also said that FTC regularly monitors its consumer complaint database to identify the types of complaints that consumers file and to determine if the activity described in the complaint indicates potential noncompliance with laws and regulations. FTC also can start investigations based on requests, such as by a member of Congress. FTC staff said that the agency targets its investigations based on the extent to which the potential noncompliance may affect a large number of consumers. For example, staff said FTC targets companies for investigation where inaccuracies may be occurring on a large scale. In addition, as we reported in February 2019, FTC staff said that when determining whether to initiate an investigation related to privacy and data security matters, they consider factors such as the companies size and the sensitivity of the data in the companies networks. FTC staff said that the consumer reporting market is a high priority for FTC, and that the accuracy of consumer reports and CRA activities has been a large part of FTC s enforcement priorities. FTC staff said that they initiated about 160 FCRA investigations from 2008 through 2018. FTC staff stated that of the approximately 160 investigations, about 70 related to CRAs or companies, such as data brokers and companies selling public records, that FTC investigated to determine if they were engaged in conduct that would render them CRAs. Additionally, the staff said that about 20 of the approximately 160 investigations related to furnishers, about 55 related to users of consumer reports, and about 15 related to companies that fall under provisions of FCRA that do not require that the entity be a CRA, furnisher, or user. FTC staff stated that among these investigations, FTC investigated specialty CRAs, such as background- screening and check-authorization companies, and furnishers, such as debt collectors, lenders, and telecommunications companies. After an investigation, FTC may initiate an enforcement action if it has reason to believe that a law is being or has been violated. 
From 2010 through 2018, FTC took 30 enforcement actions related to FCRA, including against 14 CRAs, six furnishers, and two companies that acted as both a CRA and furnisher. Of the 30 enforcement actions, 14 contained issues related to data accuracy or disputes and two contained issues related to data security. In total, 20 of the 30 enforcement actions contained issues related to other consumer reporting topics, such as provision of consumer reports without permissible purpose. FTC staff told us that all of the enforcement actions related to FCRA involved injunctive relief. Additionally, some enforcement actions involved civil penalties. For example, in one action, a CRA was ordered to pay civil penalties for failing to use reasonable procedures to ensure the maximum possible accuracy of information it provided to its customers, and for failing to reinvestigate consumer disputes, as required by FCRA. FTC alleged that the CRA failed to take reasonable steps to ensure that the information in the reports was current and reflected updates, such as the expungement of criminal records. FTC staff said that there is no overarching definition regarding the FCRA provision for reasonable procedures for assuring maximum possible accuracy and that FTC determines on a case-by-case basis whether a violation has occurred. FTC staff also said that FTC s enforcement actions provide industry with information on unacceptable practices and that the enforcement actions are closely monitored by the consumer reporting industry. In addition to enforcement actions related to FCRA, FTC staff told us that FTC took five actions against CRAs for unfair or deceptive acts or practices related to data security in the past 10 years. FTC alleged that all five CRAs failed to employ reasonable and appropriate security measures to protect sensitive consumer information. <4.2. Prudential Regulators Said They Examine Some Furnishers FCRA Compliance in Conjunction with Other Laws and Regulations> As discussed previously, the prudential regulators have supervisory and enforcement authority for FCRA over depository institutions and credit unions with total assets of $10 billion or less, some of which act as furnishers. The four prudential regulators told us they do not perform standalone examinations of these financial institutions for FCRA compliance. Rather, they examine for FCRA compliance in conjunction with other consumer financial laws and regulations and as part of examining an institution s compliance with federal consumer protection laws and regulations. For example, OCC staff told us that if an examiner reviews an institution s general compliance management system and identifies compliance, procedural, or other weaknesses related to FCRA, then the examiner would look at those issues more closely. Staff from the four prudential regulators told us they take a risk-based approach to determine the scope of examinations. They said that the approach includes consideration of factors such as an institution s asset size, record of FCRA compliance, and trends in consumer complaints. As part of their compliance examinations from 2013 through 2018, staff from FDIC, the Federal Reserve, and NCUA said their agencies identified multiple FCRA- and Regulation V-related findings, including findings not related to financial institutions furnishing activities. 
FDIC staff said that examiners identified more than 1,200 violations related to FCRA and Regulation V at around 800 institutions, but found that the majority of the violations posed a low level of concern to the institution and consumers. Of these violations, FDIC staff stated that 106 related to furnisher obligations under Regulation V regarding the accuracy and integrity of information furnished to CRAs and that those types of violations were among the five most frequently cited violation topics related to FCRA and Regulation V. Federal Reserve staff said that in examinations that reviewed compliance with FCRA and Regulation V, Federal Reserve examiners cited FCRA and Regulation V about 210 times for an aggregate of about 4,200 related violations. Of these, Federal Reserve staff said the agency cited FCRA and Regulation V provisions related to furnisher accuracy about 20 times and cited an aggregate of about 3,600 violations. NCUA staff stated that NCUA identified 160 FCRA violations at around 150 credit unions. NCUA staff explained that 20 of the violations related to furnisher accuracy and that these types of violations were not among the five most frequently cited violation topics related to FCRA. OCC staff told us that OCC identified no findings related to FCRA or Regulation V from 2013 to 2018. Three prudential regulators stated that they consider the risk that a FCRA or Regulation V violation poses to the depository institution, including risk to consumers. For example, FDIC staff stated that the violations they cited may have had a small but negative effect on consumers, or may have the potential to have a negative effect in the future if uncorrected. FDIC staff added that such violations may also pose compliance and legal risks to the institution. NCUA staff stated that they require corrective action for any FCRA violation, and that they consider the pervasiveness of violations particularly a risk of systemic or repeated violations in determining the appropriate supervisory action. <5. Stakeholders Identified Various Causes for Inaccuracies in Consumer Reports, and Several Processes Exist to Help Promote Accuracy> <5.1. Stakeholders Primarily Attributed Inaccuracies to CRAs Matching Data to the Wrong Consumer Files and Errors in Source Data> CFPB, FTC, and industry stakeholders attributed inaccuracies in consumer reports to several causes, including (1) CRAs matching data to the wrong consumer files due to missing, inaccurate, or inconsistent personally identifiable information; (2) errors in furnished data; (3) timing of data updates; and (4) identity fraud or theft. In particular, CFPB, FTC, and industry stakeholders most frequently cited CRAs mismatching data and errors in furnished data as the primary causes of consumer report inaccuracies. <5.1.1. Matching Furnished Data to the Wrong Consumer Files> Several industry stakeholders identified CRAs mismatching of furnished data or public records to consumer files as a major source of inaccuracies in consumer reports. Two of the consumer groups we interviewed Consumers Union and the National Consumer Law Center also cited mismatching of data to consumer files as a source of inaccuracies in reports they published. In addition, FTC and CFPB reported in separate studies in 2012 that mismatching is a key source of inaccuracies in consumer reports. When CRAs do not correctly match data to the appropriate consumer files, the consumer s file may contain data pertaining to another consumer. 
Alternatively, data can be excluded from the correct consumer s file. For example, if one consumer s report contains information about a different consumer s debt payment history or collections activity, this information would also be missing from the file of the consumer who generated that activity. CFPB reported in its 2012 study that inconsistent, inaccurate, or incomplete personally identifiable information can cause errors in matching furnished data to the correct consumer s file. CFPB, FTC, and industry stakeholders three CRAs, a CRA industry group, and a consumer group identified multiple reasons why personally identifiable information in data furnished to CRAs may be inconsistent, inaccurate, or incomplete, including the following examples: Consumers may use variations of their names when establishing an account with financial institutions (such as Kathy and Katherine). Consumers may change their names as a result of divorce or marriage, but the name change may not be reflected in furnished data. Consumers with suffixes in their names (such as junior or senior) may not consistently use suffixes in their applications. Furnishers may omit personally identifiable information. Furnishers may input consumers information incorrectly during data entry. In addition, CFPB stated in its 2012 report that matching public records to consumers files can be particularly challenging for CRAs because public records rarely contain Social Security numbers. The processes CRAs have in place to match data to consumers files may also contribute to inaccuracies in consumer reports. Generally, CRAs use various combinations of personally identifiable information to match data to consumers. For example, representatives from one CRA said the CRA uses at least the name and address to conduct matches. These representatives said that where only name and address are used, the address is required to be an exact match while the name can be a logical variation determined by the CRA s algorithm. Representatives from another CRA said that the CRA matches public record information using at least the full name and date of birth but not the Social Security number because it is difficult to obtain. According to a CFPB report, the three nationwide CRAs as part of their settlements with multiple state Attorney General offices now require name, address, and Social Security number or date of birth to be present in public records furnished to them and use that personally identifiable information to conduct matches. Representatives from three consumer groups attributed consumer report inaccuracies to how CRAs make such matches. For example, representatives of two consumer groups said that CRAs could reduce inaccuracies arising from mismatching by using stricter requirements, such as requiring both Social Security number and date of birth, in addition to names and addresses, or only matching data to consumers if all nine digits of the Social Security number are present. Altogether, the errors originating from consumers or furnishers, as well as processes that CRAs have in place for matching, affect the accuracy of consumer reports (see fig. 3). CFPB and representatives from several industry stakeholders identified errors in furnished data as a primary cause of consumer report inaccuracies. Even when a CRA matches data to the correct consumer file, the consumer report can still contain inaccuracies if the information a furnisher provided to the CRA regarding the consumer contained errors (see fig. 4). 
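To illustrate the kinds of matching rules described above, the sketch below accepts a furnished record only when the address matches exactly and the name is the same after dropping suffixes and mapping common nicknames, and, for public records, only when a Social Security number or date of birth is present and consistent. This is a hypothetical sketch, not any CRA's actual algorithm; the nickname table, field names, and rules are illustrative assumptions.

```python
# Hypothetical sketch of a file-matching rule of the kind described above.
# Not any CRA's actual algorithm; fields, nickname table, and rules are assumed.

NICKNAMES = {"kathy": "katherine", "bill": "william"}  # assumed nickname table


def normalize_name(name):
    """Lowercase, drop suffixes such as 'Jr.' or 'Sr.', and map common nicknames."""
    parts = [p for p in name.lower().replace(".", "").split()
             if p not in {"jr", "sr", "ii", "iii"}]
    return " ".join(NICKNAMES.get(p, p) for p in parts)


def matches_consumer_file(record, consumer_file, public_record=False):
    """Return True if a furnished record should be attached to this consumer file."""
    # Address must match exactly (after trivial normalization).
    if record["address"].strip().lower() != consumer_file["address"].strip().lower():
        return False
    # Name may be a logical variation (nickname, dropped suffix).
    if normalize_name(record["name"]) != normalize_name(consumer_file["name"]):
        return False
    # Public records: require a full SSN or date of birth that is present and consistent.
    if public_record:
        same_ssn = bool(record.get("ssn")) and record["ssn"] == consumer_file.get("ssn")
        same_dob = bool(record.get("dob")) and record["dob"] == consumer_file.get("dob")
        return same_ssn or same_dob
    return True


# Example: "Kathy Smith Jr." at the same address matches a file kept under "Katherine Smith".
print(matches_consumer_file(
    {"name": "Kathy Smith Jr.", "address": "1 Main St"},
    {"name": "Katherine Smith", "address": "1 Main St"}))  # True
```

Stricter variants of such a rule, such as requiring all nine digits of the Social Security number to match, reduce the risk of mixing two consumers' data but can also keep valid records out of the correct file.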
CFPB has reported and a few CRAs told us that CRAs conduct quality checks to identify issues including blank fields or logical inconsistencies in furnished data, such as reporting of new account balance for closed consumer accounts. The CRA can reject furnished data or ask furnishers to provide corrected data. However, a CFPB report and a few industry stakeholders we interviewed identified weaknesses in furnisher and CRA processes as contributing to errors in furnished data. Two of the consumer groups we interviewed Consumers Union and the National Consumer Law Center also cited weaknesses in furnisher and CRA processes as contributing to errors in furnished data in reports they published. Processes for handling consumer transactions. CFPB reported that problems with processes used by furnishers include failing to update records, failing to post a payment, and misattributing ownership of an account to an individual who is only an authorized user. Processes for handling data accuracy. CFPB also reported and a few stakeholders told us that some furnishers lack processes for ensuring the accuracy of data submitted to CRAs and some CRAs lack processes for ensuring the accuracy of furnished data. CFPB reported and representatives from a few industry stakeholders said that timing of data updates in furnished data and court records could be a source of potential inaccuracies. For example, representatives from one CRA said that an address or name change can take up to two billing cycles to be reflected in a consumer report. Additionally, representatives from a CRA industry group told us that online court records, where CRAs may obtain data, often lag behind paper court records. Representatives from one consumer group pointed to the timing of when furnishers report debt as a source of potential inaccuracies. <5.1.2. Identity Fraud or Theft> CFPB, the National Consumer Law Center, and Consumers Union have reported that identity fraud and theft are causes of inaccuracies in consumer reports. Additionally, representatives from one CRA also told us that identity fraud and theft are primary causes of inaccuracies. For example, identity thieves can create new credit accounts in a consumer s name and let the debt go unpaid. Such debts then may be reflected in the consumer s account and be reported to CRAs if not identified by the furnisher as resulting from fraudulent activity. <5.2. Consumers Can Dispute Potential Inaccuracies in Their Consumer Reports with CRAs or Furnishers> Consumers can dispute the accuracy or completeness of their consumer reports with the CRAs that produced the consumer reports, with the data furnishers, or both. As stated previously, FCRA requires CRAs to conduct reasonable investigations of consumer disputes; FCRA, Regulation V, and FTC s Furnisher Rule, as applicable, generally also require furnishers to conduct reasonable investigations of consumer disputes. If consumers are dissatisfied with the results of the investigations conducted by the CRAs or furnishers, they have a few options, discussed in detail below. FCRA requires CRAs and furnishers to take specific steps to respond to consumer disputes. When a consumer files a dispute with the CRA, the CRA must investigate the dispute internally, and once the CRA notifies the furnisher of the dispute, the furnisher must also investigate the disputed information (see fig. 5). 
If the CRA s internal investigation or the furnisher s investigation finds that the disputed item is inaccurate, incomplete, or cannot be verified, the CRA must delete the disputed item from the consumer s file or modify the information and notify the furnisher of the action taken. The CRA must notify the consumer of the investigation results. Representatives from six of the CRAs we interviewed said that they consider disputes resolved when they or the furnishers complete their investigations and notify consumers of the results, even if the consumer does not agree with the results. If a furnisher does not conduct an investigation and report to the CRA within the time frame required by FCRA, then the CRA must remove the disputed information from the consumer s file. Certain furnisher processes for investigating a dispute received from a CRA and a dispute received directly from the consumer are similar under FCRA. When a furnisher investigates a dispute received from a CRA, the furnisher must report the results of the investigation to the CRA that forwarded the dispute. If the furnisher receives the dispute directly from a consumer, then it must investigate the dispute and report the results of the investigation to the consumer, generally within 30 days (see fig. 6). In both cases, the furnisher must provide corrected information to every CRA to which it provided the information. CRAs may have differing dispute investigation processes in place because of regulatory requirements or because of how they obtained their data. Under FCRA, the nationwide CRAs are required to maintain an automated system through which furnishers can report incomplete or inaccurate information in a consumer s file. The nationwide CRAs share the use of an automated system that sends disputes to furnishers and receives furnishers responses to the disputes. Other CRAs are not required by FCRA to use an automated system. Representatives from one CRA told us that the CRA uses email and phone calls to send disputes to and receive responses from furnishers. Representatives from a CRA industry group, as well as representatives from a background- screening CRA, said that compared to CRAs that obtain information from furnishers, background-screening CRAs generally obtain records from courts and therefore conduct their dispute investigations by confirming court records and contacting court officials. Consumers have several options to address potential inaccuracies in their consumer reports if they disagree with the results of a CRA or furnisher investigation, but these options have potential limitations, according to the stakeholders we interviewed. Placing a consumer statement on the report. Under FCRA, if the investigation does not resolve the dispute (where the dispute is filed with a CRA), the consumer may place a statement regarding the nature of the dispute on the consumer report, such as why the consumer disagreed with the reported item. According to the three nationwide CRAs, such statements alert creditors to the consumer s disagreement. However, the statement does not modify or remove the information that the consumer perceived to be inaccurate from the consumer report, and users of the consumer report may or may not consider the consumer s statement in their decision-making. Resubmitting disputes to CRAs or furnishers. Consumers who believe their disputes have not been satisfactorily resolved may choose to resubmit disputes regarding the same items that they disputed previously to CRAs or to the furnishers. 
If a consumer submits a dispute and does not provide sufficient information to investigate the disputed information or resubmits a dispute and does not provide additional or new supporting information, a CRA or furnisher may determine that the dispute is frivolous or irrelevant and does not warrant an investigation. Representatives from one CRA told us that if the CRA receives a dispute from a consumer about an item that was previously disputed, it would review consumer records to see if it has verified the consumer s information previously. If so, the CRA would ask the consumer to provide additional documentation or to contact the furnisher to obtain support for the dispute. In some cases, consumers may turn to third parties that submit disputes on their behalf. Representatives from one CRA said that the CRA does not investigate disputes that certain third parties submit on behalf of consumers because these third parties dispute the same items repeatedly. Representatives from another CRA said that the CRA reviews third-party dispute requests to determine if the third party has proper authorization from consumers to act on their behalf. Submitting complaints to federal and state agencies. Consumers can submit complaints about inaccuracies in their consumer reports to federal and state agencies, such as CFPB and state Attorney General offices. CFPB has stated that it forwards these complaints to CRAs and works with them to obtain responses within 15 days. Staff from several state agencies we interviewed generally told us that after receiving complaints, they contact CRAs about the complaints to obtain responses but do not compel CRAs to take specific actions. CFPB has reported that CRAs handle complaints similarly to consumer disputes. As a result, although complaints are separate from the dispute process required under FCRA, the effectiveness of this option also depends on the same CRA processes for addressing inaccuracies. However, representatives from two consumer groups said that submitting complaints to CFPB through its consumer complaint database has helped consumers resolve inaccuracies in their reports. Representatives from one consumer group said the publication of complaints in CFPB s database helps to hold CRAs accountable and incentivizes CRAs to respond. Taking private legal action. Under FCRA, consumers have private rights of action or ability to litigate against CRAs and furnishers, under certain provisions. Consumers have brought legal claims against CRAs and furnishers for failure to follow reasonable procedures to assure maximum possible accuracy or conduct a reasonable investigation of a dispute. Under FCRA, consumers can sue a furnisher for failure to conduct a proper investigation when notified by a CRA that a consumer has disputed information provided by the furnisher. However, before initiating suit, the consumer must first dispute the information with the CRA. A consumer may initiate a dispute through a CRA even if the consumer has previously initiated a dispute with the furnisher. Representatives from two consumer groups and one state agency told us that in general, consumer barriers to litigation include that it is time-consuming and has potentially high legal costs and that consumers might be unaware of their legal rights. <5.3. 
Oversight Has Led CRAs to Make Changes to Promote Accuracy, but Challenges to Consumer Report Accuracy Remain> As a result of CFPB and FTC oversight and settlements with multiple state Attorneys General, the nationwide CRAs and several other CRAs have made changes in their policies and procedures to improve data accuracy and processes for addressing inaccuracies in consumer reports. However, CFPB and a few industry stakeholders said that challenges to improving accuracy in consumer reports remain. According to CFPB and nationwide CRAs, examples of the changes that CRAs have made as a result of oversight include the following: Changes as a result of CFPB supervision. According to CFPB, as a result of supervisory findings, one or more CRAs have implemented or changed policies and procedures related to ensuring accuracy and dispute investigations. These include (1) establishing a data- governance structure to oversee furnisher monitoring, such as by developing policies and procedures for ongoing and systemic screening of furnishers; (2) implementing systems to forward relevant dispute documents submitted by consumers to furnishers; and (3) implementing policies and procedures to ensure consideration of all supporting material submitted by consumers. Changes as a result of CFPB and FTC enforcement. As a result of CFPB s and FTC s enforcement, the two agencies directed a few CRAs to revise the procedures they use to match data using personally identifiable information. For example, CFPB directed two background-screening CRAs to revise procedures for assuring accuracy, such as by using algorithms to distinguish records by middle name and to match common names and nicknames. In another example, FTC directed a background-screening CRA that required an exact match of a consumer s last name and a nonexact match of first name, middle name, and date of birth to put in place reasonable procedures to ensure maximum possible accuracy. Changes as a result of state oversight. According to the three nationwide CRAs, they have implemented measures as a result of their 2015 settlements with multiple state Attorneys General. For example, they stated they monitor data furnishers dispute responses and take corrective actions against data furnishers for noncompliance with their dispute investigation responsibilities. Additionally, they established special handling procedures for disputes involving mixed files, fraud, and identity theft and provided CRA employees with discretion to resolve such disputes, rather than relying on furnishers responses. In addition to the changes described above, representatives at various CRAs said they had quality assurance processes in place to help ensure that furnished data are accurate and that furnishers are responsive to disputes. Monitoring of furnished data. Representatives from four CRAs said that they use various mechanisms to monitor furnished data to detect potential inaccuracies and take corrective actions against furnishers that do not comply with data furnishing standards. For example, representatives from three CRAs told us they compare data submissions against industry patterns and historical trends such as data submission history over the past 6 months to identify anomalies that would suggest erroneous data and take actions such as rejecting incoming data and returning data for correction. Representatives from one of these CRAs said that they analyze why a furnisher deviates from industry trends and help the furnisher identify and implement changes. 
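As a rough illustration of the furnisher monitoring described above, the sketch below flags a month's data submission that deviates sharply from a furnisher's own 6-month history. It is a hypothetical example; the threshold, function name, and record counts are assumptions, not any CRA's actual screening criteria.

```python
# Hypothetical sketch of furnisher-submission monitoring: compare the current
# month's record count to the furnisher's recent history and flag anomalies
# for review or rejection. The 50 percent tolerance is an assumed threshold.
from statistics import mean


def screen_submission(history_counts, current_count, tolerance=0.5):
    """Compare this month's record count to the prior 6-month average."""
    if len(history_counts) < 6:
        return "insufficient history"
    baseline = mean(history_counts[-6:])
    if baseline == 0:
        return "flag for review"  # no meaningful history to compare against
    change = abs(current_count - baseline) / baseline
    return "flag for review" if change > tolerance else "accept"


# Example: a furnisher that normally reports about 10,000 records suddenly reports 2,000.
print(screen_submission([9800, 10100, 9900, 10050, 10000, 9950], 2000))  # flag for review
```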
Representatives from four CRAs told us that they provide regular reports, such as monthly reports, on data quality to furnishers. We reported previously that such steps may improve the quality of the information received from furnishers but cannot ensure the accuracy of such data. Monitoring of dispute investigations. Representatives from four CRAs said they have processes in place to help ensure that furnishers are responsive to disputes. For example, representatives from one CRA said that the automated system they use to correspond with furnishers about disputes automatically identifies illogical furnisher responses; the CRA contacts the furnisher to confirm the accuracy of those responses. Representatives from four CRAs told us that they monitor furnisher responses to disputes, such as dispute trends by furnisher type and the rate at which furnishers do not respond to disputes. Although CRAs have made changes to improve processes for ensuring accuracy and addressing inaccuracies, CFPB and industry stakeholders said that challenges remain in these areas. First, CFPB staff told us that the consumer reporting market has historically had comparatively less regulatory intervention than other regulated markets. As a result, the staff said that it has been challenging to change CRAs approach to a proactive one, whereby the CRAs proactively address compliance and change practices, as opposed to a defensive, reactive approach in response to consumer disputes and lawsuits. CFPB staff explained that this has been a focus of CFPB s supervision and said that its examination findings have demonstrated that CRAs can take actions to improve accuracy. Further, representatives from three consumer groups said that consumer report inaccuracy remains a challenge because CRAs lack incentives to be responsive to consumers, in part because the CRAs customers are the users of consumer reports, such as banks and employers, rather than the consumers themselves. Additionally, two industry stakeholders identified gaps in furnisher responsibilities for ensuring accuracy as a challenge. Representatives from one of these stakeholders, a state agency, said that furnishers often do not know their responsibilities for ensuring the accuracy of their data. Representatives from the other stakeholder, a CRA, said that while the CRA has implemented policies and procedures to ensure accuracy in response to CFPB s supervision, furnishers might not have implemented similar policies and procedures to ensure the accuracy of the data provided. <6. Conclusions> Consumer reports affect the lives of millions of Americans because of the role they play in many important decisions, such as whether a lender decides to extend credit and at what terms or whether an employer offers a candidate a job. Therefore, it is important for CRAs to produce reports that are accurate and for consumers to have appropriate procedures available to correct any inaccuracies in their consumer reports, including disputing inaccuracies. We found that opportunities exist for CFPB to improve its oversight of CRAs. As part of its supervision, CFPB has directed CRAs it has examined to make specific changes based on examination findings related to FCRA requirements for (1) reasonable procedures for assuring accuracy and (2) reasonable investigation of consumer disputes. However, CFPB has not defined its expectations for how CRAs can comply with these requirements. 
Providing additional information to CRAs about its expectations for key FCRA requirements could help CFPB achieve its vision of promoting a consumer reporting system where CRAs maintain and distribute accurate data, supplemented by effective dispute resolution processes. Additionally, such information could help to promote consistency and transparency in CFPB s supervisory approach. <7. Recommendations for Executive Action> We are making two recommendations to CFPB: The Director of CFPB should communicate to CRAs its expectations regarding reasonable procedures for assuring maximum possible accuracy of consumer report information. (Recommendation 1) The Director of CFPB should communicate to CRAs its expectations regarding reasonable investigations of consumer disputes. (Recommendation 2) <8. Agency Comments and Our Evaluation> We provided a draft of this report to CFPB, the Federal Reserve, FDIC, FTC, NCUA, and OCC for review and comment. We received written comments from CFPB, which are summarized below and reprinted in appendix II. CFPB, the Federal Reserve, FDIC, and FTC provided technical comments, which we incorporated as appropriate. In email responses, officials indicated that NCUA and OCC did not have any comments on the draft of this report. In its written comments, CFPB neither agreed nor disagreed with the recommendations. CFPB stated that it has made oversight of the consumer reporting market a top priority and that its supervisory reviews of CRAs have focused on evaluating their systems for assuring the accuracy of data used to prepare consumer reports. CFPB noted that CRAs have made significant advances to, among other things, promote greater accuracy. With respect to the first recommendation that CFPB should communicate to CRAs its expectations regarding reasonable procedures for assuring maximum possible accuracy CFPB noted that case law includes interpretations of the reasonableness standard and provides guidance to CRAs about how the standard applies to various factual scenarios. CFPB also noted that it and FTC have settled enforcement actions regarding the reasonableness standard in which each agency provided examples of how it applied the standard and the relevant case law to the facts of each matter and described a consent order with two background-screening companies that made clear that a lack of certain written procedures was not reasonable. Additionally, CFPB noted that its examination procedures discuss factors that would be considered in evaluating compliance with the reasonable procedures standard and that it publishes Supervisory Highlights that document key examination findings. While we agree that case law may provide information to CRAs regarding how courts have interpreted the reasonableness standard in specific circumstances, as we note in the report, there may be a lack of clarity about the extent to which all case law fully reflects CFPB s expectations. Absent additional information from CFPB, the current case law and case- by-case enforcement actions may not best serve to enable CRAs to proactively address compliance practices. More direct communication of CFPB s expectations can provide CRAs with clearer information on what they should be doing and what actions might constitute a FCRA violation. Similarly, while FTC and CFPB have settled actions with certain CRAs regarding reasonable procedures, such settlements may be applicable only to the specific facts and circumstances and the parties involved in those cases. 
CFPB s examination procedures provide information on factors that would be considered in evaluating compliance and areas that may be reviewed in examinations, but they do not provide information on CFPB s oversight expectations regarding how CRAs may comply with the FCRA requirement for reasonable procedures. Likewise, while CFPB s Supervisory Highlights provide information on key examination findings, the Supervisory Highlights do not represent CFPB s expectations for how CRAs may or should comply with the reasonableness standard. For example, the Supervisory Highlights state that the legal violations described are based on particular facts and circumstances and may not lead to such findings under different facts and circumstances. With respect to the second recommendation that CFPB should communicate to CRAs its expectations regarding reasonable investigations of consumer disputes CFPB stated that what qualifies as a reasonable investigation has been articulated in court cases and noted that an FTC report summarizes how the reasonable investigations standard has been interpreted by courts and FTC. While we acknowledge that FTC may have interpreted and the courts may have ruled on this issue, CFPB has not communicated to CRAs specific information on what may and may not qualify as a reasonable investigation. CFPB also stated that it issued a bulletin in September 2013 that is relevant to this recommendation. However, in that bulletin, CFPB restated FCRA requirements and emphasized their importance, but it did not provide further information on what practices may represent a reasonable investigation or what it expects of CRAs. CFPB noted that it has and will continue to communicate its expectations to CRAs. As stated in our report, communicating information about CFPB s compliance expectations and ways in which CRAs could comply could help to ensure that CRAs receive complete and clear information about how to comply with key FCRA requirements. CFPB could provide such information in several ways; for example, CFPB has put consumer reporting issues on its rulemaking agenda since 2015. We maintain that providing additional information to CRAs about its expectations for key FCRA requirements could help CFPB to promote consistency and transparency in its supervisory approach and that the recommendations should be addressed. We are sending copies of this report to the appropriate congressional committees and financial regulators, and other interested parties. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or OrtizA@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology Our objectives for this review were to (1) describe the current oversight framework for consumer reporting agencies (CRA), (2) examine how the Consumer Financial Protection Bureau (CFPB) has overseen CRAs and entities that furnish consumer data, (3) examine how other federal agencies, including the Federal Trade Commission (FTC) and the prudential regulators, have overseen CRAs and entities that furnish consumer data, and (4) identify what is known about the causes of inaccuracies in consumer reports and the processes that are in place to help ensure accuracy. 
Some information has not been included in this public report because CFPB determined it was information prohibited by law from public disclosure. This report omits such information, but we will be issuing a nonpublic version of this report that includes all the information. Although the information provided in this report is more limited, it addresses the same objectives as the sensitive nonpublic report and uses the same methodology. To describe the oversight framework for CRAs, we identified and reviewed relevant federal laws and their application for CRAs and institutions that furnish data to CRAs (called furnishers). We identified and reviewed laws focused on the accuracy of consumer reports, the security of consumer information, and the use and sharing of consumer reports. These laws include the Fair Credit Reporting Act (FCRA) and its implementing regulation, Regulation V, the Gramm-Leach-Bliley Act, the Dodd Frank Wall Street Reform and Consumer Protection Act, the Federal Trade Commission Act, and the Economic Growth, Regulatory Relief, and Consumer Protection Act. We interviewed staff from CFPB, FTC, and the prudential regulators the Board of Governors of the Federal Reserve System, the Federal Deposit Insurance Corporation, the National Credit Union Administration, and the Office of the Comptroller of the Currency about applicable laws and regulations for CRAs and furnishers and their oversight authority over CRAs and furnishers. Additionally, we interviewed five categories of stakeholders to learn about federal and state oversight over CRAs: state agencies such as Attorney General offices and regulators, CRAs, groups representing state agencies, industry groups representing CRAs, and consumer groups. We selected four states Maine, Maryland, New York, and Ohio for a more in-depth review. We chose these states because they had laws and regulations related to consumer reporting or had oversight activities involving CRAs, such as prior enforcement actions. We interviewed staff from state regulatory agencies in Maine, Maryland, and New York, as well as staff from the New York Office of the Attorney General. In addition, we received written responses to our questions from the Ohio Office of the Attorney General. In each case, we asked questions about state oversight of CRAs, including the relevant state laws and state enforcement, rulemaking, and supervisory authorities. We interviewed three nationwide CRAs and four smaller or specialty CRAs that produce or compile consumer reports covering the credit and background-screening markets about federal and state oversight, including applicable laws. We selected these CRAs because of potential differences in oversight based on their size and market. In our selection, we considered the size of the CRA and the number of consumer complaints in CFPB s database. We also interviewed two industry groups representing CRAs (the Consumer Data Industry Association and the National Association of Professional Background Screeners); two groups representing states (the Conference of State Bank Supervisors and the National Conference of State Legislatures); and four consumer groups (Consumers Union, the National Association of Consumer Advocates, the National Consumer Law Center, and U.S. Public Interest Research Group). We asked these groups about federal and state authorities for overseeing CRAs. 
We selected these groups because, based on our analysis of publicly available information and interviews with federal agencies, they are the primary organizations representing stakeholders in our review, such as CRAs, or have existing work, such as reports or testimonies, related to CRAs. The groups we included and the views they represent reflect a range of stakeholders but do not necessarily reflect the full scope of the industry. To examine how CFPB has overseen CRAs and furnishers, we interviewed CFPB staff about CFPB s supervision and enforcement strategies and activities, and we reviewed relevant documents, including supervisory and examination documents. To examine CFPB s supervisory strategies and activities, we reviewed CFPB s supervisory plans that document how CFPB determined which CRAs and furnishers to examine and which compliance areas to examine. We also reviewed CFPB s public reports, such as Supervisory Highlights, and nonpublic examination documents to evaluate CFPB s supervisory activities for both CRAs and furnishers. To learn about CFPB s enforcement strategies and enforcement activities in the consumer reporting market, we reviewed the types of enforcement actions available to CFPB for violations of relevant laws, and we identified specific enforcement actions CFPB brought against CRAs and furnishers for violations related to FCRA and Regulation V from 2012 through 2018. We identified these enforcement actions by reviewing CFPB s publicly available enforcement activities on its website, and we corroborated our results with CFPB. We also interviewed stakeholders, including CRAs, consumer groups, state agencies, and state groups, to obtain their views on CFPB s oversight. To examine how FTC and the prudential regulators have overseen CRAs and furnishers, we interviewed staff from FTC and the prudential regulators to discuss the agencies oversight and enforcement activities. To learn about FTC s enforcement strategies and activities in the consumer reporting market, we reviewed the types of enforcement actions available to FTC for violations of relevant laws, interviewed FTC staff regarding the process for initiating investigations and the investigations FTC conducted, and identified specific enforcement actions brought against CRAs and furnishers for violations related to FCRA, Regulation V, and FTC s Furnisher Rule from 2010 through 2018. We identified these enforcement actions by reviewing FTC s publicly available enforcement activities on its website, and we corroborated our results with FTC. To learn about prudential regulators activities, we reviewed the prudential regulators policies and procedures for examining furnishers and interviewed regulators staff. We also collected information from the regulators about their FCRA-related findings for furnishers from 2013 through 2018. To identify what is known about the causes of inaccuracies in consumer reports and the processes that are currently in place to help ensure accuracy, we conducted interviews with stakeholders. In particular, we interviewed staff from CFPB, FTC, the prudential regulators, and the state agencies to learn about what they believe are the causes of inaccuracies in consumer reports and the options available to consumers to address inaccuracies. 
Similarly, we interviewed staff at three nationwide CRAs and four smaller or specialty CRAs about the causes of inaccuracies and the processes they have in place for ensuring accuracy and addressing inaccuracies, including the processes in place to meet FCRA requirements for addressing consumer disputes about consumer report information. Additionally, we spoke with staff from four consumer and two industry groups (described above) to gain their perspectives on the causes of inaccuracies and processes in place to address them. We also conducted a literature search on the causes of inaccuracies in consumer reports and processes in place to help ensure accuracy. The search covered academic literature and court cases from 2008 through 2018 and used subject and keyword searches of various databases, such as ProQuest, Westlaw, and CQ. The literature search resulted in limited relevant information. However, we identified reports from CFPB and FTC that included information on the causes of inaccuracies in consumer reports, as well as information CFPB has published, such as Supervisory Highlights, on the processes CRAs have in place to help ensure accuracy. Additionally, through our interviews, we identified information that stakeholders, such as the National Consumer Law Center, have published on these issues. We conducted this performance audit from July 2018 to July 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Bureau of Consumer Financial Protection Appendix III: GAO Contact and Staff Acknowledgments <9. GAO Contact> <10. Staff Acknowledgments> In addition to the contact named above, Kevin Averyt (Assistant Director), Weifei Zheng (Analyst in Charge), Yue Pui Chin, Sergio Enriquez, Marc Molino, Stephen Ruszczyk, Kelsey Sagawa, Jessica Sandler, Jennifer Schwartz, and Farrah Stone made key contributions to this report.
Why GAO Did This Study
CRAs collect data from various sources, such as banks and credit card companies, to create consumer reports that they sell to third parties. The three largest CRAs hold information on more than 200 million Americans.
The Economic Growth, Regulatory Relief, and Consumer Protection Act, enacted in 2018, included a provision for GAO to examine issues related to the consumer reporting market. This report examines, among other objectives, the causes of consumer report inaccuracies and how CFPB has overseen CRAs.
To answer these questions, GAO reviewed relevant laws, regulations, and agency documents related to CRA oversight. GAO interviewed representatives of federal agencies and stakeholders, including a nongeneralizable selection of state agencies from four states that had laws or oversight activities involving CRAs and seven CRAs selected based on size and the type of consumer reports produced. GAO also interviewed groups representing state agencies, consumers, and CRAs selected to reflect a range of stakeholders or based on their work related to CRAs.
What GAO Found
Businesses and other entities use consumer reports to make decisions about consumers, such as whether they are eligible for credit, employment, or insurance. Consumer report inaccuracies can negatively affect such decisions. The Consumer Financial Protection Bureau (CFPB) and other stakeholders identified various causes of consumer report inaccuracies, such as errors in the data collected by consumer reporting agencies (CRA) and CRAs not matching data to the correct consumer.
In 2010, CFPB was granted supervisory and enforcement authority over CRAs. In using its oversight authorities, CFPB has prioritized CRAs that pose the greatest potential risks to consumers—such as those with significant market shares and large volumes of consumer complaints—for examination. CFPB's oversight has generally focused on assessing compliance with Fair Credit Reporting Act (FCRA) requirements regarding accuracy and the investigations CRAs conduct in response to consumer disputes. For example, since 2013, CFPB has conducted examinations of several CRAs and directed specific changes in CRAs' policies and procedures for ensuring data accuracy and conducting dispute investigations.
CFPB has not defined its expectations for how CRAs can comply with key statutory requirements. FCRA requires CRAs (1) to follow reasonable procedures for ensuring maximum possible accuracy and (2) to conduct reasonable investigations of consumer disputes. CFPB has identified deficiencies related to these requirements in its CRA examinations, but it has not defined its expectations—such as by communicating information on appropriate practices—for how CRAs can comply with these requirements. Absent such information, staff from four CRAs GAO interviewed said that they look to other sources, such as court cases or industry presentations, to understand what CFPB will consider to be noncompliant during examinations. A 2018 policy statement issued by CFPB and other regulators highlighted the important role of supervisory expectations in helping to ensure consistency in supervision by providing transparent insight to industry and to supervisory staff. By providing information to CRAs about its expectations for complying with key FCRA requirements, CFPB could help achieve its goal of accurate consumer reporting and effective dispute resolution processes. Such information also could help to promote consistency and transparency in CFPB's supervisory approach.
What GAO Recommends
CFPB should communicate to CRAs its expectations regarding (1) reasonable procedures for assuring maximum possible accuracy and (2) reasonable investigations of consumer disputes. CFPB described actions it has taken to provide information to CRAs. GAO maintains that communicating expectations in these two areas is beneficial, as discussed in the report.
<1. Background> CMS and states jointly administer the Medicaid program and generally share in the financing of Medicaid payments according to a formula established in law. States may deliver health care services to Medicaid beneficiaries through fee-for-service payments to participating providers or through Medicaid managed care plans, through which states pay plans a fixed amount per beneficiary, typically per member per month, to provide a specific set of Medicaid-covered services. States finance their share (nonfederal share) of Medicaid program spending in a variety of ways, including state funds, such as state general funds appropriated to the state Medicaid program and funds collected through taxes levied on health care providers. Within limits, however, states may also use other sources of funds, including funding from local government providers, such as county-owned or county-operated hospitals, or from local governments on behalf of government providers. Federal law allows states to finance up to 60 percent of the nonfederal share of Medicaid payments from local government funds. <1.1. Medicaid Payments to Hospitals> State Medicaid agencies have two primary mechanisms for making payments to hospitals, base payments and supplemental payments, and both can qualify for federal matching funds. Base payments are payments to hospitals for specific services provided to Medicaid beneficiaries through both fee-for-service and managed care. These payments are set by state Medicaid programs or managed care plans, and can vary considerably across states for the same services. Payment amounts for the same service may also vary within a state. States' Medicaid base payments are typically lower than other payers', and often are below the costs of providing services. Supplemental payments are typically lump sum payments made to hospitals that are not specifically tied to an individual's care. Like all Medicaid payments, supplemental payments are required to be economical and efficient. Supplemental payments can be grouped into two broad categories: (1) DSH payments, which states are required to make to certain hospitals; and (2) non-DSH payments, which states are allowed to make, but are not required by law. <1.1.1. DSH Payments> DSH payments are designed to help offset uncompensated care costs for hospitals serving a high proportion of Medicaid beneficiaries and uninsured low-income patients. In fiscal year 2017, total DSH payments to hospitals nationally were about $18.1 billion. States may distribute DSH payments to any eligible hospital in the state; however, under federal law, the total amount of DSH payments to a hospital must not be more than the total amount of uncompensated care provided by the hospital (both the Medicaid shortfall and uncompensated costs for care for the uninsured). To be eligible for a DSH payment, hospitals must meet minimum requirements such as having a Medicaid inpatient utilization rate of at least 1 percent. States are required to make DSH payments to certain hospitals, termed deemed-DSH hospitals, with a Medicaid inpatient utilization rate of at least one standard deviation above the mean for hospitals in the state that receive Medicaid payments, or a low-income utilization rate that exceeds 25 percent. The amount of federal funding each state may claim for DSH payments is limited by federal law.
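Before turning to the federal allotment limits, the hospital-level rules described above can be expressed as simple arithmetic. The sketch below uses entirely hypothetical hospital figures; the function names, numbers, and thresholds are illustrative assumptions rather than the statutory calculation methodology.

```python
# Illustrative sketch of the DSH rules described above, using hypothetical numbers.
from statistics import mean, stdev


def is_deemed_dsh(hospital_rate, state_rates, low_income_rate):
    """Deemed-DSH test: Medicaid inpatient utilization at least one standard
    deviation above the state mean, or a low-income utilization rate above 25 percent."""
    threshold = mean(state_rates) + stdev(state_rates)
    return hospital_rate >= threshold or low_income_rate > 0.25


def hospital_dsh_limit(medicaid_costs, medicaid_payments, uninsured_costs):
    """Hospital-specific DSH limit: total uncompensated care, i.e., the Medicaid
    shortfall (costs minus payments, which may be negative) plus uncompensated
    costs of care for the uninsured."""
    medicaid_shortfall = medicaid_costs - medicaid_payments
    return medicaid_shortfall + uninsured_costs


# Hypothetical hospital: $40 million in Medicaid costs, $32 million in Medicaid
# payments, and $10 million in uninsured care costs.
print(hospital_dsh_limit(40_000_000, 32_000_000, 10_000_000))  # 18000000
print(is_deemed_dsh(0.40, [0.10, 0.15, 0.20, 0.25, 0.40], 0.10))  # True
```

Under these illustrative figures, the hospital's uncompensated care, and therefore the most it could receive in DSH payments, is $18 million.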
Since fiscal year 1993, each state is subject to a federal DSH allotment that establishes the maximum federal funding available for the payments. A state s DSH allotment is largely based on its fiscal year 1992 DSH spending, although Congress has since made several incremental adjustments to these allotments. Ultimately, however, the states that spent the most in fiscal year 1992 continue to have the largest allotments; conversely, the states that spent the least in fiscal year 1992 have the smallest allotments. States may choose to make DSH payments to institutions for mental disease (IMD), which can include state-operated psychiatric hospitals. Prior to 1997, a large share of DSH payments went to state-operated IMDs, where they were used to pay for services not covered by Medicaid and any remaining funds were returned to the state treasuries. In general, Medicaid excludes fee-for-service base payments for beneficiaries aged 21-64 who are residents of IMDs called the IMD exclusion and using DSH payments allowed states to support the costs of IMDs. In 1997, Congress restricted the total amount of DSH payments a state could make to IMDs as a group by establishing an annual limit on payments to IMDs for each state. Any unspent funds within the IMD-designated limit can be used for other hospital types. <1.1.2. Non-DSH Payments> Non-DSH payments include four types of supplemental payments that states may make, but are not required to do so, to hospitals and other providers. Medicaid upper payment limit (UPL) payments are lump-sum payments that are made in addition to fee-for-service base payments. The UPL is a limit or ceiling on the amount of a state s Medicaid payments for which the federal government will match spending. The UPL is based on the difference between Medicaid fee-for-service base payments and an estimate of what Medicare would pay for comparable services. The UPL is not a hospital-specific limit, but is applied in the aggregate across certain categories of providers. States have some flexibility in deciding which hospitals will receive a UPL payment, and how to allocate UPL payments among hospitals. In fiscal year 2017, UPL payments totaled nearly $13 billion. Uncompensated care pool payments are payments that some states make to hospitals specifically for uncompensated care costs in conjunction with section 1115 demonstration waivers and pilot projects for which they have received approval from the Secretary of HHS. Specifically, section 1115 of the Social Security Act authorizes the Secretary of HHS to waive certain federal Medicaid requirements and allow costs that would not otherwise be eligible for federal matching funds for experimental, pilot, or demonstration programs that, in the Secretary s judgment, are likely to assist in promoting Medicaid objectives. States have received approval to make supplemental payments for hospital uncompensated care in their Medicaid programs. In fiscal year 2017, states reported total spending of about $8 billion through uncompensated care pools. Delivery system reform incentive payment (DSRIP) programs, which have also been authorized under section 1115 demonstrations, allow states to make supplemental payments to providers engaging in various improvement projects that align with state delivery system reform objectives. Examples of reform objectives include improving care for patients with specific conditions or increasing capacity. In fiscal year 2017, DSRIP program payments totaled about $7.3 billion. 
Graduate medical education payments help support teaching hospitals, and can include teaching costs, such as physician resident salaries, though states are not required to make such payments to teaching hospitals. States have significant flexibility in designing and administering these payments; however, the payments are subject to the UPL. In fiscal year 2017, Medicaid graduate medical education payments totaled about $2 billion. <1.2. The Patient Protection and Affordable Care Act (PPACA) and DSH Allotments> Effective January 1, 2014, PPACA allowed states to expand Medicaid eligibility to certain non-pregnant, non-elderly individuals. PPACA also required a phased reduction in DSH allotments to states, reflecting the expectation that the number of uninsured individuals would decline and so would hospital spending on uncompensated care. As of May 2019, there were 37 expansion states those states that chose to expand Medicaid eligibility and 14 non-expansion states those that did not choose to expand Medicaid. Congress has delayed the reduction in DSH allotments several times. The reductions are scheduled to begin in fiscal year 2020. Between 2013 and 2014, both expansion and non-expansion states reported different degrees of change in care for the uninsured and Medicaid shortfall. In particular, MACPAC reported that between 2013 and 2014, the year in which most state Medicaid expansions took effect, expansion states uncompensated care costs for the uninsured declined by $2.2 billion (19 percent), while non-expansion states uncompensated care costs for the uninsured increased by $0.6 billion (5 percent). During the same period, expansion states Medicaid shortfall increased by $2.2 billion (36 percent), and non-expansion states Medicaid shortfall increased by $1.8 billion (546 percent). <2. States Increasingly Made Supplemental Payments to Hospitals> States use of supplemental payments has grown in recent decades, partly due to the flexibility supplemental payments provide. This flexibility is twofold: supplemental payments provide states with flexibility in financing the nonfederal share of supplemental payments, and flexibility to target the payments to specific hospitals or types of hospitals. <2.1. States Increasingly Made Supplemental Payments to Hospitals, while Reducing or Freezing Hospitals Base Payments> Total supplemental payments to hospitals have grown over time, while states base payments have often been frozen or reduced. Congress imposed limits on DSH spending in the 1990s, and since then states use of non-DSH payments has grown. Between fiscal year 2000 and fiscal year 2017, DSH payments increased about 16 percent, from $15.6 billion to $18.1 billion. In prior work, we reported that in fiscal year 2006 state Medicaid agencies made at least $6.3 billion in non-DSH payments, though the exact amounts are unknown, because states did not report all their payments to CMS. By fiscal year 2017, the amount of non-DSH payments had increased to $30.4 billion. Both uncompensated care pool payments and DSRIP programs are relatively new types of non-DSH payments, and thus contributed to the overall increase in supplemental payments. In prior work, we reported that, as of February 2017, CMS authorized nearly $38.7 billion in DSRIP spending nonconsecutively over 2011 to 2022 in four states with the largest DSRIP programs. Our prior work found that new or increased supplemental payments helped mitigate the increasing gap between Medicaid base payments and hospital costs. 
While supplemental payments increased, the number of states reducing or freezing base payments to hospitals has increased, in part, because states reported challenges paying the nonfederal share with state general funds. Our work found that from 2008 to 2011, across all providers, the number of states making at least one base payment reduction grew from 13 to 34, while the number of states increasing at least one base payment fell over the same period. Across all 4 years, states most frequently reported reducing base payments for hospitals. The Kaiser Family Foundation s annual survey data shows the trend continued in more recent years. Specifically, over half of states froze or reduced inpatient hospital base payments each fiscal year from 2011 to 2018, ranging from a low of 28 states in 2011 and 2018, to a high of 39 states in 2012. (See table 1.) In a September 2018 study of five states, MACPAC found that hospitals and state Medicaid officials often prefer increases to supplemental payments rather than increases to base payments, because supplemental payments come with more predictability. MACPAC found that all five states reported reducing hospital base payments from 2007 to 2011. After 2011, all five states kept base payments frozen with no adjustment for inflation. As a result, base payments to hospitals in these states were lower in 2018 relative to other payers and hospital costs. To address the growing gap between base payments and hospital costs, states collaborated with hospitals to establish or increase supplemental payments. In the five states, supplemental payments ranged from 18 percent to 61 percent of total hospital payments. <2.2. States Have Relied on Multiple Sources of Funds to Finance Their Nonfederal Share> More often than with base payments, states have relied on sources other than state general funds to finance the nonfederal share of supplemental payments. For example, states may receive funds for the nonfederal share of supplemental payments through taxes levied on health care providers. (See fig. 1.) In previous work, we found that funds from local governments and health care providers constituted about 50 percent of the nonfederal share for DSH and non-DSH payments in fiscal years 2008 through 2012. In contrast, funds from local governments and health care providers constituted approximately 30 percent of base payments during the same time period. The MACPAC study of five states also found that states and hospitals preferred supplemental payments, because hospitals can track the extent to which their tax assessments are recouped through supplemental payments. In a July 2014 report, we found that the number of states relying on provider taxes increased, and that provider tax revenues were then used for the nonfederal share of supplemental payments. In particular, the total number of provider taxes increased from 119 taxes in 42 states in 2008 to 159 taxes in 47 states in 2012 a 34 percent increase. Kaiser Family Foundation data show this trend has continued. According to state survey data, the number of states using inpatient hospital provider taxes has steadily increased from fiscal year 2011 to 2018, ranging from a low of 34 states in 2011, to a high of 42 states in 2017 and 2018. (See table 2.) <2.3. 
Supplemental Payments Allow States to Target Payments to Certain Hospitals or Types of Hospitals> Supplemental payments provide states with flexibility that allows them to address states' goals by targeting payments to particular hospitals or hospital types, such as public hospitals or teaching hospitals. States may choose to target supplemental payments to hospitals that may not have the highest uncompensated care costs. Our prior work found some states' DSH payments were not proportionally targeted to hospitals with the highest uncompensated care costs, which DSH payments are designed to address. Based on our prior analysis of annual hospital-specific 2010 DSH data, we reported that in 30 of 42 states, hospitals receiving the largest share of state DSH payments did not provide the largest share of total uncompensated care. Moreover, our prior review of the independent DSH audits found that 41 states made DSH payments to 717 hospitals that exceeded the individual hospitals' uncompensated care costs as calculated by the auditors, 9 states did not accurately calculate the uncompensated care costs of 206 hospitals in those states for purposes of making DSH payments, and 15 states made DSH payments to a total of 58 hospitals that either did not retain their DSH payments or were not qualified to receive them. States' criteria for identifying eligible DSH hospitals and how much funding they receive vary, but were often related to hospital ownership, hospital type, and geographic factors. Our prior work found that 2006 DSH payments to individual hospitals varied widely, ranging from 1 cent to about $395 million. For example, California reported both the lowest and highest 2006 DSH payment amounts; the state made a total of only $160 in DSH payments to 96 private hospitals and paid $2 billion in DSH payments to 51 government hospitals. Based on our analysis of 2014 DSH audits, several states targeted DSH payments to certain hospitals and hospital types, including the following: Public hospitals: States targeting nearly all (93 percent or higher) of their DSH funding to public hospitals included Arkansas (99 percent), California (100 percent), Illinois (99 percent), Iowa (93 percent), Maine (100 percent), and Washington (97 percent). Nonprofit hospitals: Nebraska targeted 98 percent of its DSH funding to nonprofit hospitals. High-teaching hospitals: Arkansas targeted 98 percent of DSH funding to high-teaching hospitals, defined as teaching hospitals with an intern-and-resident-to-bed ratio of 0.25 or greater. IMDs: Maine makes DSH payments to the two state-run IMDs. In 2014, 18 states directed their entire IMD-designated DSH limit to IMDs. (For additional information on DSH payments to IMDs, see table 9 in app. II.) Similarly, states can target UPL payments to certain hospitals. We and the HHS Office of the Inspector General have reported that some states concentrated these payments among a small number of providers. <2.4. GAO and Others Have Noted Concerns With States' Use of Supplemental Payments> Our work has highlighted a number of concerns about the use of non-DSH payments, including the need for transparent reporting, the need to ensure that expenditures meet Medicaid purposes, and concerns about arrangements that shift costs from the states to the federal government.
For example, in November 2012, we recommended that Congress consider requiring CMS to improve the transparency of and accountability for non-DSH payments by requiring facility-specific payment reporting and annual audits. The report noted that the annual DSH reports and audits that states began submitting in 2010 were important steps toward improving transparency and accountability for Medicaid DSH payments; however, similar information is lacking for non-DSH payments. Moreover, the report stated that the limited information available on non-DSH payments shows that a large share of these payments is paid to a small number of hospitals; when these payments are combined with Medicaid base payments, hundreds of hospitals may be receiving Medicaid payments well in excess of their actual costs of providing Medicaid services. As of March 2019, Congress has not taken any action, but CMS announced in fall 2018 that it was planning a proposed rule on supplemental payments that, if finalized, would improve transparency by requiring states to provide CMS with certain information on Medicaid supplemental payments. The agency plans to release the proposed rule for comment by fall 2019. In 2014, we recommended that CMS develop a data collection strategy ensuring states report accurate and complete data on all sources of funds used to finance the nonfederal share of Medicaid payments. Such data are needed to (1) track trends in financing the nonfederal share, and (2) oversee compliance with current limits on sources of financing the nonfederal share. CMS did not concur with our recommendation, but did acknowledge the agency does not have sufficient data to oversee compliance with the 60 percent limit on local government contributions to a state's nonfederal share. <3. Hospital Uncompensated Care Costs and DSH Payments Varied by State; Some Types of Hospitals Received a Greater Proportion of DSH Payments> <3.1. Uncompensated Care Costs Varied by State and Were Mainly for Costs Related to Treating Uninsured Patients> Among hospitals receiving DSH payments in 2014, total uncompensated care costs varied by state, ranging from $5.9 million in North Dakota to $6.2 billion in New York. Among these hospitals, most uncompensated care costs were related to care for uninsured patients, rather than the Medicaid shortfall. For example, among hospitals receiving DSH payments in the 48 states studied: Costs related to care for the uninsured comprised about two-thirds (67.9 percent) of total uncompensated care costs for DSH hospitals. The remaining share of DSH hospital uncompensated care costs consisted of the Medicaid shortfall. In 34 states, costs for care for the uninsured exceeded the Medicaid shortfall. In the remaining 14 states, the Medicaid shortfall exceeded costs related to care for the uninsured. In 15 states, Medicaid paid hospitals more than the total cost of services provided to Medicaid beneficiaries, resulting in a surplus of Medicaid payments even prior to receiving DSH payments. Termed a negative Medicaid shortfall, these surplus funds can be the result of non-DSH Medicaid supplemental payments. The remaining 33 states had some Medicaid shortfall. (See table 3.) No states had a surplus of total uncompensated care costs. (For additional information on state uncompensated care costs and DSH payments in 2014, see table 10 in app. II.)
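To make the cost relationships discussed in this section concrete, the following is a minimal sketch that uses hypothetical figures, not data from the DSH audits, to show how uncompensated care costs and the Medicaid shortfall relate under the definitions above.

```python
# Minimal sketch using hypothetical figures (not DSH audit data) to illustrate
# the definitions above: uncompensated care costs are the sum of the costs of
# care for the uninsured and the Medicaid shortfall, and the shortfall can be
# negative (a surplus) when Medicaid payments exceed Medicaid costs.

def medicaid_shortfall(medicaid_costs, medicaid_payments):
    """Gap between a hospital's costs for Medicaid patients and its Medicaid payments.

    A negative result is a "negative Medicaid shortfall" (a surplus), which can
    occur when base and non-DSH supplemental payments exceed Medicaid costs.
    """
    return medicaid_costs - medicaid_payments

def uncompensated_care_costs(uninsured_care_costs, medicaid_costs, medicaid_payments):
    """Total uncompensated care costs under the Medicaid DSH definition."""
    return uninsured_care_costs + medicaid_shortfall(medicaid_costs, medicaid_payments)

# Hypothetical hospital: $10 million in uninsured care costs, $50 million in
# Medicaid costs, and $46 million in Medicaid payments before DSH.
total = uncompensated_care_costs(10_000_000, 50_000_000, 46_000_000)
print(total)  # 14,000,000: $10M uninsured care plus a $4M Medicaid shortfall
```

In this sketch, a hospital with a negative Medicaid shortfall would have total uncompensated care costs lower than its costs of care for the uninsured, which is consistent with the state-level patterns described above.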
<3.2. Across States, DSH Payments Varied Significantly in Amounts, Percentage of Uncompensated Care Costs Covered, and Percentage of States' Medicaid Spending on Hospitals> DSH payments, both the federal and nonfederal share, varied significantly in the amount that each state paid to hospitals in 2014. (See table 4 and fig. 2.) Wyoming made the smallest amount of DSH payments at about $500,000, while New York made the largest amount in DSH payments at $3.5 billion. Differences in DSH payments are largely the result of differences in the state allocations established in law. The proportion of total DSH hospital uncompensated care costs covered by total DSH payments in 2014 also varied considerably by state. Nationally, DSH payments ($18.3 billion) covered about half of DSH hospital uncompensated care costs ($36.2 billion). Nineteen states made DSH payments totaling at least 50 percent of uncompensated care costs for the states' DSH hospitals, while 29 states made DSH payments of less than 50 percent of uncompensated care costs for the states' DSH hospitals. (See table 5.) Four states (California, Illinois, Maryland, and Missouri) made DSH payments that exceeded aggregate hospital uncompensated care costs. (For additional information on state uncompensated care costs and DSH payments in 2014, see table 10 in app. II.) Among hospitals receiving them, DSH payments accounted for 13.6 percent of total Medicaid payments, nationally, but there was considerable variation across states. For example, DSH payments comprised 96.6 percent of Medicaid payments to DSH hospitals in Maine and 0.7 percent of Medicaid payments to DSH hospitals in Tennessee. In 40 states, DSH payments accounted for less than 20 percent of total Medicaid payments to hospitals, but in 8 states, they exceeded 20 percent. (See table 6.) (For additional information on state Medicaid payments to hospitals, see table 11 in app. II.) <3.3. Deemed-DSH, Public, and Teaching Hospitals Received a Greater Share of DSH Payments Relative to their Proportion of Uncompensated Care Costs> Among deemed and non-deemed DSH hospitals, deemed-DSH hospitals overall received larger DSH payments relative to their uncompensated care costs than non-deemed DSH hospitals. Deemed-DSH hospitals received 69.9 percent of DSH payments in 2014, but carried 51.2 percent of uncompensated care costs, relative to all hospitals receiving DSH payments that year. Each of the 48 states that distributed DSH payments in 2014 had at least one deemed-DSH hospital. (See table 7 for hospital type definitions.) Most of these states (36) provided deemed-DSH hospitals with a greater share of DSH payments relative to their share of total uncompensated care costs. (See table 8 for a summary of how states' DSH payments to deemed-DSH hospitals compared to the hospitals' share of uncompensated care costs, and table 12 in app. II for additional information by state.) In terms of ownership and teaching hospital status, hospitals that were publicly owned or teaching hospitals also generally received a greater proportion of DSH payments relative to their share of total uncompensated care costs. Among the three different ownership groups (public, non-profit, and private), public hospitals generally received a larger share of DSH payments relative to their share of uncompensated care. Among hospitals receiving DSH payments in 2014, public (36.7 percent) and nonprofit (53.7 percent) hospitals accounted for larger shares of uncompensated care costs than privately owned hospitals (9.6 percent).
States generally provided more DSH payments to public hospitals (62.8 percent) relative to their share of total uncompensated care costs (36.7 percent). (For additional information on DSH payments and hospitals uncompensated care costs by ownership, see table 13 in app. II.) States distribute DSH payments to teaching hospitals at different rates, but generally provided a greater proportion of DSH payments to high-teaching hospitals (56.5 percent) relative to their share of total DSH hospital uncompensated care costs (44.0 percent). (For additional information on DSH payments and hospitals uncompensated care costs by hospital teaching status, see table 14 in app. II.) Nationally, urban hospitals received a greater share of DSH payments relative to rural hospitals, with 89.6 percent of DSH funds distributed to urban hospitals and the remaining 10.4 percent distributed to rural hospitals. This proportion corresponds to a similar distribution of uncompensated care costs, with 88.2 percent of uncompensated care costs among DSH hospitals carried by urban hospitals and the remaining 11.8 percent carried by rural hospitals. (For additional information on DSH payments and hospitals uncompensated care costs by urban/rural status, see table 15 in app. II.) For additional information on variation in uncompensated care and DSH payments by hospital category and sole community provider status, and state characteristics, see tables 16 through 19 in appendix II. <4. Agency Comments> We provided a draft of this report to HHS for review. HHS provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to appropriate congressional committees, to the Secretary of Health and Human Services, the Administrator of CMS, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114, or yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Selected Bibliography This bibliography contains citations for the eight Kaiser Family Foundation reports referenced in the report. Kaiser Family Foundation and Health Management Associates. States Focus on Quality and Outcomes Amid Waiver Changes: Results from a 50-State Medicaid Budget Survey for State Fiscal Years 2018 and 2019. Washington, D.C.: Kaiser Family Foundation, and National Association of Medicaid Directors, October 2018. Kaiser Family Foundation and Health Management Associates. Medicaid Moving Ahead in Uncertain Times: Results from a 50-State Medicaid Budget Survey for State Fiscal Years 2017 and 2018. Washington, D.C.: Kaiser Family Foundation, October 2017. Kaiser Family Foundation and Health Management Associates. Implementing Coverage and Payment Initiatives: Results from a 50-State Medicaid Budget Survey for State Fiscal Years 2016 and 2017. Washington, D.C.: Kaiser Family Foundation and National Association of Medicaid Directors, October 2016. Kaiser Family Foundation and Health Management Associates. 
Medicaid Reforms to Expand Coverage, Control Costs and Improve Care: Results from a 50-State Medicaid Budget Survey for State Fiscal Years 2015 and 2016. Washington, D.C.: Kaiser Family Foundation and National Association of Medicaid Directors, October 2015. Kaiser Family Foundation and Health Management Associates. Medicaid in an Era of Health & Delivery System Reform: Results from a 50-State Medicaid Budget Survey for State Fiscal Years 2014 and 2015. Washington, D.C.: Kaiser Family Foundation, and National Association of Medicaid Directors, October 2014. Kaiser Family Foundation and Health Management Associates. Medicaid in a Historic Time of Transformation: Results from a 50-State Medicaid Budget Survey for State Fiscal Years 2013 and 2014. Washington, D.C.: Kaiser Commission on Medicaid and the Uninsured, Kaiser Family Foundation, October 2013. Kaiser Family Foundation and Health Management Associates. Medicaid Today; Preparing for Tomorrow: A Look at State Medicaid Program Spending, Enrollment and Policy Trends Results from a 50-State Medicaid Budget Survey for State Fiscal Years 2012 and 2013. Washington, D.C.: Kaiser Commission on Medicaid and the Uninsured, Kaiser Family Foundation, October 2012. Kaiser Family Foundation and Health Management Associates. Moving Ahead Amid Fiscal Challenges: A Look at Medicaid Spending, Coverage and Policy Trends Results from a 50-State Medicaid Budget Survey for State Fiscal Years 2011 and 2012. Washington, D.C.: Kaiser Commission on Medicaid and the Uninsured, Kaiser Family Foundation, October 2011. Appendix II: Data on Disproportionate Share Hospital Payments and Hospital Uncompensated Care Costs by State To conduct this analysis, we used data compiled by Acumen for the Medicaid and CHIP Payment and Access Commission. These data consist of measures from several sources. The measures used within this report were collected from state disproportionate share hospital (DSH) audits and Medicare cost reports. The 2014 DSH audits, which report data on hospital uncompensated care costs and DSH payments to hospitals, were submitted by 48 states and the District of Columbia. These data do not include a census of all hospitals, but only those hospitals that were reported in the 2014 DSH audits. As a result, these data do not capture all uncompensated care costs in each state, only uncompensated care costs for those hospitals reported in the 2014 DSH audits. Two states, Massachusetts and Hawaii, did not submit a 2014 DSH audit, because they did not make DSH payments. Additionally, while South Dakota submitted a 2014 DSH audit, we excluded the state from our analysis because of concerns about the reliability of the reported cost measures. In addition, not all hospitals reported every data element we analyzed. As a result, total uncompensated care costs and total DSH payments vary between tables, as hospitals were excluded from a given table if they did not report the characteristic described by the table. The numbers of hospitals excluded because they did not report a given data element are noted in each table for which this is the case. Likewise, as uncompensated care costs are an important focus of the report, we also excluded from all analyses 13 hospitals that did not report a value for uncompensated care costs. Appendix III: GAO Contacts and Staff Acknowledgments <5. GAO Contacts> <6. 
Staff Acknowledgments> In addition to the contact named above, Lori Achman (Assistant Director), Dawn Nelson (Analyst-in-Charge), Sean Miskell, and Jeffrey Tamburello made key contributions to this report. Also contributing were Tim Bushfield, Drew Long, Vikki Porter, and Emily Wilson. Why GAO Did This Study
Medicaid, the joint federal-state program that finances health care coverage for low-income and medically needy individuals, spent an estimated $177.5 billion on hospital care in fiscal year 2017. About a quarter ($46.3 billion) of those hospital payments were supplemental payments—typically lump sum payments made to providers that are not tied to a specific individual's care. States determine hospital payment amounts within federal limits. In fiscal year 2017, DSH payments totaled about $18.1 billion. Beginning in fiscal year 2020, the amount of DSH payments each state can make is scheduled to be reduced.
GAO was asked to study Medicaid DSH payments to hospitals. Among other things, GAO examined hospital uncompensated care costs and DSH payments by state Medicaid program and hospital characteristics.
GAO analyzed data from the 2014 DSH audits—states' independently audited and certified reports of hospital-level uncompensated care costs and DSH payments—from 47 states and the District of Columbia (48 states). Three states were excluded from the analysis because they either did not make DSH payments or the submitted data were unreliable. The 2014 data were the most recently available audited, hospital-specific, data at the time of GAO's analysis. We provided a draft of this report to HHS for review. HHS provided technical comments, which we incorporated as appropriate.
What GAO Found
Medicaid disproportionate share hospital (DSH) payments are one type of supplemental payment and are designed to help offset hospitals' uncompensated care costs for serving Medicaid beneficiaries and uninsured patients. Under the Medicaid DSH program, uncompensated care costs include two components: (1) costs related to care for the uninsured; and (2) the Medicaid shortfall—the gap between a state's Medicaid payment rates and hospitals' costs for serving Medicaid beneficiaries. GAO's analysis of hospitals receiving DSH payments showed that in 2014, costs related to care for the uninsured comprised 68 percent of total uncompensated care costs, and the remaining 32 percent was the Medicaid shortfall.
Across states, GAO found that total DSH payments varied significantly in 2014. DSH payment levels are generally tied to state DSH spending in 1992 and since 1993 states have been subject to a limit on the amount of federal funding that may be used for DSH payments.
Medicaid DSH payments covered 51 percent of the uncompensated care costs. In 19 states, DSH payments covered at least 50 percent of uncompensated care costs.
DSH payments comprised about 14 percent of total Medicaid payments, yet wide variation existed. For example, DSH payments comprised about 97 percent of Medicaid payments to DSH hospitals in Maine and 0.7 percent of Medicaid payments to DSH hospitals in Tennessee.
Some types of hospitals received a greater proportion of DSH payments relative to their share of total uncompensated care costs. For example, states generally provided more DSH payments to public hospitals (in comparison to private and non-profit hospitals) and teaching hospitals (as compared to non-teaching hospitals) relative to their share of total uncompensated care costs.
<1. Background> <1.1. History of Conflict in the DRC and the Region> The DRC is a vast, mineral-rich nation with an estimated population of more than 85 million people and an area that is roughly one-quarter the size of the United States, according to the UN. Since gaining its independence from Belgium in 1960, the DRC has undergone political upheaval and armed conflict. From 1998 to 2003, the DRC and eight other African countries were involved in what has become known as Africa's World War, which resulted in a death toll of an estimated 5 million people in the DRC, according to State. During that period, in 1999, the UN deployed a peacekeeping mission to the DRC, and since then the United States and the international community have sought to improve security in the DRC. However, eastern DRC continues to be plagued by violence, including numerous cases of sexual violence reported by the UN, often perpetrated against civilians by nonstate armed groups and some members of the Congolese national military. More recently, presidential elections were originally scheduled for 2016, when the president's final term in office expired, but the government delayed elections until December 2018. During this time, the UN reported an increase in human rights violations. In 2018 and 2019, the UN reported that serious violations of human rights remain widespread in the DRC, including continued acts of sexual violence by government security forces as well as nonstate armed groups. In addition, the UN noted that criminal networks and armed groups, including members of the Congolese national military and police, continued to derive illegal revenues from smuggling and illicit taxation of minerals from eastern Congolese mines. <1.2. Uses of Conflict Minerals> Various industries, particularly manufacturing industries, use the four conflict minerals (tin, tungsten, tantalum, and gold) in a wide variety of products. For example, tin is used to solder metal pieces and is also found in food packaging, steel coatings on automobile parts, and some plastics. Tungsten is used in automobile manufacturing, drill bits, and cutting tools, and other industrial manufacturing tools and is the primary component of filaments in light bulbs. Most tantalum is used to manufacture capacitors that enable energy storage in electronic products, such as cell phones and computers, or to produce alloy additives used in turbines in jet engines. Gold is used as reserves and in jewelry and is also used by the electronics industry, including, for example, in cell phones and laptops. <1.3. SEC Conflict Minerals Disclosure Rule> In August 2012, SEC adopted its conflict minerals disclosure rule in response to Section 1502(b) of the Dodd-Frank Act. In the summary section of the adopting release for the rule, SEC noted that to accomplish the goal of helping to end the human rights abuses in the DRC caused by the conflict, Congress chose to use the Dodd-Frank Act's disclosure requirements to bring greater public awareness of the sources of companies' conflict minerals and to promote the exercise of due diligence on conflict mineral supply chains. The map in figure 1 shows the countries covered by the SEC disclosure rule, including the DRC and its 26 provinces. The SEC disclosure rule addresses the four conflict minerals named in the Dodd-Frank Act originating from the covered countries. The rule outlines a process for companies to follow, as applicable, to comply with the rule. (See app. II.)
The process broadly requires a company to (1) determine whether it manufactures, or contracts to be manufactured, products with necessary conflict minerals; (2) conduct a reasonable country-of-origin inquiry concerning the origin of those conflict minerals; and (3) exercise due diligence, if appropriate, to determine the source and chain of custody of those conflict minerals, adhering to a nationally or internationally recognized due diligence framework, if such a framework is available for these necessary conflict minerals. If companies choose to disclose that their products are DRC conflict free in a conflict minerals report, the SEC disclosure rule requires companies to obtain an independent private-sector audit. Following an appellate court decision that a portion of the disclosure required by the SEC disclosure rule violated the First Amendment, SEC staff issued guidance on April 29, 2014, indicating that, pending further action by the SEC or a court, companies required to file a conflict minerals report would not have to identify their products as DRC conflict undeterminable, not found to be DRC conflict free, or DRC conflict free. In April 2017, following the entry of the final judgment in the case, the SEC's Division of Corporation Finance issued revised guidance, indicating that, in light of the uncertainty regarding how the commission would resolve those issues and related issues raised by commenters, the Division of Corporation Finance had determined that it would not recommend enforcement action to the commission if companies did not report on specified due diligence disclosure requirements. However, the SEC staff told us that the guidance is not binding on the commission and that the commission could still initiate enforcement action if companies did not report on their due diligence in accordance with the rule. According to SEC staff, the 2017 guidance, while temporary, is still in effect, pending review of the rule by the commission. As of June 2019, the rule was on the SEC's long-term regulatory agenda, which means, according to SEC staff, that any action would likely not take place until after March 2020. <2. Conflict Minerals Disclosures Filed in 2018 Were Similar in Number and Content to Those Filed in Prior Years> <2.1. Almost as Many Companies Filed Conflict Minerals Disclosures in 2018 as in Each of the Past 2 Years> In 2018, 1,117 companies filed conflict minerals disclosures, slightly fewer than the number of companies that filed in 2017 and 2016 (1,165 and 1,230, respectively). Our analysis of a generalizable sample of the 1,117 filings found that an estimated 85 percent of the companies filed as domestic, while the remaining 15 percent filed as foreign. This domestic-to-foreign ratio is similar to the ratio in 2017 and 2016. Overall, when reporting on the conflict minerals used in their products, an estimated 62 percent reported using tantalum; 63 percent, tungsten; and 66 percent, gold (percentages similar to those reported in 2017 and 2016). An estimated 76 percent reported using tin, which was similar to the 69 percent reported in 2017 and significantly higher than the 61 percent in 2016. An estimated 24 percent did not specify the minerals they used.
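As a rough illustration of the three-step compliance process outlined at the start of this section, the following is a minimal, hypothetical sketch of the rule's decision flow. The function, its parameters, and its return values are assumptions made for this illustration only; they are not defined by the rule, the SEC, or companies' filings.

```python
# Simplified, illustrative model of the disclosure process described above.
# The names and data structures here are assumptions for this sketch only.

def required_actions(uses_necessary_conflict_minerals,
                     may_originate_in_covered_countries,
                     known_scrap_or_recycled):
    """Return the broad steps a company would take under the rule, per this sketch."""
    if not uses_necessary_conflict_minerals:
        return []  # The disclosure rule does not apply to this company.

    steps = ["Conduct a reasonable country-of-origin inquiry",
             "File a Form SD describing the inquiry and its results"]

    # Due diligence applies when the inquiry gives reason to believe the
    # minerals may have originated in the covered countries and may not be
    # from scrap or recycled sources.
    if may_originate_in_covered_countries and not known_scrap_or_recycled:
        steps.append("Exercise due diligence using a recognized framework "
                     "(e.g., the OECD guidance) and file a conflict minerals report")
    return steps


print(required_actions(True, True, False))
```

In this simplified model, a company that determines its minerals did not come from the covered countries, or came only from scrap or recycled sources, would stop after the inquiry step.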
<2.2. A Similar Percentage of Companies Conducted Country of Origin Inquiries as in the Past 2 Years; the Percentage of Companies Reporting a Determination Has Increased since 2014> Our analysis of our generalizable sample found that, as in 2017 and 2016, almost all companies that filed conflict minerals disclosures indicated that they had conducted country-of-origin inquiries. Specifically, an estimated 100 percent of companies that filed reported that they had conducted such an inquiry, similar to the percentages that reported doing so in the prior 2 years. As a result of the inquiries they conducted, an estimated 56 percent of companies that filed reported whether the conflict minerals in their products came from covered countries, similar to the estimated 53 percent in 2017 and 49 percent in 2016. The percentage of companies able to make such a determination significantly increased between 2014 and 2015, and has since leveled off. (See figure 2.) <2.3. Some Companies Filing in 2018 Reported Taking Actions to Improve Supply Chain Data, Though Many Continue to Report Difficulties in Determining Country of Origin> As in past years, our review of our generalizable sample of filings found that some companies reported taking the same actions to improve supply chain data collection that they had taken in past years, including using standardized tools and conducting surveys. Those companies that conducted surveys reported doing further investigation into the source of minerals, for example, by following up with suppliers to improve the specificity and completeness of their survey responses. Other actions companies reported taking to improve supply chain data collection included educating suppliers about conflict-free sourcing and creating and publicizing conflict minerals policies. In interviews, representatives of selected companies and other industry participants also noted, as they had in prior years, that awareness among suppliers about the use of conflict minerals continued to increase. However, many companies reported difficulties in determining the country of origin of conflict minerals, in part as a result of lack of access to suppliers and complex supply chains involving many suppliers and processing facilities. Specifically, some companies reported that some suppliers did not respond to requests for information, or that supplier and smelter information was incomplete or contained errors. Some companies also reported, among other factors, confusion among suppliers about the requirements of the SEC disclosure rule, and gaps in supplier education and knowledge. <2.4. Almost All Companies Required to Conduct Due Diligence Reported Conducting It in Their 2018 Filings> Our review of our generalizable sample found that 94 percent of the companies that were required to conduct due diligence, as a result of their country-of-origin inquiries, reported conducting it. This percentage is similar to those in prior years: 96 percent in both 2017 and 2016. An estimated 89 percent of the companies that were required to conduct due diligence reported using a due diligence framework prescribed by the Organisation for Economic Co-operation and Development (OECD) guidance to conduct due diligence on the source and chain of custody of the conflict minerals in their products. This percentage is comparable to the 87 percent in 2017 and 92 percent in 2016. The remainder of the companies reported using non-OECD guidance or did not specify the guidance they used, if any.
Of all the companies that conducted due diligence (a subset of the companies that conducted country-of-origin inquiries shown in figure 2 above), an estimated 35 percent reported that they were able to determine that their conflict minerals came from covered countries or from scrap or recycled sources, compared with 37 percent in 2017 and 39 percent in 2016. However, an estimated 61 percent of the companies reported in 2018 that they could not definitively confirm the source of the conflict minerals in their products, compared with 47 percent in 2017 and 55 percent in 2016. As in prior years, almost all of the companies that conducted due diligence reported that they could not determine whether the conflict minerals in their products had financed or benefited armed groups. Three companies in our generalizable sample determined that the minerals in at least some of their products had not financed or benefited armed groups in covered countries. None of these three companies declared their products DRC conflict free, which would trigger the requirement to file an independent private-sector audit report. However, one of the three companies did include one such audit report. Overall, SEC officials approximated that a total of 14 companies filed independent private-sector audit reports in 2018, compared with 16 in 2017 and 19 in 2016. <2.5. Some Companies Noted That SEC Staff Guidance Regarding Due Diligence Reporting Requirements Had Caused Confusion, but Most Companies' Filings Were Similar to Those Submitted in Each of the Prior 2 Years> Some companies and industry representatives told us, as they did last year, that even though the revised guidance and other statements made by SEC staff had raised some uncertainty about the filing process, companies generally planned to continue to report conflict minerals disclosure information. As noted earlier, the SEC's Division of Corporation Finance issued revised guidance in April 2017 indicating that it would not recommend enforcement action to the commission if companies did not report on specified due diligence disclosure requirements. Some companies and industry participants told us that the SEC staff's revised guidance had caused confusion among some suppliers and stakeholders about reporting requirements, sometimes leading suppliers to be reluctant or slow to share information required by companies for their due diligence reporting. In addition, some companies had changed their approach to filing as a result of the guidance. Specifically, one company in our generalizable sample of SEC filings for 2018 cited the SEC staff's revised guidance recommending no enforcement action as the reason for its decision not to report on due diligence efforts, despite noting it had determined there was reason to believe that minerals in its products may have come from covered countries. Another company we interviewed cited the same SEC staff guidance as one of the reasons the company chose not to file an independent private-sector audit. However, representatives of other companies we interviewed told us that, generally, their companies planned to continue to report conflict minerals disclosure information, including information from their due diligence efforts. In addition, as noted above, our review of a generalizable sample of SEC filings from 2018 found that the filings were similar in number and content to those filed in 2017.
Some companies told us that they would continue to file, and even expand their due diligence, in response to the conflict minerals disclosure rule and other incentives for filing, such as consumer pressure and European Union reporting requirements scheduled to take effect in 2021. Furthermore, State reported that it had begun to take actions related to the revised guidance. Specifically, State officials told us that they had conducted public outreach, such as attending industry events to remind stakeholders that the conflict minerals disclosure rule was still in effect, provide an overview of the rules and requirements, and answer questions. In addition, as of June 2019, the SEC's long-term regulatory agenda included an item indicating that the SEC Division of Corporation Finance is considering recommendations for the commission to address the effect of litigation over the conflict minerals rule. According to SEC staff, these recommendations may affect the 2017 guidance pertaining to the conflict minerals rule. <3. No New Information on Rates of Sexual Violence in Eastern DRC and Adjoining Countries Has Been Published; Case-File and Other Information on the DRC and Burundi Is Available> We did not identify any new information on the rate of sexual violence in eastern DRC, Burundi, Rwanda, or Uganda since we last reported in June 2018; we did identify new case-file information and other information from UN reports for the DRC and Burundi. Since 2011, we have reported annually on rates of sexual violence derived from population-based surveys, as well as on case-file data as applicable, for eastern DRC (which consists of the provinces of Ituri, Maniema, North Kivu, and South Kivu) and three countries that adjoin that region: Burundi, Rwanda, and Uganda. See appendix III for population-based surveys containing sexual violence rates published since 2007. As explained in the sidebar, case-file information (that is, information on sexual violence victims collected by international entities, law enforcement agencies, or medical service providers) is unsuitable for estimating rates of sexual violence. Data from population-based surveys provide a more appropriate basis for deriving a rate of sexual violence because such surveys are conducted using random sampling techniques and their results are generalizable to the target population from which a representative sample was surveyed. As we have previously reported, several factors make case-file information unsuitable for estimating rates of sexual violence. For example, case-file data are not based on a random sample of a population, and therefore the results of analyzing these data are not generalizable, and case-file data are not aggregated across the entities that collect them. However, case-file data can provide indicators that sexual assaults are occurring in certain locations and can help service providers respond to the needs of victims. We did not identify any new population-based surveys providing rates of sexual violence in eastern DRC, Burundi, Rwanda, or Uganda published since our June 2018 report. The most recent information for eastern DRC and Rwanda dates from 2016, and for Burundi and Uganda, from 2018. <3.1. New Case-File Information about Sexual Violence in the DRC and Burundi Is Available> UN entities, State, USAID, and a USAID-funded program have produced additional case-file information reported in 2018 and 2019 about instances of sexual violence in the DRC and Burundi that occurred in 2017 and 2018.
While State's annual country report on human rights practices for Uganda noted that rape remained a common problem in the country in 2018, we did not identify new case-file information for the country, nor did we find new case-file information regarding Rwanda. Periodic Reporting of Case-File Information on Sexual Violence in the DRC and Adjoining Countries: United Nations (UN) entities and the U.S. Department of State (State) report periodically on case-file information, while the U.S. Agency for International Development (USAID) periodically receives such information from an implementing partner, as follows: the UN Joint Human Rights Office in the Democratic Republic of the Congo reports annually on human rights violations in the Democratic Republic of the Congo (DRC), including sexual violence; the UN Special Representative of the Secretary-General on Sexual Violence in Conflict reports annually on cases of conflict-related sexual violence in several countries, including the DRC, using information from the United Nations Stabilization Mission in the Democratic Republic of the Congo and the United Nations Population Fund, among others; and USAID periodically receives reports containing case-file information from a 5-year program that began in 2017 to counter gender-based violence in parts of eastern DRC's North and South Kivu provinces. UN entities, State, USAID, and a USAID-funded 5-year program located in North and South Kivu provinces have produced new case-file information pertaining to sexual violence in the DRC. UN entities reported the following case-file information pertaining to sexual violence in the DRC for calendar year 2018: The United Nations Joint Human Rights Office in the Democratic Republic of the Congo (UNJHRO) confirmed and documented at least 939 sexual violence victims (657 women, 279 children, and three men). According to UNJHRO, this sexual violence was perpetrated by DRC armed forces and police in many instances. Specifically, Armed Forces of the Democratic Republic of the Congo (FARDC) soldiers were responsible for 218 of these victims, 195 of whom were located in conflict-affected provinces of the DRC. Members of the Congolese National Police were responsible for 100 victims of sexual violence, 60 of whom were in conflict-affected provinces of the DRC. The United Nations Stabilization Mission in the Democratic Republic of the Congo (MONUSCO) documented and verified 1,049 cases of conflict-related sexual violence against 605 women, 436 girls, four men, and four boys. According to MONUSCO, 741 of those cases were perpetrated by combatants of nonstate armed groups and armed militiamen, with the remaining 308 perpetrated by FARDC soldiers and Congolese National Police. The United Nations Population Fund (UNFPA) reported 32,342 incidents of sexual violence in conflict-affected provinces between January 2018 and September 2018. UN agencies also reported in 2018 that they had provided medical assistance to over 5,200 survivors of sexual violence, and MONUSCO reported that it had supported legal clinics that provided counseling and referrals to 2,243 civilian survivors of sexual violence for calendar year 2017. State noted two instances of armed groups in eastern DRC perpetrating sexual violence reported by UN entities in calendar years 2017 and 2018.
Specifically, the Bana Mura, an armed group with ties to local government, kidnapped 66 people (64 of them children) in Kasai province and used them as sexual slaves, and members of Raia Mutomboki, a rebel armed group, perpetrated sexual violence, including gang rape, against at least 66 women and girls in South Kivu province. In 2018, USAID reported that it had provided medical, legal, and other services to 7,755 survivors of sexual and gender-based violence, and had also worked with local organizations to strengthen their ability to respond to and prevent such violence, during calendar year 2017. USAID also reported that it had collaborated with the Ministry of Education to develop a curriculum focused on preventing such violence, and had worked with gender-based violence monitoring committees in 618 schools. One of USAID's implementing partners addresses sexual and gender-based violence as part of a 5-year program. This implementing partner reported reaching 3,135 victims of gender-based violence (including 2,559 adults and 576 children) in North and South Kivu provinces, providing those victims with health, legal, and psychosocial support services during fiscal year 2018. The implementing partner also reported providing services to 1,150 victims (including 953 adults and 197 children) during the first quarter of fiscal year 2019. <3.1.1. New Information on Burundi> State's annual human rights report for 2018, as well as UNFPA, provided some case-file information on sexual violence in Burundi. State's annual human rights report for 2018 noted that the government-operated Humura Center had recorded 627 cases of sexual and gender-based violence in Burundi, including domestic violence, from January 2018 to early September 2018. This organization provides survivors of sexual and domestic violence with legal, medical, and psychosocial services. UNFPA reported in 2018 that it had recorded 10,592 cases of gender-based violence in 2017 and noted that the Burundian government had decided to close the local UN Office of the High Commissioner for Human Rights in December 2018, reducing the access of survivors of sexual violence to legal services. <3.2. UN Reports Some Steps Taken to Address Sexual Violence in the DRC and None Taken in Burundi> UN entities noted that the government of the DRC had taken steps to address sexual violence in the DRC since 2013, but identified an increase in the number of incidents reported beginning in 2017. The reports also noted continued difficulties providing services to victims of sexual violence and combating a climate in which perpetrators act with impunity. According to the 2018 annual UN report on conflict-related sexual violence and UN officials we interviewed in 2019, the government of the DRC has continued to take steps to address sexual violence by, for example, holding awareness-raising campaigns and establishing a nationwide victim helpline. The UN Special Representative of the Secretary-General on Sexual Violence in Conflict cited other examples, including the prosecution of military and police officials, as well as leaders of nonstate armed groups, for conflict-related sexual violence. Specifically, the UN reported in 2018 that 59 members of the Congolese National Police and the FARDC were convicted of rape in 2017. Among those convicted was a FARDC colonel sentenced for failing to prevent subordinates from committing rape.
The UN also noted that the DRC had successfully prosecuted a commander of the armed group Democratic Forces for the Liberation of Rwanda for sexual violence as a war crime, and a South Kivu provincial lawmaker and his militia for crimes against humanity for the abduction and rape of 39 children. In 2019, an armed group leader and former FARDC colonel was convicted of war crimes, including rape. As mentioned earlier, armed conflict and political upheaval within the DRC, and particularly in eastern DRC, have long created an environment of persistent human rights abuses, including sexual violence, according to UN reports. The UN reported this environment worsened during the lead-up to the presidential elections between 2016 and 2018. Case-file information the UN collected on sexual violence for 2017 and 2018 indicated an upward trend in incidents in the DRC, according to UN reports. A UN report cited an increase in documented cases of sexual violence, linking it to two factors: (1) nonstate armed groups' use of sexual violence to enforce control over illicit exploitation of natural resources, such as gold, and (2) FARDC military operations responding to the activities of these nonstate armed groups. In addition to these recent developments, UN officials we interviewed cited longstanding difficulties such as a significant shortage of response services in the DRC; common instances of retaliation against survivors who reported abuse; and, as mentioned above, a climate in which perpetrators act with impunity. The UN Commission of Inquiry on Burundi did not identify any steps taken by the government of Burundi to address the country's human rights issues, including sexual violence, in 2017 or 2018. The Commission of Inquiry, which, according to State, was denied access to the country by the government of Burundi but conducted interviews with more than 400 witnesses living in exile, reported that serious human rights violations, including acts of sexual violence, persisted in 2017 and 2018. For example, the commission reported that the National Intelligence Service, police, and the youth wing of the ruling political party used sexual violence to target supporters of the political opposition or their relatives. The commission also recommended that the government of Burundi establish investigative bodies to look into human rights violations and take measures to ensure that victims of sexual violence have access to appropriate care, including sexual health services and psychological support. <4. Agency Comments> We provided a draft of this report to the SEC, State, and USAID for comment. USAID provided written comments describing some of their related activities in the DRC, which we have reprinted in appendix IV. All three agencies provided technical comments, which we have incorporated as appropriate. We are sending copies of this report to appropriate congressional committees and to the Chairman of the Securities and Exchange Commission, the Secretary of State, and the Administrator of the U.S. Agency for International Development. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8612 or gianopoulosk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.
Appendix I: Objectives, Scope, and Methodology In this report, we (1) examine how companies responded to the U.S. Securities and Exchange Commission (SEC) conflict minerals disclosure rule when filing in 2018 and (2) provide recent information on the rate of sexual violence in eastern Democratic Republic of the Congo (DRC) and adjoining countries that was published in 2018 and early 2019. To address our first objective, we downloaded the specialized disclosure reports (Form SD) from the SEC's publicly available Electronic Data Gathering, Analysis, and Retrieval (EDGAR) database in September 2018. We downloaded 1,117 Form SD filings and any associated conflict minerals reports included in EDGAR. Companies filed these Forms SD, along with related conflict minerals reports in some instances, to provide information in response to the SEC disclosure rule. To review the completeness and accuracy of the EDGAR database, we reviewed relevant documentation, interviewed knowledgeable SEC officials, and reviewed our prior reports on internal controls related to the SEC's financial systems. We determined that the EDGAR database was sufficiently reliable for identifying the universe of Form SD filings. We reviewed the conflict minerals section of the 2010 Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) and the requirements of the SEC disclosure rule to develop a data collection instrument that guided our analysis of a generalizable sample of Forms SD and conflict minerals reports. Our data collection instrument was not a compliance review of the Forms SD and conflict minerals reports. The questions were written in both yes-or-no and multiple-choice formats. An analyst reviewed the Forms SD and conflict minerals reports and recorded responses to the data collection instrument for all of the companies in the sample. A second analyst also reviewed the Forms SD and conflict minerals reports and verified the responses recorded by the first analyst. Analysts met to discuss and resolve any discrepancies. We randomly sampled 100 Forms SD from a population of 1,117 to create estimates generalizable to the population of all companies that filed. We selected this sample size to achieve a margin of error of no more than plus or minus 10 percentage points at the 95-percent confidence level, which applies to all our estimates except where noted. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have generated different estimates, we express our confidence in the precision of our particular sample's results as a 95-percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. After using the data collection instrument to analyze the sample of filings submitted in 2018, we compared the resulting estimates with our estimates regarding filings submitted in prior years to determine whether there had been any statistically significant changes. We also attended an industry conference on conflict minerals and spoke with company representatives and industry representatives to gain additional context and perspectives.
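To illustrate the sampling calculation described above, the following is a minimal sketch, not code GAO used, that approximates the margin of error for a proportion estimated from a simple random sample of 100 filings drawn from a population of 1,117, using the normal approximation with a finite population correction.

```python
import math

def margin_of_error(n, N, p=0.5, z=1.96):
    """Approximate 95-percent margin of error for an estimated proportion.

    n: sample size, N: population size, p: assumed proportion (0.5 is the most
    conservative choice), z: critical value for a 95-percent confidence level.
    """
    standard_error = math.sqrt(p * (1 - p) / n)
    fpc = math.sqrt((N - n) / (N - 1))  # finite population correction
    return z * standard_error * fpc

# Sample of 100 Forms SD drawn from the 1,117 filings submitted in 2018.
print(round(margin_of_error(100, 1117) * 100, 1), "percentage points")  # about 9.4
```

Under these assumptions the result is roughly plus or minus 9.4 percentage points, consistent with the stated goal of a margin of error of no more than plus or minus 10 percentage points; the exact figure would depend on the estimation method actually applied to each estimate.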
To address our second objective, we identified and assessed any information on sexual violence in eastern DRC and the three adjoining countries (Burundi, Rwanda, and Uganda) that had been published or otherwise had become available in 2018 and early 2019 and therefore would not have been included in our most recent report on the topic. We discussed the collection of sexual violence-related data in the DRC and adjoining countries, including population-based survey data and case-file data, with Department of State and U.S. Agency for International Development officials and with representatives of nongovernmental organizations and researchers. We also interviewed officials from the United Nations (UN) Children's Fund, the UN Special Representative of the Secretary-General on Sexual Violence in Conflict, and the UN Statistics Division, and we obtained information from the UN Population Fund and UN Organization Stabilization Mission in the Democratic Republic of the Congo. In addition, we searched research databases, including MEDLINE and Scopus, to identify new academic articles containing any additional information on sexual violence published in 2018 and early 2019. Through these searches, we identified an initial list of 164 articles, which we then narrowed down to a priority list of studies by considering a variety of factors pertaining to the studies' relevance to our second objective. These factors included (1) whether the study included rates, particularly related to the nation-wide rate of sexual violence in the DRC and region-wide rate in eastern DRC; (2) whether the study included case-file information; (3) whether the study contained data from 2011 or later; (4) whether the study focused on a subset of a broader population; (5) the geographic scope of the study; and (6) whether the study included original research. We reviewed the priority list of 16 articles and determined that none of them met our criteria for inclusion. We conducted this performance audit from September 2018 to September 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Summary of the U.S. Securities and Exchange Commission's Conflict Minerals Rule Disclosure Process The U.S. Securities and Exchange Commission (SEC) conflict minerals disclosure rule requires certain companies to file a specialized disclosure report (Form SD), if the company manufactures, or contracts to have manufactured, a product or products containing conflict minerals that are necessary to the functionality or the production of those products. The rule also requires each company, as applicable, to conduct a Reasonable Country of Origin Inquiry to determine whether it knows, or has reason to believe, that its conflict minerals may have originated in the covered countries or that the conflict minerals may not be from scrap or recycled sources.
If the company s inquiry shows both conditions to be true of its conflict minerals, the company must exercise due diligence and provide a description of the measures it took to exercise due diligence in determining the source and chain of custody of the conflict minerals, the facilities used to process the conflict minerals, their country of origin, and of the efforts it made to determine the mine or location of origin with the greatest possible specificity. The Form SD provides general instructions for filing conflict minerals disclosures and specifies the information that companies must provide. Companies were required to file under the rule for the first time by June 2, 2014, and annually thereafter on May 31. Figure 3 shows the flowchart included in the SEC s adopting release for the rule, which summarized the conflict minerals disclosure rule at the time it was adopted. Appendix III: Population-Based Surveys on Sexual Violence Rates Since 2007 Since 2011, we have reported on population-based surveys containing sexual violence rates in eastern Democratic Republic of the Congo (DRC) and three adjoining countries: Burundi, Rwanda, and Uganda. Figure 4 shows the publication dates for these surveys, starting with surveys published in 2007. Appendix IV: Comments from the U.S. Agency for International Development Appendix V: GAO Contact and Staff Acknowledgments <5. GAO Contact> <6. Staff Acknowledgments> In addition to the individual named above, Godwin Agbara (Assistant Director), Katherine Forsyth (Analyst-in-Charge), Debbie Chung, Justin Fisher, Jieun Chang, Christopher Keblitis, Grace Lui, Nisha Rai, John Villecco, and Timothy Young made key contributions to this report. Diana Blumenfeld, Julia Jebo Grant, Farahnaaz Khakoo-Mausel, and Michael McAtee provided additional assistance. Related GAO Products Conflict Minerals: Company Reports on Mineral Sources in 2017 Are Similar to Prior Years and New Data on Sexual Violence Are Available. GAO-18-457. Washington, D.C.: June 28, 2018. Conflict Minerals: Information on Artisanal Mined Gold and Efforts to Encourage Responsible Sourcing in the Democratic Republic of the Congo. GAO-17-733. Washington, D.C.: August 23, 2017. SEC Conflict Minerals Rule: 2017 Review of Company Disclosures in Response to the U.S. Securities and Exchange Commission Rule. GAO-17-517R. Washington, D.C.: April 26, 2017. Conflict Minerals: Insights from Company Disclosures and Agency Actions. GAO-17-544T. Washington, D.C.: April 5, 2017. SEC Conflict Minerals Rule: Companies Face Continuing Challenges in Determining Whether Their Conflict Minerals Benefit Armed Groups. GAO-16-805. Washington, D.C.: August 25, 2016. SEC Conflict Minerals Rule: Insights from Companies Initial Disclosures and State and USAID Actions in the Democratic Republic of the Congo Region. GAO-16-200T. Washington, D.C.: November 17, 2015. SEC Conflict Minerals Rule: Initial Disclosures Indicate Most Companies Were Unable to Determine the Source of Their Conflict Minerals. GAO-15-561. Washington, D.C.: August 18, 2015. Conflict Minerals: Stakeholder Options for Responsible Sourcing Are Expanding, but More Information on Smelters Is Needed. GAO-14-575. Washington, D.C.: June 26, 2014. SEC Conflict Minerals Rule: Information on Responsible Sourcing and Companies Affected. GAO-13-689. Washington D.C.: July 18, 2013. Conflict Minerals Disclosure Rule: SEC s Actions and Stakeholder- Developed Initiatives. GAO-12-763. Washington, D.C.: July 16, 2012. 
The Democratic Republic of the Congo: Information on the Rate of Sexual Violence in War-Torn Eastern DRC and Adjoining Countries. GAO-11-702. Washington, D.C.: July 13, 2011.

The Democratic Republic of the Congo: U.S. Agencies Should Take Further Actions to Contribute to the Effective Regulation and Control of the Minerals Trade in Eastern Democratic Republic of the Congo. GAO-10-1030. Washington, D.C.: September 30, 2010.

Why GAO Did This Study
Since the UN first deployed a peacekeeping mission to the DRC 2 decades ago, the United States and the international community have sought to improve security in the country. In eastern DRC, armed groups have committed severe human rights abuses, including sexual violence, and reportedly profit from the exploitation of “conflict minerals”—in particular, tin, tungsten, tantalum, and gold—according to the UN. Congress included a provision in the 2010 Dodd-Frank Wall Street Reform and Consumer Protection Act that, among other things, required the SEC to promulgate regulations regarding the use of conflict minerals from the DRC and adjoining countries. The SEC adopted these regulations in 2012. The act also included a provision for GAO to annually assess the SEC regulations' effectiveness in promoting peace and security and to report on the rate of sexual violence in the DRC and adjoining countries.
In this report, GAO (1) examines how companies responded to the SEC conflict minerals disclosure rule when filing in 2018 and (2) provides recent information on the rate of sexual violence in eastern DRC and adjoining countries. GAO analyzed a generalizable random sample of SEC filings and interviewed relevant officials. GAO also reviewed U.S. government, UN, and international organization reports; interviewed DRC officials and other stakeholders; and conducted fieldwork in California at an industry conference.
What GAO Found
Companies' conflict minerals disclosures filed with the U.S. Securities and Exchange Commission (SEC) in 2018 were, in general, similar in number and content to disclosures filed in the prior 2 years. In 2018, 1,117 companies filed conflict minerals disclosures—about the same number as in 2017 and 2016. The percentage of companies that reported on their efforts to determine the source of minerals in their products through supply chain data collection (country-of-origin inquiries) was also similar to percentages in those 2 prior years. As a result of the inquiries they conducted, an estimated 56 percent of the companies reported whether the conflict minerals in their products came from the Democratic Republic of the Congo (DRC) or any of the countries adjoining it—similar to the estimated 53 and 49 percent in the prior 2 years. The percentage of companies able to make such a determination significantly increased between 2014 and 2015, and has since leveled off, as shown below.
In their 2018 disclosures, some companies reported taking the same actions to improve supply chain data collection that they had taken in past years, and many noted difficulties in determining conflict minerals' country of origin. A subset of the companies in the figure had not determined their minerals' origin or had reason to believe their minerals were from covered countries (and not from scrap or recycled sources) and were, as a result of the inquiry, required to conduct additional research (due diligence). Of those that conducted due diligence, an estimated 61 percent reported they were unable to confirm the source of minerals in their products. An estimated 35 percent reported using conflict minerals from covered countries or from scrap or recycled sources. Although some companies noted that guidance the SEC staff revised in 2017 had caused uncertainty about the filing process, most filings were similar to those submitted in prior years.
GAO found no new population-based surveys on the rate of sexual violence in eastern DRC and three countries adjoining that region—Burundi, Uganda, and Rwanda—but found other types of information on sexual violence.
<1. Information on the Potential Economic Effects of Climate Change in the United States Could Help Federal Decision Makers Better Manage Climate Risks>

We reported in September 2017 that while estimates of the economic effects of climate change are imprecise due to modeling and information limitations, they can convey useful insight into broad themes about potential damages in the United States. We also reported that according to the two national-scale studies available at the time that examined the economic effects of climate change across U.S. sectors, potential economic effects could be significant and these effects will likely increase over time for most of the sectors analyzed. For example, for 2020 through 2039, one of the studies estimated from $4 billion to $6 billion in annual coastal property damages from sea level rise and more frequent and intense storms. In addition, the national-scale studies we reviewed and several experts we interviewed for the September 2017 report suggested that potential economic effects could be unevenly distributed across sectors and regions. For example, one of the studies estimated that the Southeast, Midwest, and Great Plains regions will likely experience greater combined economic effects than other regions, largely because of coastal property damage in the Southeast and changes in crop yields in the Midwest and Great Plains (see fig. 1). This is consistent with the findings of the Fourth National Climate Assessment. For example, according to that assessment, the continued increase in the frequency and extent of high-tide flooding due to sea level rise threatens America's trillion-dollar coastal property market and public infrastructure sector.

As we reported in September 2017, information on the potential economic effects of climate change could help federal decision makers better manage climate risks, according to leading practices for climate risk management, economic analysis we reviewed, and the views of several experts we interviewed. For example, such information could inform decision makers about significant potential damages in different U.S. sectors or regions. According to several experts and our prior work, this information could help federal decision makers identify significant climate priorities as an initial step toward managing climate risks. Such a first step is consistent with leading practices for climate risk management and federal standards for internal control. For example, leading practices from the National Academies call for climate change risk management efforts that focus on where immediate attention is needed. As noted in our September 2017 report, according to a 2010 National Academies report, other literature we reviewed, and several experts we interviewed, to make informed choices, decision makers need more comprehensive information on economic effects to better understand the potential costs of climate change to society and begin to develop an understanding of the benefits and costs of different options for managing climate risks.

<2. The Federal Government Faces Fiscal Exposure from Climate Change Risks, but Our Past Work Shows an Absence of Government-Wide Strategic Planning>

The federal government faces fiscal exposure from climate change risks in a number of areas, and this exposure will likely increase over time, as we concluded in September 2017.
In the March 2019 update to our High-Risk List, we summarized our previous work that identified several of these areas across the federal government, including programs related to the following:

Disaster aid. The rising number of natural disasters and increasing reliance on federal assistance are a key source of federal fiscal exposure, and this exposure will likely continue to rise. Since 2005, federal funding for disaster assistance has been at least $450 billion. In September 2018, we reported that four hurricane and wildfire disasters in 2017 created an unprecedented demand for federal disaster resources and that Hurricanes Harvey, Irma, and Maria ranked among the top five costliest hurricanes on record. Subsequently, the fall of 2018 brought additional catastrophic disasters such as Hurricanes Florence and Michael and devastating California wildfires, with further needs for federal disaster assistance. Disaster costs are projected to increase as certain extreme weather events become more frequent and intense due to climate change, as USGCRP observed and projected. We reported in July 2015 that the federal government's fragmented and reactive approach to funding disaster resilience presented challenges to effective reduction of climate-related risks. In addition, our prior work found that the Federal Emergency Management Agency's (FEMA) primary indicator for determining whether to recommend that a jurisdiction receive disaster assistance, which was set in 1986, is artificially low because it does not accurately reflect the ability of state and local governments to respond to disasters. Without an accurate assessment of a jurisdiction's capability to respond to a disaster without federal assistance, we found that FEMA runs the risk of recommending that the President award federal assistance to jurisdictions that have the capability to respond and recover on their own.

Federal insurance for property and crops. The National Flood Insurance Program (NFIP) and the Federal Crop Insurance Corporation are sources of federal fiscal exposure due, in part, to the vulnerability of insured property and crops to climate change. These programs provide coverage where private markets for insurance do not exist, typically because the risk associated with the property or crops is too great to privately insure at a cost that buyers are willing to accept. From 2013 to 2017, losses paid under NFIP and the federal crop insurance program totaled $51.3 billion. Federal flood and crop insurance programs were not designed to generate sufficient funds to fully cover all losses and expenses, which means the programs need budget authority from Congress to operate. NFIP, for example, was about $21 billion in debt to the Department of the Treasury as of April 2019. Further, the Congressional Budget Office estimated in May 2019 that federal crop insurance would cost the federal government an average of about $8 billion annually from 2019 through 2029.

Operation and management of federal property and lands. The federal government owns and operates hundreds of thousands of facilities and manages millions of acres of land that could be affected by a changing climate and represent a significant federal fiscal exposure. For example, the Department of Defense (DOD) owns and operates domestic and overseas infrastructure with an estimated replacement value of about $1 trillion.
In September 2018, Hurricane Florence damaged Camp Lejeune and other Marine Corps facilities in North Carolina, resulting in a preliminary Marine Corps repair estimate of $3.6 billion. One month later, Hurricane Michael devastated Tyndall Air Force Base in Florida, resulting in a preliminary Air Force repair estimate of $3 billion and upwards of 5 years to complete the work. In addition, we recently reported that the federal government manages about 650 million acres of land in the United States that could be vulnerable to climate change, including the possibility of more frequent and severe droughts and wildfires. Appropriations for federal wildland fire management activities have increased considerably since the 1990s, as we and the Congressional Research Service have reported.

As we reported in October 2019, our past work shows an absence of government-wide strategic planning for climate change. Specifically, our past work identifies limitations related to strategic planning for climate change that include a lack of coordination, prioritization, and consolidation of strategic priorities. For example, we reported in October 2009 that the federal government's emerging climate resilience activities were carried out in an ad hoc manner and were not well coordinated across federal agencies. In May 2011, we reported that federal officials did not have a shared understanding of strategic government-wide priorities related to climate change. In the same report, we found that there was not a consolidated set of strategic priorities integrating climate change programs and activities across the federal government. In our March 2019 High-Risk Update, we reported that one area of government-wide action needed to reduce federal fiscal exposure is in the federal government's role as the leader of a strategic plan that coordinates federal efforts and informs state, local, and private sector action. For our 2019 High-Risk Update, we assessed the federal government's progress since 2017 related to climate change strategic planning against five criteria and found that the federal government had not met any of the criteria for removal from the high-risk list. Specifically, since our 2017 high-risk update, four ratings regressed to "not met" and one remained unchanged as "not met." (See fig. 2.) We have made 62 recommendations related to the climate change high-risk area, 17 of which address improving federal climate change strategic planning. As of August 2019, no action had been taken toward 14 of those 17 recommendations, one dating back to 2003.

<3. Federal Investments in Resilience to Climate Change Impacts Have Been Limited>

Although the federal government faces fiscal exposure to climate change, its investments in resilience to climate change impacts have been limited. One way to reduce federal fiscal exposure is to enhance resilience by reducing or eliminating long-term risk to people and property from natural hazards. For example, in September 2018 we reported that elevated homes and strengthened building codes in Texas and Florida prevented greater damages during the 2017 hurricane season. In addition, one company participating in a 2014 forum we held on preparing for climate-related risks noted that for every dollar it invested in resilience efforts, the company could prevent $5 in potential losses. Finally, a 2018 interim report by the National Institute of Building Sciences examined a sample of federal grants for hazard mitigation.
The interim report estimated approximate benefits to society (i.e., homeowners and communities) in excess of costs for several types of resilience projects through the protection of lives and property, and prevention of other losses, though precise benefits are uncertain. According to the interim report, for every grant dollar the federal government spent on resilience projects, over time, society is estimated to accrue benefits amounting to the following:

About $3 on average from projects addressing the effects of fire in the wildland urban interface, with most benefits (approximately 70 percent) coming from the protection of property (i.e., avoiding property losses).

About $5 on average from projects to address hurricane-force and tornado-force winds, with most benefits (approximately 90 percent) coming from the protection of lives. This includes avoiding deaths, nonfatal injuries, and causes of posttraumatic stress.

About $7 on average from projects that buy out buildings prone to riverine flooding, with most benefits (approximately 65 percent) coming from the protection of property.

The interim report also projected that society could accrue benefits amounting to about $11 on average for every dollar invested in designing new buildings to meet the 2018 International Building Code and the 2018 International Residential Code (the model building codes that the International Code Council developed), with most benefits (46 percent) coming from the protection of property.

We reported in October 2009 that the federal government's activities to build resilience to climate change were carried out in an ad hoc manner and were not well coordinated across federal agencies. We reported similar findings in October 2019. Federal agencies have included some of these activities within existing programs and operations, a concept known as mainstreaming. For example, the Fourth National Climate Assessment reported that the U.S. military integrates climate risks into its analysis, plans, and programs, with particular attention paid to climate effects on force readiness, military bases, and training ranges. However, according to the Fourth National Climate Assessment, while a significant portion of climate risk can be addressed by mainstreaming, the practice may reduce the visibility of climate resilience relative to dedicated, stand-alone approaches and may prove insufficient to address the full range of climate risks. In addition, as we reported in March 2019, the Disaster Recovery Reform Act of 2018 (DRRA) was enacted in October 2018 and could improve state and local resilience to disasters. DRRA, among other things, allows the President to set aside, with respect to each major disaster, a percentage of the estimated aggregate amount of certain grants to use for predisaster hazard mitigation and makes federal assistance available to state and local governments for building code administration and enforcement. However, it is too early to tell what impact implementing the act will have on state and local resilience.

The federal government has made some limited investments in resilience, and DRRA could enable additional improvements at the state and local levels. However, we reported in October 2019 that the federal government does not have a strategic approach for investing in climate resilience projects, that is, an intentional, crosscutting approach in which the federal government identifies and prioritizes projects for the purpose of enhancing climate resilience.
Federal agencies may take actions to invest in projects with potential climate resilience benefits related to their own mission areas using funds from federal programs designed for other purposes. In addition, the National Climate Assessment provides high-level information on what is known about observed and projected climate risks in the United States. However, no federal entity looks holistically at the federal government's investments to strategically prioritize projects to ensure that they address the nation's most significant climate risks and provide the highest net benefits relative to other potential projects. Further, we reported in September 2017 that the federal government had not undertaken strategic government-wide planning to manage significant climate risks before they become fiscal exposures. As an initial step in managing climate risks, most of the experts we interviewed for the September 2017 report told us that federal decision makers should prioritize risk management efforts on significant climate risks that create the greatest fiscal exposure. Moreover, several stakeholders told us that the federal government's emphasis has been on funding postdisaster efforts instead of funding resilience projects before a disaster occurs. This is consistent with findings from our July 2015 report that most federal funding for hazard mitigation is only available after a disaster. In addition, according to FEMA officials, some of the agency's hazard mitigation programs are designed to empower state and local governments to determine their mitigation funding priorities, and these state and local priorities may or may not align with the federal interest. Finally, although we did not identify a government-wide strategic approach specifically for investing in climate resilience projects, the National Mitigation Investment Strategy, a national effort under way to plan for predisaster resilience investments, represents a potential cross-agency vehicle for climate resilience planning. However, the strategy does not specifically address climate change or identify and prioritize specific climate resilience projects.

<4. The Federal Government Could Reduce Its Fiscal Exposure by Focusing and Coordinating Federal Efforts>

As we reported in March 2019, the federal government could reduce its fiscal exposure to climate change by focusing and coordinating federal efforts. However, the federal government is currently not well organized to address the fiscal exposure presented by climate change, partly because of the inherently complicated and crosscutting nature of the issue. We have made a total of 62 recommendations related to limiting the federal government's fiscal exposure to climate change over the years, 12 of which have been made since February 2017. As of December 2018, 25 of these recommendations remained open. In describing what needs to be done to reduce federal fiscal exposure to climate change, our March 2019 High-Risk Report discusses many of the open recommendations. Implementing these recommendations could help reduce federal fiscal exposure. Several of them, including those highlighted below, identify key government-wide efforts needed to help plan for and manage climate risks and direct federal efforts toward common goals, such as improving resilience.
Develop a national strategic plan: In May 2011, we recommended that appropriate entities within the Executive Office of the President (EOP), including the Office of Management and Budget, work with agencies and interagency coordinating bodies to establish federal strategic climate change priorities that reflect the full range of climate-related federal activities, including roles and responsibilities of key federal entities.

Use economic information to identify and respond to significant climate risks: In September 2017, we recommended that the appropriate entities within EOP use information on the potential economic effects of climate change to help identify significant climate risks facing the federal government and craft appropriate federal responses. Such federal responses could include establishing a strategy to identify, prioritize, and guide federal investments to enhance resilience against future disasters.

Provide decision makers with the best-available climate information: In November 2015, we reported that federal efforts to provide information about climate change impacts did not fully meet the climate information needs of federal, state, local, and private sector decision makers, which hindered their efforts to plan for climate change risks. We reported that these decision makers would benefit from a national climate information system that would develop and update authoritative climate observations and projections specifically for use in decision-making. As a result, we recommended that EOP (1) designate a federal entity to develop and periodically update a set of authoritative climate observations and projections for use in federal decision-making, which other decision makers could also access, and (2) designate a federal entity to create a national climate information system with defined roles for federal agencies and nonfederal entities with existing statutory authority.

Consider climate information in design standards: In November 2016, we reported that design standards, building codes, and voluntary certifications established by standards-developing organizations play a role in ensuring the resilience of infrastructure to the effects of natural disasters. However, we reported that these organizations faced challenges in using forward-looking climate information that could help enhance the resilience of infrastructure. As a result, we recommended in the November 2016 report that the Department of Commerce (Commerce), acting through the National Institute of Standards and Technology (which is responsible for coordinating federal participation in standards organizations), convene federal agencies for an ongoing government-wide effort to provide the best-available forward-looking climate information to standards-developing organizations for their consideration in the development of design standards, building codes, and voluntary certifications.

In addition, in October 2019, we recommended that Congress consider establishing a federal organizational arrangement to periodically identify and prioritize climate resilience projects for federal investment. We also identified six key steps the federal government could use to prioritize climate resilience investments and opportunities to increase the climate resilience impacts of federal funding options that Congress could use in designing the arrangement. In October 2019 we also issued the Disaster Resilience Framework to serve as a guide for analysis of federal action to facilitate and promote resilience to natural disasters.
The framework identifies three key principles that can help federal efforts to promote disaster resilience, including building resilience to climate change. First, authoritative and understandable information can help decision makers identify current and future risks and the impact of risk-reduction strategies. Second, integrated analysis and strategic planning can help decision makers take coherent and coordinated resilience actions. Third, financial and nonfinancial incentives can help make long-term, forward-looking risk-reduction investments more viable and attractive among competing priorities.

In conclusion, the effects of climate change have already posed and will continue to pose risks that can create fiscal exposure across the federal government, and this exposure will continue to increase. The federal government does not generally account for such fiscal exposure to programs in the budget process, and it has not undertaken strategic efforts to manage significant climate risks that could reduce the need for far more costly steps in the decades to come. To reduce its fiscal exposure, the federal government needs a cohesive strategic approach with strong leadership and the authority to manage risks across the entire range of related federal activities. The federal government could make further progress toward reducing fiscal exposure by implementing the recommendations we have made. Chairman Rouda, Ranking Member Comer, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time.

<5. GAO Contact and Staff Acknowledgments>

If you or your staff have any questions about this testimony, please contact J. Alfredo Gómez, Director, Natural Resources and Environment, at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Joseph Dean Thompson (Assistant Director), Micah McMillan (Analyst in Charge), Holly Halifax, Caitlin Jackson, Richard Johnson, Joe Maher, Oliver Richard, and Kiki Theodoropoulos.

Why GAO Did This Study
Since 2005, federal funding for disaster assistance is at least $450 billion, including approximately $19.1 billion in supplemental appropriations signed into law on June 6, 2019. In 2018 alone, there were 14 separate billion-dollar weather and climate disaster events across the United States, with a total cost of at least $91 billion, according to the National Oceanic and Atmospheric Administration. The U.S. Global Change Research Program projects that disaster costs will likely increase as certain extreme weather events become more frequent and intense due to climate change.
The costs of recent weather disasters have illustrated the need for planning for climate change risks and investing in resilience. Resilience is the ability to prepare and plan for, absorb, recover from, and more successfully adapt to adverse events, according to the National Academies of Science, Engineering, and Medicine. Investing in resilience can reduce the need for far more costly steps in the decades to come.
Since February 2013, GAO has included Limiting the Federal Government's Fiscal Exposure by Better Managing Climate Change Risks on its list of federal program areas at high risk of vulnerabilities to fraud, waste, abuse, and mismanagement or most in need of transformation. GAO updates this list every 2 years. In March 2019, GAO reported that the federal government had not made measurable progress since 2017 to reduce fiscal exposure to climate change.
This testimony—based on reports GAO issued from October 2009 to October 2019—discusses 1) what is known about the potential economic effects of climate change in the United States and the extent to which this information could help federal decision makers manage climate risks across the federal government, (2) the fiscal exposure facing the federal government due to climate risks and current efforts to address that exposure, (3) the extent to which the federal government has invested in resilience to climate change impacts, and (4) how the federal government could reduce fiscal exposure to the effects of climate change.
GAO had made 62 recommendations related to the Limiting the Federal Government’s Fiscal Exposure by Better Managing Climate Change Risks high-risk area. As of December 2018, 25 of those recommendations remained open.
What GAO Found
The estimated economic effects of climate change, while imprecise, can convey useful insight about potential damages in the United States. In September 2017, GAO reported that the potential economic effects of climate change could be significant and unevenly distributed across sectors and regions (see figure). This is consistent with the 2018 findings of the U.S. Global Change Research Program's Fourth National Climate Assessment, which concluded, among other things, that the continued increase in the frequency and extent of high-tide flooding due to sea level rise threatens America's trillion-dollar coastal infrastructure.
Information about the potential economic effects of climate change could inform decision makers about significant potential damages in different U.S. sectors or regions. According to prior GAO work, this information could help decision makers identify significant climate risks as an initial step toward managing them.
The federal government faces fiscal exposure from climate change risks in several areas, including:
Disaster aid: due to the rising number of natural disasters and increasing reliance on federal assistance. GAO has previously reported that the federal government's fragmented and reactive approach to funding disaster resilience presented challenges to effective reduction of climate-related risks. GAO has also reported that, due to an artificially low indicator for determining a jurisdiction's ability to respond to disasters that was set in 1986, the Federal Emergency Management Agency risks recommending federal assistance for jurisdictions that could recover on their own.
Federal insurance for property and crops: due, in part, to the vulnerability of insured property and crops to climate change impacts. Federal flood and crop insurance programs were not designed to generate sufficient funds to fully cover all losses and expenses. The flood insurance program, for example, was about $21 billion in debt to the Treasury as of April 2019. Further, the Congressional Budget Office estimated in May 2019 that federal crop insurance would cost the federal government an average of about $8 billion annually from 2019 through 2029.
Operation and management of federal property and lands: due to the hundreds of thousands of federal facilities and millions of acres of land that could be affected by a changing climate and more frequent extreme events. For example, in 2018, Hurricane Michael devastated Tyndall Air Force Base in Florida, with a preliminary repair estimate of $3 billion.
As we reported in October 2019, our past work shows an absence of government-wide strategic planning for climate change. Specifically, our past work has identified limitations related to strategic planning for climate change that includes a lack of coordination, prioritization, and consolidation of strategic priorities. In our March 2019 High-Risk Update, we assessed the federal government's progress since 2017 related to climate change strategic planning against five criteria and found that the federal government had not met any of the criteria for removal from the high-risk list.
Federal investments in resilience to reduce fiscal exposures have been limited. As GAO has reported, enhancing resilience can reduce fiscal exposure by reducing or eliminating long-term risk to people and property from natural hazards. For example, a 2018 interim report by the National Institute of Building Sciences estimated approximate benefits to society in excess of costs for several types of resilience projects. While precise benefits are uncertain, the report estimated that for every dollar invested in designing new buildings to particular design standards, society could accrue benefits amounting to about $11 on average.
GAO's March 2019 High-Risk report identified a number of recommendations GAO has made related to fiscal exposure to climate change. The federal government could reduce its fiscal exposure by implementing these recommendations. Among GAO's key government-wide recommendations are:
Entities within the Executive Office of the President (EOP) should work with partners to establish federal strategic climate change priorities that reflect the full range of climate-related federal activities;
Entities within EOP should use information on potential economic effects from climate change to help identify significant climate risks and craft appropriate federal responses;
Entities within EOP should designate a federal entity to develop and update a set of authoritative climate observations and projections for use in federal decision making, and create a national climate information system with defined roles for federal agencies and certain nonfederal entities; and
The Department of Commerce should convene federal agencies to provide the best-available forward-looking climate information to organizations that develop standards and building codes to enhance infrastructure resilience.
Further, in October 2019, GAO reported that Congress could consider establishing a federal organizational arrangement to periodically identify and prioritize climate resilience projects for federal investment. GAO also issued the Disaster Resilience Framework to serve as a guide for analysis of federal action to facilitate and promote resilience to natural disasters, including resilience to climate change.
<1. Background>

The federal government plans to invest over $90 billion in IT in fiscal year 2019. Nevertheless, we have previously reported that investments in federal IT too often resulted in failed projects that incurred cost overruns and schedule slippages, while contributing little to the desired mission-related outcomes. For example:

The tri-agency National Polar-orbiting Operational Environmental Satellite System was disbanded in February 2010 at the direction of the White House's Office of Science and Technology Policy after the program spent 16 years and almost $5 billion.

The Department of Homeland Security's (DHS) Secure Border Initiative Network program was ended in January 2011, after the department obligated more than $1 billion for the program.

The Department of Veterans Affairs' Financial and Logistics Integrated Technology Enterprise program was intended to be delivered by 2014 at a total estimated cost of $609 million, but was terminated in October 2011.

The Department of Defense's Expeditionary Combat Support System was canceled in December 2012 after spending more than a billion dollars and failing to deploy within 5 years of initially obligating funds.

The United States Coast Guard (Coast Guard) decided to terminate its Integrated Health Information System project in 2015. As reported by the agency in August 2017, the Coast Guard spent approximately $60 million over 7 years on this project, which resulted in no equipment or software that could be used for future efforts.

Our past work has found that these and other failed IT projects often suffered from a lack of disciplined and effective management, such as project planning, requirements definition, and program oversight and governance. In many instances, agencies had not consistently applied best practices that are critical to successfully acquiring IT. Federal IT projects have also failed due to a lack of oversight and governance. Executive-level governance and oversight across the government have often been ineffective, specifically from CIOs. For example, we have reported that some CIOs' roles were limited because they did not have the authority to review and approve the entire agency IT portfolio.

In addition to failures when acquiring IT, our cybersecurity work at federal agencies continues to highlight information security deficiencies. The following examples describe the types of risks we have found at federal agencies.

In September 2018, we reported that the Department of Education's Office of Federal Student Aid exercised minimal oversight of lenders' protection of student data and lacked assurance that appropriate risk-based safeguards were being effectively implemented, tested, and monitored.

In August 2017, we issued a report stating that, since the 2015 data breaches, the Office of Personnel Management (OPM) had taken actions to prevent, mitigate, and respond to data breaches involving sensitive personal and background investigation information. However, we noted that the agency had not fully implemented recommendations made to OPM by DHS's United States Computer Emergency Readiness Team to help the agency improve its overall security posture and improve its ability to protect its systems and information from security breaches.

We reported in July 2017 that information security at the Internal Revenue Service had weaknesses that limited its effectiveness in protecting the confidentiality, integrity, and availability of financial and sensitive taxpayer data.
An underlying reason for these weaknesses was that the Internal Revenue Service had not effectively implemented elements of its information security program. In May 2016, we found that the National Aeronautics and Space Administration, the Nuclear Regulatory Commission, OPM, and the Department of Veteran Affairs did not always control access to selected high-impact systems, patch known software vulnerabilities, or plan for contingencies. An underlying reason for these weaknesses was that the agencies had not fully implemented key elements of their information security programs. We reported in August 2016 that the information security of the Food and Drug Administration had significant weaknesses that jeopardized the confidentiality, integrity, and availability of its information systems and industry and public health data. <1.1. FITARA Increases CIO Authorities and Responsibilities for Managing IT> Congress and the President have enacted various key pieces of reform legislation to address IT management issues. These include the federal IT acquisition reform legislation commonly referred to as the Federal Information Technology Acquisition Reform Act (FITARA). This legislation was intended to improve covered agencies acquisitions of IT and enable Congress to monitor agencies progress and hold them accountable for reducing duplication and achieving cost savings. The law includes specific requirements related to seven areas: Agency CIO authority enhancements. CIOs at covered agencies have the authority to, among other things, (1) approve the IT budget requests of their respective agencies and (2) review and approve IT contracts. Federal data center consolidation initiative (FDCCI). Agencies covered by FITARA are required, among other things, to provide a strategy for consolidating and optimizing their data centers and issue quarterly updates on the progress made. Enhanced transparency and improved risk management. The Office of Management and Budget (OMB) and covered agencies are to make detailed information on federal IT investments publicly available, and agency CIOs are to categorize their investments by level of risk. Portfolio review. Covered agencies are to annually review IT investment portfolios in order to, among other things, increase efficiency and effectiveness and identify potential waste and duplication. Expansion of training and use of IT acquisition cadres. Covered agencies are to update their acquisition human capital plans to support timely and effective IT acquisitions. In doing so, the law calls for agencies to consider, among other things, establishing IT acquisition cadres (i.e., multi-functional groups of professionals to acquire and manage complex programs), or developing agreements with other agencies that have such cadres. Government-wide software purchasing program. The General Services Administration is to develop a strategic sourcing initiative to enhance government-wide acquisition and management of software. In doing so, the law requires that, to the maximum extent practicable, the General Services Administration should allow for the purchase of a software license agreement that is available for use by all executive branch agencies as a single user. Maximizing the benefit of the Federal Strategic Sourcing Initiative. Federal agencies are required to compare their purchases of services and supplies to what is offered under the Federal Strategic Sourcing Initiative. In June 2015, OMB released guidance describing how agencies are to implement FITARA. 
This guidance was intended to, among other things: assist agencies in aligning their IT resources with statutory requirements; establish government-wide IT management controls to meet the law s requirements, while providing agencies with flexibility to adapt to unique agency processes and requirements; strengthen the relationship between agency CIOs and bureau CIOs; and strengthen CIO accountability for IT costs, schedules, performance, and security. The guidance identifies a number of actions that agencies are to take to establish a basic set of roles and responsibilities (referred to as the common baseline) for CIOs and other senior agency officials and, thus, to implement the authorities described in the law. For example, agencies are to conduct a self-assessment and submit a plan describing the changes they intend to make to ensure that common baseline responsibilities are implemented. In addition, in August 2016, OMB released guidance intended, among other things, to define a framework for achieving the data center consolidation and optimization requirements of FITARA. The guidance directed agencies to develop a data center consolidation and optimization strategic plan that defined the agency s data center strategy for fiscal years 2016, 2017, and 2018. This strategy was to include, among other things, a statement from the agency CIO indicating whether the agency had complied with all data center reporting requirements in FITARA. Further, the guidance states that OMB is to maintain a public dashboard to display consolidation-related costs savings and optimization performance information for the agencies. <1.2. Congress Has Undertaken Efforts to Continue Selected FITARA Provisions and Modernize Federal IT> Congress has recognized the importance of agencies continued implementation of FITARA provisions, and has taken legislative action to extend selected provisions beyond their original dates of expiration. Specifically, Congress and the President enacted laws to: remove the expiration dates for the enhanced transparency and improved risk management provisions, which were set to expire in 2019; remove the expiration date for portfolio review, which was set to expire in 2019; and extend the expiration date for FDCCI from 2018 to 2020. In addition, Congress and the President enacted a law to authorize the availability of funding mechanisms to help further agencies efforts to modernize IT. The law, known as the Modernizing Government Technology (MGT) Act, authorizes agencies to establish working capital funds for use in transitioning away from legacy IT systems, as well as for addressing evolving threats to information security. The law also creates the Technology Modernization Fund within the Department of the Treasury, from which agencies can borrow money to retire and replace legacy systems, as well as to acquire or develop systems. Further, in February 2018, OMB issued guidance for agencies on implementing the MGT Act. The guidance was intended to provide agencies additional information regarding the Technology Modernization Fund, as well as the administration and funding of the related IT working capital funds. Specifically, the guidance encouraged agencies to begin submitting initial project proposals for modernization on February 27, 2018. 
In addition, in accordance with the MGT Act, the guidance provided details regarding a Technology Modernization Board, which is to consist of (1) the Federal CIO; (2) a senior IT official from the General Services Administration; (3) a member of DHS s National Protection and Program Directorate; and (4) four federal employees with technical expertise in IT development, financial management, cybersecurity and privacy, and acquisition that were appointed by the Director of OMB. <1.3. FISMA Establishes Responsibilities for Agencies to Address Federal Cybersecurity> Congress and the President enacted the Federal Information Security Modernization Act of 2014 (FISMA) to improve federal cybersecurity and clarify government-wide responsibilities. The act addresses the increasing sophistication of cybersecurity attacks, promotes the use of automated security tools with the ability to continuously monitor and diagnose the security posture of federal agencies, and provides for improved oversight of federal agencies information security programs. To this end, the act clarifies and assigns specific responsibilities to entities such as OMB, DHS, and the federal agencies. Table 1 describes a selection of the OMB, DHS, and agency responsibilities. <1.4. The Administration Has Undertaken Efforts to Improve and Modernize Federal IT and Strengthen Cybersecurity> Beyond the implementation of FITARA, FISMA, and related actions, the administration has also initiated other efforts intended to improve federal IT and the nation s cybersecurity. Specifically, in March 2017, the administration established the Office of American Innovation, which has a mission to, among other things, make recommendations to the President on policies and plans aimed at improving federal government operations and services. In doing so, the office is to consult with both OMB and the Office of Science and Technology Policy on policies and plans intended to improve government operations and services, improve the quality of life for Americans, and spur job creation. In May 2017, the Administration also established the American Technology Council, which has a goal of helping to transform and modernize federal agency IT and how the federal government uses and delivers digital services. The President is the chairman of this council, and the Federal CIO and the United States Digital Service Administrator are among the members. In addition, in May 2017, the President signed Executive Order 13800, Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure. This executive order outlined actions to enhance cybersecurity across federal agencies and critical infrastructure to improve the nation s cyber posture and capabilities against cybersecurity threats. Among other things, the order tasked the Director of the American Technology Council to coordinate a report to the President from the Secretary of DHS, the Director of OMB, and the Administrator of the General Services Administration, in consultation with the Secretary of Commerce, regarding the modernization of federal IT. In response, the Report to the President on Federal IT Modernization was issued in December 2017 and outlined the current and envisioned state of federal IT. The report focused on modernization efforts to improve the security posture of federal IT. 
Further, it recognized that agencies have attempted to modernize systems but have been stymied by a variety of factors, including resource prioritization, ability to procure services quickly, and technical issues. The report provided multiple recommendations intended to address these issues through the modernization and consolidation of networks and the use of shared services to enable future network architectures. Further, in March 2018, the Administration issued the President's Management Agenda, which laid out a long-term vision for modernizing the federal government. The agenda identified three related drivers of transformation that are intended to push change across the federal government: IT modernization; data, accountability, and transparency; and the workforce of the future. The Administration also established 14 related Cross-Agency Priority goals, many of which have elements that involve IT. In particular, the Cross-Agency Priority goal on IT modernization stated that modern IT must function as the backbone of how government serves the public in the digital age. This goal established three priorities that are to guide the Administration's efforts to modernize federal IT: (1) enhancing mission effectiveness by improving the quality and efficiency of critical services, including the increased utilization of cloud-based solutions; (2) reducing cybersecurity risks to the federal mission by leveraging current commercial capabilities and implementing cutting-edge cybersecurity capabilities; and (3) building a modern IT workforce by recruiting, reskilling, and retaining professionals able to help drive modernization with up-to-date technology. On May 15, 2018, the President signed Executive Order 13833: Enhancing the Effectiveness of Agency Chief Information Officers. Among other things, this executive order was intended to better position agencies to modernize their IT systems, execute IT programs more efficiently, and reduce cybersecurity risks. The order pertains to 22 of the 24 Chief Financial Officers (CFO) Act agencies; the Department of Defense and the Nuclear Regulatory Commission are exempt. For the covered agencies, the executive order strengthened the role of agency CIOs by, among other things, requiring them to report directly to their agency head; serve as their agency head's primary IT strategic advisor; and have a significant role in all management, governance, and oversight processes related to IT. In addition, one of the cybersecurity requirements directed agencies to ensure that the CIO works closely with an integrated team of senior executives, including those with expertise in IT, security, and privacy, to implement appropriate risk management measures.

<2. Agencies Have Not Fully Addressed the IT Acquisitions and Operations High-Risk Area>

In the March 2019 update to our high-risk series, we reported that agencies still needed to complete significant work related to the management of IT acquisitions and operations. As government-wide spending on IT increases every year, the need for appropriate stewardship of that investment increases as well. However, we stated that OMB and federal agencies have not made significant progress since 2017 in taking the steps needed to improve how these financial resources are budgeted and realized. To address this issue, we highlighted the need for OMB and federal agencies to further implement the requirements of federal IT acquisition reforms, including the enhancement of CIO authority.
Our update to the IT acquisitions and operations high-risk area also stressed that OMB and agencies needed to continue to implement our prior recommendations in order to improve their ability to effectively and efficiently invest in IT. Specifically, since fiscal year 2010, we have made 1,278 recommendations to address shortcomings in IT acquisitions and operations. As stated in our 2019 high-risk update, OMB and agencies should demonstrate government-wide progress by, among other things, implementing at least 80 percent of our recommendations related to managing IT acquisitions and operations. As of June 2019, OMB and agencies had fully implemented 768 (or 60 percent) of these recommendations. Figure 1 summarizes the progress that OMB and agencies have made in addressing our recommendations compared to the 80 percent target. Overall, federal agencies would be better positioned to realize billions in cost savings and additional management improvements if they address these recommendations, including those aimed at implementing CIO responsibilities, reviewing IT acquisitions, improving data center consolidation, and managing software licenses.

<2.1. Agencies Need to Address Shortcomings and Challenges in Implementing CIO Responsibilities>

In all, various laws, such as FITARA, and related guidance assign 35 IT management responsibilities to CIOs in six key areas. These areas are: leadership and accountability, budgeting, information security, investment management, workforce, and strategic planning. In August 2018, we reported that none of the 24 agencies we reviewed had policies that fully addressed the role of their CIO, as called for by federal laws and guidance. In this regard, a majority of the agencies had fully or substantially addressed the role of their CIOs for the area of leadership and accountability. In addition, a majority of the agencies had substantially or partially addressed the role of their CIOs for two areas: information security and IT budgeting. However, most agencies had partially or minimally addressed the role of their CIOs for two areas: investment management and strategic planning. Further, the majority of the agencies minimally addressed or did not address the role of their CIOs for the remaining area: IT workforce. Figure 2 depicts the extent to which the 24 agencies addressed the role of their CIOs for the six areas. Notwithstanding the shortfalls in agencies' policies addressing the roles of their CIOs, most agency officials stated that their CIOs are implementing the responsibilities even if the agencies do not have policies requiring implementation. Nevertheless, in their responses to our survey, the CIOs of the 24 selected agencies acknowledged that they were not always very effective in implementing the six IT management areas. Specifically, at least ten of the CIOs indicated that they were less than very effective for each of the six areas of responsibility. We believe that until agencies fully address the role of CIOs in their policies, agencies will be limited in addressing longstanding IT management challenges. Figure 3 depicts the extent to which the CIOs reported their effectiveness in implementing the six areas of responsibility. Beyond the actions of the agencies, however, shortcomings in agencies' policies were also partially attributable to two weaknesses in OMB's guidance.
First, the guidance did not comprehensively address all CIO responsibilities, such as those related to assessing the extent to which personnel meet IT management knowledge and skill requirements and ensuring that personnel are held accountable for complying with the information security program. Correspondingly, the majority of the agencies policies did not fully address nearly all of the responsibilities that were not included in OMB s guidance. Second, OMB s guidance did not ensure that CIOs had a significant role in (1) IT planning, programming, and budgeting decisions; and (2) execution decisions and the management, governance, and oversight processes related to IT, as required by federal law and guidance. In the absence of comprehensive guidance, CIOs would not be positioned to effectively acquire, maintain, and secure their IT systems. In response to the survey conducted for our August 2018 report, the 24 agency CIOs also identified a number of factors that enabled and challenged their ability to effectively manage IT. Specifically, most agency CIOs cited five factors as being enablers to effectively carry out their responsibilities: (1) NIST guidance, (2) the CIO s position within the agency hierarchy, (3) OMB guidance, (4) coordination with the Chief Acquisition Officer (CAO), and (5) legal authority. Further, three factors were cited by CIOs as major factors that have challenged their ability to effectively carry out responsibilities: (1) processes for hiring, recruiting, and retaining IT personnel; (2) financial resources; and (3) the availability of personnel/staff resources. As shown in figure 4, the five enabling factors were identified by at least half of the 24 CIOs and the three factors cited as major challenges were identified by at least half of the CIOs. Although OMB issued guidance aimed at addressing the three factors identified by a majority of the CIOs as major challenges, the guidance did not fully do so. Further, regarding the financial resources challenge, OMB recently required agencies to provide data on CIO authority over IT spending; however, its guidance did not provide a complete definition of that authority. In the absence of such guidance, agencies created varying definitions of CIO authority. Until OMB updates its guidance to include a complete definition of the authority that CIOs are to have over IT spending, it will be difficult for OMB to identify any deficiencies in this area and to help agencies make any needed improvements. In order to address challenges in implementing CIO responsibilities, we made three recommendations to OMB and one recommendation to each of the selected 24 federal agencies for each of the six IT management areas. Most agencies agreed with or had no comments on the recommendations. However, as of June 2019, none of the 27 recommendations had been implemented. We will continue to monitor the implementation of these recommendations. <2.2. Agencies Need to Ensure that IT Acquisitions Are Reviewed and Approved by CIOs> FITARA includes a provision to enhance covered agency CIOs authority through, among other things, requiring agency heads to ensure that CIOs review and approve IT contracts. OMB s FITARA implementation guidance expanded upon this aspect of the legislation in a number of ways. 
Specifically, according to the guidance: CIOs may review and approve IT acquisition strategies and plans, rather than individual IT contracts; CIOs can designate other agency officials to act as their representatives, but the CIOs must retain accountability; CAOs are responsible for ensuring that all IT contract actions are consistent with CIO-approved acquisition strategies and plans; and CAOs are to indicate to the CIOs when planned acquisition strategies and acquisition plans include IT. In January 2018, we reported that most of the CIOs at 22 selected agencies were not adequately involved in reviewing billions of dollars of IT acquisitions. For instance, most of the 22 agencies did not identify all of their IT contracts. In this regard, the agencies identified 78,249 IT-related contracts, to which they obligated $14.7 billion in fiscal year 2016. However, we identified 31,493 additional IT contracts with combined obligations totaling $4.5 billion, raising the total amount obligated to IT contracts by these agencies in fiscal year 2016 to at least $19.2 billion. Figure 5 reflects the obligations that the 22 selected agencies reported to us relative to the obligations we identified. The percentage of additional IT contract obligations we identified varied among the selected agencies. For example, the Department of State did not identify 1 percent of its IT contract obligations. Conversely, eight agencies did not identify over 40 percent of their IT contract obligations. Many of the selected agencies that did not identify these IT contract obligations also did not follow OMB guidance. Specifically, 14 of the 22 agencies did not involve the acquisition office in their process to identify IT acquisitions for CIO review, as required by OMB. In addition, seven agencies did not establish guidance to aid officials in recognizing IT. We concluded that, until these agencies involve the acquisitions office in their IT acquisition identification processes and establish supporting guidance, they cannot ensure that they will identify all such acquisitions. Without proper identification of IT acquisitions, these agencies and their CIOs cannot effectively provide oversight of these acquisitions. In addition to not identifying all IT contracts, 14 of the 22 selected agencies did not fully satisfy OMB's requirement that the CIO review and approve IT acquisition plans or strategies. Further, only 11 of 96 randomly selected IT contracts at 10 of the 22 agencies were CIO-reviewed and approved as required by OMB's guidance. The 85 contracts that were not reviewed had a total possible value of approximately $23.8 billion. Until agencies ensure that CIOs are able to review and approve all IT acquisitions, CIOs will continue to have limited visibility and input into their agencies' planned IT expenditures and will not be able to effectively use the increased authority that FITARA's contract approval provision is intended to provide. Further, agencies will likely miss an opportunity to strengthen their CIOs' authority and the oversight of acquisitions. As a result, agencies may award IT contracts that are duplicative, wasteful, or poorly conceived. As a result of these findings, we made 39 recommendations in our January 2018 report. Among these, we recommended that agencies ensure that their acquisition offices are involved in identifying IT acquisitions and issuing related guidance and that IT acquisitions are reviewed in accordance with OMB guidance.
OMB and the majority of the agencies generally agreed with or did not comment on the recommendations. As of June 2019, 23 of the 39 recommendations had not been implemented. <2.3. Agencies Have Made Significant Progress in Consolidating Data Centers, but Need to Take Action to Achieve Planned Cost Savings> Data center consolidation efforts are key to implementing FITARA. Specifically, OMB established the FDCCI in February 2010 to improve the efficiency, performance, and environmental footprint of federal data center activities. The enactment of FITARA in 2014 codified and expanded the initiative. In addition, in August 2016, OMB issued a memorandum that established the Data Center Optimization Initiative (DCOI) and included guidance on how to implement the data center consolidation and optimization provisions of FITARA. Among other things, the guidance required agencies to consolidate inefficient infrastructure, optimize existing facilities, improve their security posture, and achieve cost savings. According to the 24 agencies covered by the initiative, data center consolidation and optimization efforts had resulted in approximately $4.7 billion in cost savings through August 2018. Even so, additional work remains to fully carry out the initiative. Specifically, in a series of reports that we issued from July 2011 through April 2019, we noted that, while data center consolidation could potentially save the federal government billions of dollars, weaknesses existed in several areas, including agencies' data center consolidation plans, data center optimization, and OMB's tracking and reporting on related cost savings. In April 2019, we reported that agencies continued to report mixed progress toward achieving OMB's goals for closing data centers and realizing the associated savings by September 2018. Specifically, as of August 2018, over half of the agencies reported that they had met, or planned to meet, all of their OMB-assigned closure goals for tiered data centers by the deadline. Six agencies reported that they did not plan to meet their goals for tiered data centers. In addition, as of August 2018, 11 agencies reported that they had already met the goal for closing 60 percent of their non-tiered centers, three agencies reported that they planned to meet the goal by the end of fiscal year 2018, and nine agencies reported that they did not plan to meet the goal by the end of fiscal year 2018. In all, the 24 agencies reported a total of 6,250 data center closures as of August 2018, which represented about half of the total reported number of federal data centers. In addition, the agencies reported 1,009 planned closures by the end of fiscal year 2018, with an additional 191 closures planned through fiscal year 2023, for a total of 1,200 further closures. Further, in August 2018, 22 agencies reported that they had achieved $1.94 billion in cost savings for fiscal years 2016 through 2018, while two agencies reported that they had not achieved any savings. In addition to that amount, 21 agencies identified an additional $0.42 billion in planned savings through fiscal year 2018, for a total of $2.36 billion in planned cost savings from fiscal years 2016 through 2018. Nevertheless, this total is about $0.34 billion short of OMB's goal of $2.7 billion for overall DCOI savings. From July 2011 through April 2019, we made a total of 196 recommendations to OMB and 24 agencies to improve the execution and oversight of the initiative.
Most agencies and OMB agreed with our recommendations or had no comments. As of June 2019, 79 of these 196 recommendations had not been implemented. <2.4. Agencies Need to Better Manage Software Licenses to Achieve Savings> In our 2015 high-risk report s discussion of IT acquisitions and operations, we identified the management of software licenses as a focus area, in part because of the potential for cost savings. Federal agencies engage in thousands of software licensing agreements annually. The objective of software license management is to manage, control, and protect an organization s software assets. Effective management of these licenses can help avoid purchasing too many licenses, which can result in unused software, as well as too few licenses, which can result in noncompliance with license terms and cause the imposition of additional fees. As part of its PortfolioStat initiative, OMB has developed a policy that addresses software licenses. This policy requires agencies to conduct an annual, agency-wide IT portfolio review to, among other things, reduce commodity IT spending. Such areas of spending could include software licenses. In May 2014, we reported on federal agencies management of software licenses and determined that better management was needed to achieve significant savings government-wide. Of the 24 selected agencies we reviewed, only two had comprehensive policies that included the establishment of clear roles and central oversight authority for managing enterprise software license agreements, among other things. Of the remaining 22 agencies, 18 had policies that were not comprehensive, and four had not developed any policies. Further, we found that only two of the 24 selected agencies had established comprehensive software license inventories, a leading practice that would help them to adequately manage their software licenses. The inadequate implementation of this and other leading practices in software license management was partially due to weaknesses in agencies policies. As a result, we concluded that agencies oversight of software license spending was limited or lacking, thus, potentially leading to missed savings. However, the potential savings could be significant considering that, in fiscal year 2012, one major federal agency reported saving approximately $181 million by consolidating its enterprise license agreements, even when its oversight process was ad hoc. Accordingly, we recommended that OMB issue a directive to help guide agencies in managing software licenses. We also made 135 recommendations to the 24 agencies to improve their policies and practices for managing licenses. Among other things, we recommended that the agencies (1) regularly track and maintain a comprehensive inventory of software licenses and (2) analyze the inventory to identify opportunities to reduce costs and better inform investment decision making. Most agencies generally agreed with the recommendations or had no comments. As of June 2019, 27 of the 135 recommendations had not been implemented. Table 2 reflects the extent to which the 24 agencies implemented the recommendations in these two areas. <3. Agencies Need to Address Shortcomings in Cybersecurity Area> We have consistently identified shortcomings in the federal government s approach to cybersecurity. 
In particular, in a September 2018 report, we identified four major cybersecurity challenges: (1) establishing a comprehensive cybersecurity strategy and performing effective oversight, (2) securing federal systems and information, (3) protecting cyber critical infrastructure, and (4) protecting privacy and sensitive data. To address these challenges, we identified 10 critical actions that the federal government and other entities need to take. For example, in order to address the challenge of securing federal systems and information, we identified three actions that the agencies should take: (1) improve implementation of government-wide cybersecurity initiatives, (2) address weaknesses in federal information security programs, and (3) enhance the federal response to cyber incidents. Figure 6 depicts the 10 critical actions to address the four major cybersecurity challenges. As we have previously noted, in order to strengthen the federal government s cybersecurity posture, agencies should fully implement the information security programs required by FISMA. In this regard, FISMA provides a framework for ensuring the effectiveness of information security controls for federal information resources. The law requires each agency to develop, document, and implement an agency-wide information security program. Such a program should include risk assessments; the development and implementation of policies and procedures to cost- effectively reduce risks; plans for providing adequate information security for networks, facilities, and systems; security awareness and specialized training; the testing and evaluation of the effectiveness of controls; the planning, implementation, evaluation, and documentation of remedial actions to address information security deficiencies; procedures for detecting, reporting, and responding to security incidents; and plans and procedures to ensure continuity of operations. Since fiscal year 2010, we have made 3,058 recommendations to agencies aimed at addressing the four cybersecurity challenges. These recommendations have identified actions for agencies to take to strengthen technical security controls over their computer networks and systems. They also have included recommendations for agencies to fully implement aspects of their information security programs, as mandated by FISMA. Nevertheless, many agencies continue to be challenged in safeguarding their information systems and information, in part, because many of these recommendations have not been implemented. Of the 3,058 recommendations made since 2010, 2,384 (or 78 percent) had been implemented as of June 2019, leaving 674 recommendations (or 22 percent) unimplemented. <3.1. Agencies Inspectors General Are to Identify Information Security Program Weaknesses> In order to determine the effectiveness of the agencies information security programs and practices, FISMA requires federal agencies inspectors general to conduct annual independent evaluations. The agencies are to report the results of these evaluations to OMB, and OMB is to summarize the results in annual reports to Congress. In these evaluations, the inspectors general are to frame the scope of their analyses, identify key findings, and detail recommendations to address the findings. The evaluations also are to capture maturity model ratings for their respective agencies. Toward this end, in fiscal year 2017, the inspector general community, in partnership with OMB and DHS, finalized a 3-year effort to create a maturity model for FISMA metrics. 
The maturity model aligns with the five function areas in the NIST Framework for Improving Critical Infrastructure Cybersecurity (Cybersecurity Framework): identify, protect, detect, respond, and recover. This alignment is intended to help promote consistent and comparable metrics and criteria and provide agencies with a meaningful independent assessment of their information security programs. The maturity model is designed to summarize the status of agencies' information security programs on a five-level capability maturity scale. The five maturity levels are defined as follows:
Level 1 (Ad hoc): Policies, procedures, and strategy are not formalized; activities are performed in an ad hoc, reactive manner.
Level 2 (Defined): Policies, procedures, and strategy are formalized and documented but not consistently implemented.
Level 3 (Consistently Implemented): Policies, procedures, and strategy are consistently implemented, but quantitative and qualitative effectiveness measures are lacking.
Level 4 (Managed and Measurable): Quantitative and qualitative measures on the effectiveness of policies, procedures, and strategy are collected across the organizations and used to assess them and make necessary changes.
Level 5 (Optimized): Policies, procedures, and strategy are fully institutionalized, repeatable, self-generating, consistently implemented, and regularly updated based on a changing threat and technology landscape and business/mission needs.
According to this maturity model, Level 4 (managed and measurable) represents an effective level of security. Therefore, if an inspector general rates an agency's information security program at Level 4 or Level 5, then that agency is considered to have an effective information security program. For fiscal year 2017, the inspectors general for six of the 23 civilian CFO Act agencies reported that their agencies had an effective agency-wide information security program. Specifically, for the five function areas in the NIST Cybersecurity Framework, most inspectors general reported that their agencies were at Level 3 (consistently implemented) for the identify, protect, and recover functions, and at Level 2 (defined) for the detect and respond functions. Table 3 shows the individual maturity ratings for each covered agency. <3.2. OMB Requires Agencies to Meet Targets for Cybersecurity Metrics> In its efforts toward strengthening the federal government's cybersecurity, OMB also requires agencies to submit related cybersecurity metrics as part of its Cross-Agency Priority goals. In particular, OMB developed the IT modernization goal so that federal agencies will be able to build and maintain more modern, secure, and resilient IT. A key part of this goal is to reduce cybersecurity risks to the federal mission through three strategies: manage asset security, protect networks and data, and limit personnel access. The key targets supporting each of these strategies correspond to areas within the FISMA metrics. Table 4 outlines the strategies, their associated targets, and the 23 civilian CFO Act agencies' progress in meeting those targets, as of June 2018. In conclusion, by addressing the high-risk areas on improving the management of IT acquisitions and operations and ensuring the cybersecurity of the nation, the government has the opportunity to both save billions of dollars and advance the efficiency and effectiveness of government services.
Most agencies have taken steps to execute key IT management and cybersecurity initiatives, including implementing CIO responsibilities, requiring CIO reviews of IT acquisitions, realizing data center consolidation cost savings, managing software assets, and complying with FISMA requirements. The agencies have also continued to address the recommendations that we have made over the past several years. Nevertheless, further efforts by OMB and federal agencies to implement our previous recommendations would better position them to improve the management and security of federal IT. To help ensure that these efforts succeed, we will continue to monitor agencies efforts toward implementing the recommendations. Chairman Connolly, Ranking Member Meadows, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. <4. GAO Contact and Staff Acknowledgments> If you or your staff have any questions about this testimony, please contact Carol C. Harris, Director of Information Technology Acquisition Management Issues, at (202) 512-4456 or harriscc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Kevin Walsh (Assistant Director), Meredith Raymond (Analyst-in-Charge), Chris Businsky, and Rebecca Eyler. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Why GAO Did This Study
The federal government plans to spend over $90 billion in fiscal year 2019 on IT. Even so, IT investments have too often failed or contributed little to mission-related outcomes. Further, increasingly sophisticated threats and frequent cyber incidents underscore the need for effective information security. To focus attention on these concerns, GAO's high-risk list includes both the management of IT acquisitions and operations and cybersecurity.
This statement summarizes federal agencies' progress in improving the management and ensuring the security of federal IT. It is primarily based on GAO's reports issued between July 2011 and April 2019 on (1) CIO responsibilities, (2) CIO IT acquisition review requirements, (3) data center consolidation efforts, (4) the management of software licenses, and (5) cybersecurity.
What GAO Found
The Office of Management and Budget (OMB) and federal agencies have taken steps to improve the management of information technology (IT) acquisitions and operations and ensure federal cybersecurity through a series of initiatives. As of June 2019, federal agencies had fully implemented 60 percent of the 1,277 IT management-related recommendations that GAO has made to them since fiscal year 2010. Likewise, agencies had implemented 78 percent of the 3,058 security-related recommendations that GAO has made since 2010. Even with this progress, significant actions remain to be completed.
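The arithmetic behind these percentages is simple, and the short Python sketch below restates the counts cited in this statement (1,277 IT management-related recommendations with 768 implemented, and 3,058 security-related recommendations with 2,384 implemented, as of June 2019) and estimates how many additional closures would be needed to reach the 80 percent implementation target discussed for the IT acquisitions and operations recommendations. The script and its variable names are illustrative only and are not part of GAO's methodology.

```python
# Illustrative restatement of the recommendation counts cited in this statement
# (as of June 2019). Not an official GAO computation.
import math

it_total = 1277          # IT acquisitions and operations recommendations since fiscal year 2010
it_implemented = 768     # fully implemented as of June 2019
it_target_share = 0.80   # 80 percent implementation target discussed in this statement

cyber_total = 3058       # cybersecurity recommendations since 2010
cyber_implemented = 2384 # implemented as of June 2019

it_needed = math.ceil(it_target_share * it_total) - it_implemented

print(f"IT recommendations implemented: {it_implemented / it_total:.0%}")        # ~60%
print(f"Additional closures needed to reach 80 percent: {it_needed}")            # ~254
print(f"Security recommendations implemented: {cyber_implemented / cyber_total:.0%}")  # ~78%
```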
Chief Information Officer (CIO) responsibilities. Laws such as the Federal Information Technology Acquisition Reform Act (FITARA) and related guidance assigned 35 key IT management responsibilities to CIOs to help address longstanding challenges. In August 2018, GAO reported that none of the 24 selected agencies had established policies that fully addressed the role of their CIO, as called for by laws and guidance. GAO recommended that OMB and each of the 24 agencies take actions to improve the effectiveness of CIOs' implementation of their responsibilities. As of June 2019, none of the 27 recommendations had been implemented.
CIO IT acquisition review. According to FITARA, covered agencies' CIOs are required to review and approve IT contracts. Nevertheless, in January 2018, GAO reported that most of the CIOs at 22 covered agencies were not adequately involved in reviewing billions of dollars of IT acquisitions. Consequently, GAO made 39 recommendations to improve CIO oversight for these acquisitions. As of June 2019, 23 of the recommendations had not been implemented.
Consolidating data centers. OMB launched an initiative in 2010 to reduce data centers. According to 24 agencies, data center consolidation and optimization efforts had resulted in approximately $4.7 billion in cost savings through August 2018. Even so, additional work remains. GAO has made 196 recommendations to OMB and agencies to improve the reporting of related cost savings and to achieve optimization targets. As of June 2019, 79 of the recommendations had not been implemented.
Managing software licenses. Effective management of software licenses can help avoid purchasing too many licenses that result in unused software. In May 2014, GAO reported that better management of licenses was needed to achieve savings, and made 136 recommendations to improve such management. As of June 2019, 27 of the recommendations had not been implemented.
Ensuring the nation's cybersecurity. While the government has acted to protect federal information systems, GAO has consistently identified shortcomings in the federal government's approach to cybersecurity. The 3,058 recommendations that GAO made to agencies since 2010 have been aimed at addressing cybersecurity challenges. These recommendations have identified actions for agencies to take to fully implement aspects of their information security programs and strengthen technical security controls over their computer networks and systems. As of June 2019, 674 of the recommendations had not been implemented.
What GAO Recommends
Since fiscal year 2010, GAO has made about 1,300 recommendations to OMB and agencies to address shortcomings in IT acquisitions and operations, as well as approximately 3,000 recommendations to agencies to improve the security of federal systems. These recommendations addressed, among other things, implementation of CIO responsibilities, oversight of the data center consolidation initiative, management of software license efforts, and the efficacy of security programs and technical controls. Implementation of these recommendations is essential to strengthening federal agencies' acquisitions, operations, and cybersecurity efforts. |
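As a supplement to the FISMA discussion earlier in this statement, the sketch below encodes the inspector general maturity scale and the convention that a rating of Level 4 (managed and measurable) or Level 5 (optimized) denotes an effective information security program. How an inspector general rolls individual function-area ratings into a single program-level rating can vary; requiring every function area to reach Level 4 is an assumption made here for illustration, and the class and function names are not part of any official reporting tool.

```python
# Minimal sketch of the inspector general FISMA maturity scale summarized above.
# The level definitions and the "Level 4 or higher is effective" convention come from
# the statement; the roll-up rule and all names are illustrative assumptions.
from enum import IntEnum
from typing import Dict

class MaturityLevel(IntEnum):
    AD_HOC = 1                    # not formalized; ad hoc, reactive activities
    DEFINED = 2                   # formalized and documented, not consistently implemented
    CONSISTENTLY_IMPLEMENTED = 3  # consistently implemented, effectiveness measures lacking
    MANAGED_AND_MEASURABLE = 4    # effectiveness measures collected and used
    OPTIMIZED = 5                 # fully institutionalized and regularly updated

def program_is_effective(ratings: Dict[str, MaturityLevel]) -> bool:
    # Assumption for illustration: every function area must reach Level 4 or higher.
    return all(level >= MaturityLevel.MANAGED_AND_MEASURABLE for level in ratings.values())

# Example mirroring the typical fiscal year 2017 ratings described in the statement:
# Level 3 for identify/protect/recover and Level 2 for detect/respond.
example = {
    "identify": MaturityLevel.CONSISTENTLY_IMPLEMENTED,
    "protect": MaturityLevel.CONSISTENTLY_IMPLEMENTED,
    "detect": MaturityLevel.DEFINED,
    "respond": MaturityLevel.DEFINED,
    "recover": MaturityLevel.CONSISTENTLY_IMPLEMENTED,
}
print(program_is_effective(example))  # False, i.e., not rated effective
```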
gao_GAO-20-39 | gao_GAO-20-39_0 | <1. Background> <1.1. TRICARE T-2017 Contracts and Transition Process> Under T-2017, DHA reduced the number of TRICARE regions by merging the North and South regions to form the East region, which has approximately 6 million beneficiaries, while the West region remained the same with approximately 3.4 million beneficiaries (see figure 1). In July 2016, DHA awarded the East region contract to Humana Government Business, the incumbent South region contractor, and the West region contract to Health Net Federal Services, the incumbent North region contractor. The T-2017 contracts include five 1-year performance periods and are scheduled to expire on December 31, 2022. As a result of the changes in regional structure, the T-2017 contract transition included a transition from Health Net Federal Services (North region) and Humana Government Business (South region) to Humana Government Business in the East region as well as a transition from UnitedHealth Military & Veterans to Health Net Federal Services in the West Region. The start of the T-2017 transition was initially planned for August 2016, with a health care delivery start date of August 2017. However, due to bid protests filed against each contract, the transition start date was pushed out to January 1, 2017 with a health care delivery start date of October 1, 2017. To manage the T-2017 transition, DHA assigned individuals to lead the transition in each region, who were responsible for coordinating all major transition activities. The transition leads were supported by other staff, including contracting officers, contracting officer representatives, and subject matter experts. In addition, DHA established an organizational structure comprised of several groups to oversee the T-2017 transition from day-to-day oversight to leadership updates. The TRICARE Operations Manual, which is part of the managed care support contract, establishes transition guidance that includes requirements for both the incoming and outgoing contractors. The T- 2017 transition guidance focused on the incoming contractors readiness to perform in seven critical areas: (1) provider network, (2) referral management, (3) enrollment, (4) medical management, (5) claims processing, (6) customer service, and (7) management. For the T-2017 transition, DHA introduced two new oversight methods to ensure contractors readiness in the seven critical areas prior to the start of health care delivery. These methods and other guidance are outlined in the TRICARE Operations Manual and T-2017 contracts. The performance readiness validation (PRV) and performance readiness assessment and verification (PRAV) referred to as PRV/PRAV tested contractors functionality in the seven critical areas outlined in the TRICARE Operations Manual. For the PRV, contractors validated their own readiness for specific requirements within each area. For example, the contractor had to validate that it had a complete provider directory online and operational 60 days prior to the start of health care delivery at a 95 percent accuracy rate. The number of requirements varied by critical area. For the PRAV, DHA subsequently assessed and verified contractors validation prior to the start of health care delivery. DHA also established financial penalties referred to as transition performance guarantees for five of the seven critical areas. 
The T- 2017 contracts specify that if a contractor does not meet a transition- in requirement in any one of these five areas, DHA will assess a financial penalty (see table 1). In December 2016 prior to the start of the transition DHA held transition specification meetings with the incoming and outgoing contractors to begin planning critical T-2017 transition activities. The incoming contractors were also required to provide DHA with an integrated master plan and an integrated master schedule outlining processes and specific steps for the transition as well as a risk management plan that identified risks to the successful execution of the contractor s schedule. Contractors were required to provide weekly updates to DHA on the status of their transition schedule progress. In April 2018 several months after the transition had ended DHA produced an after action report to identify best practices, lessons learned, and recommendations to improve future TRICARE contract transitions. DHA is currently in the process of developing its fifth generation of contracts, referred to as the T-5 contracts. <1.2. TRICARE Select> As required by the NDAA 2017, DHA established a new preferred provider benefit option called TRICARE Select and terminated the TRICARE Standard and Extra benefit options by January 1, 2018. Prior to 2018, beneficiaries primarily had a choice between three basic options TRICARE Prime (a managed care option), TRICARE Standard (a fee-for- service option), or TRICARE Extra (a preferred provider organization option). The TRICARE Standard and Extra options did not require beneficiaries to enroll. However, beneficiaries who choose the TRICARE Select option must enroll during an annual open enrollment period or within 90 days of experiencing a qualifying life event. Beneficiary cost sharing responsibilities were also modified for the new benefit option. <2. DHA Delayed Time Frames for Key Transition Activities to Implement TRICARE Select> The implementation of TRICARE Select delayed timeframes for the T- 2017 transition and was the primary challenge of the T-2017 transition, according to DHA and contractor officials. Because the T-2017 contracts were awarded prior to the enactment of the NDAA 2017, DHA had to incorporate TRICARE Select requirements into the ongoing T-2017 transition process, including developing updated guidance for contractors. As a result of the time needed to plan for and implement a new benefit, DHA delayed timeframes for the following key transition activities. DHA postponed the start of health care delivery by 3 months. DHA moved the start of health care delivery from October 1, 2017 to January 1, 2018 (see fig. 2). According to DHA officials, DHA made this change to align the start of health care delivery with the implementation of TRICARE Select to minimize the impact that two, successive changes could have had on the continuity of care for beneficiaries. On March 30, 2017 three months into the transition DHA sent a letter to the contractors informing them of this decision. DHA also directed its incoming contractors to submit modified transition schedules and risk management plans. DHA had to delay the start of a planned enrollment freeze and lengthen its duration. According to DHA officials, in a typical transition, DMDC requires 3 to 4 days to make adjustments to beneficiaries records in the Defense Enrollment Eligibility Reporting System, including assigning beneficiaries to incoming contractors and regions for the T-2017 contracts. 
During this time, which is referred to as an enrollment freeze, contractors cannot access this system to process any enrollments. For the T-2017 transition, DHA and DMDC officials stated that, given the termination of two benefit options and the new enrollment requirements for TRICARE Select, DMDC needed additional time to adjust every beneficiary enrollment record (over 9 million). Therefore, DHA delayed the start of the T-2017 enrollment freeze from August to December 2017 and increased its duration from 3 to 4 days to 19 days December 1-19, 2017 (see fig. 2). Contractors had less time to process enrollments and make other system changes. Once an enrollment freeze has ended, incoming and outgoing contractors have a designated period of time, referred to as a dual operations period, to process beneficiaries enrollments and make other systems changes, such as assigning Prime beneficiaries to a primary care manager (PCM). Due to the extended enrollment freeze, contractors had a shorter dual operations period less than 2 weeks in December 2017 rather than 6 to 8 weeks beginning in August 2017 (see fig. 2). According to contractors, the shorter dual operations period for T-2017 transition contributed to a backlog of enrollment requests and PCM assignments that they were unable to process prior to the start of health care delivery. To mitigate the financial effect on beneficiaries, DHA issued point of service waivers and waived referral requirements for TRICARE Prime enrollees for both regions and provided an enrollment grace period for beneficiaries so they did not have to pay higher copayments for receiving care from non-network providers or care that was not referred by a PCM. DHA s communications to TRICARE beneficiaries were delayed. TRICARE Select complicated and delayed DHA s communications to beneficiaries about TRICARE program changes, which led to customer service problems after the start of health care delivery. DHA engaged in various efforts to inform beneficiaries of the new changes, such as through website updates, blog posts, and direct mailings. However, DHA s after action report acknowledged that on multiple occasions its communication division posted incorrect information on its website because of changing policy language. In addition, DHA planned to send a direct mailing to beneficiaries to inform them of TRICARE program changes in October 2017. However, DHA and DMDC officials told us that this date was delayed due to the additional time needed to prepare for TRICARE Select. As a result, DHA mailed information to beneficiaries starting in December 2017, and some beneficiaries did not receive this mailing until after the start of health care delivery, according to DHA. An organization representing TRICARE beneficiaries told us that some beneficiaries were unaware of the various benefit changes that went into effect on January 1, 2018 because of inadequate communication from DHA. Contractors also told us that the delayed communication to beneficiaries contributed to the high volume of customer service calls they received after the start of health care delivery. DHA officials told us that they took several steps to minimize the risks these delays and the implementation of TRICARE Select created, including the use of various transition oversight meetings to discuss and track related challenges. 
For example, the regional transition management staff participated in a monthly Risk Review Board meeting to discuss concerns related to the schedule of transition activities, such as the impact of TRICARE Select on the time needed for performance testing in critical areas. DHA also discussed transition risks related to TRICARE Select during weekly meetings with contractors throughout the transition. Furthermore, in August 2017, DHA hosted an Enrollment Summit for all stakeholders involved with the transition and implementation of TRICARE Select, where they discussed the schedule of transition steps and the coordination needed to implement the interrelated T-2017 and NDAA 2017 requirements. In addition, DHA kept contractors informed about TRICARE Select as they developed the related policies. Beginning in June 2017, DHA provided contractors with draft guidance on the new benefit to keep them informed of potential changes and obtain their feedback. According to DHA, this also allowed contractors to plan for and begin implementing the program changes they would be required to make once the policies were finalized. DHA issued the final TRICARE Select policies to its contractors in late October 2017, which left contractors with less than 3 months to implement the finalized changes prior to the start of health care delivery on January 1, 2018. According to DHA officials and contractors, contractors ideally would have had the final TRICARE policies at the start of the 9-to-12 month transition period. <3. Challenges Experienced during the T-2017 Transition Process Reflect Weaknesses in DHA s Guidance and Oversight> <3.1. Lack of Specificity and Accuracy in DHA s Guidance Contributed to Disagreements between Contractors, Which DHA Failed to Resolve in a Timely Manner> During the T-2017 transition, outgoing and incoming contractors had disagreements over data transfers. According to DHA officials and contractors, DHA s transition guidance to contractors was not always specific or accurate regarding the amount and type of data to be shared, as well as how these data should be transferred. Furthermore, according to contractors, DHA did not always resolve contractors guidance-related disagreements in a timely manner. Contractors said this contributed to delays in implementing some transition steps and problems after the start of health care delivery. DHA faced challenges related to the following data transfer issues: Referral and authorization data. The contractors in the West region disagreed on how many years of historical referral and authorization data the outgoing contractor would provide the incoming contractor because this was not specified in the guidance, according to the contractors and DHA s contracting officers. While the contractors in the East region mutually agreed on the years of data to transfer, the West region contractors did not. As a result, the incoming West region contractor reached out to DHA for resolution on August 2, 2017 by letter, and continued to discuss it with DHA officials during weekly meetings, as documented in meeting minutes we reviewed. However, DHA did not address the issue until December 12, 2017, at which point DHA rejected the incoming contractor s request for additional historical data because the outgoing contractor would not have enough time to provide it by the start of health care delivery on January 1, 2018. 
The incoming contractor reported that not receiving the anticipated historical referral information contributed to several problems related to referrals after the start of health care delivery. First, it contributed to delays in processing referrals within timeliness standards. Second, the lack of data made it difficult for contractors to help MTFs address customer referral inquiries, which negatively affected the contractor s relationship with MTFs. Finally, the contractor had limited ability to resolve beneficiaries customer service questions related to referrals and had to reissue authorizations for some referrals. Claims data. The incoming and outgoing West region contractors also disagreed on which elements of claims data needed to be transferred. For example, the incoming contractor requested information from the claims notes section, which the outgoing contractor stated contained some proprietary information. According to the incoming contractor, this section typically contains information important for claims processing, such as medical necessity reviews medical record reviews to determine that health care services are appropriate for payment. When the outgoing contractor refused to provide the claims notes, the incoming contractor raised the issue several times to DHA during weekly meetings and through letters, as documented in meeting minutes and correspondence we reviewed. However, DHA determined that the outgoing contractor did not need to provide the information requested, as the non-proprietary information was available in other claims data sections. According to the incoming contractor, without access to more detailed historical information from the claims notes, there were instances in which they were unable to adjust payment determinations for certain claims paid prior to transition, which resulted in provider and beneficiary dissatisfaction. Beneficiary payment information. The incoming contractors faced challenges obtaining payment information for TRICARE beneficiaries who paid their health insurance premiums using credit cards or electronic funds transfers. According to a contracting officer, DHA initially directed the outgoing contractor to transfer beneficiary payment data to the incoming contractor. However, the outgoing contractors told us that they were unable to transfer this data due to banking laws and proprietary information security standards. DHA agreed that the outgoing contractors could not legally transfer this information and resolved the problem by requiring incoming contractors to reach out directly to beneficiaries to obtain the payment information. According to incoming contractor officials, this created additional, unanticipated effort, since they had to contact beneficiaries for this information directly, which diverted transition resources, such as enrollment staff, away from ongoing transition activities. In addition, contractors reported that this put certain TRICARE plan beneficiaries at risk since those who did not resubmit their payment information risked disenrollment and gaps in health care coverage. The contractors and DHA made attempts to notify affected beneficiaries that they needed to contact the contractor to reestablish their automated premium payments. However, approximately 224,000 beneficiaries credit card or electronic funds transfer enrollments for premium payments did not continue after January 1, 2018. To give beneficiaries more time to provide this information, DHA provided a 150-day grace period for premium payments. 
Still, certain beneficiaries were disenrolled from TRICARE plans for failure to establish a recurring form of payment. For example, more than 15,000 beneficiaries were disenrolled in the East region. In its after action report, DHA acknowledged that it did not always provide specific and accurate requirements for data transfers in its transition guidance and that this should be addressed for the next transition. However, the report did not address the difficulties related to resolving contractors questions and disagreements on these issues. For example, DHA officials told us that they followed an informal process for tracking and handling issues raised by contractors during the transition, which was explained in the initial transition specifications meeting in December 2016. However, the outgoing and incoming contractors in the West region expressed concerns about this process, explaining that it was difficult to resolve issues, particularly with the amount of time it took for DHA to provide a response, such as with the referral and authorization disagreement. Federal standards for internal control note that an agency should implement control activities through policies, such as by providing guidance with greater specificity for data transfers. These standards also indicate that agencies should remediate deficiencies in a timely manner, such as the prompt resolution of contractors guidance-related disputes so as to not disrupt the transition schedule. Without more specific guidance and a process that ensures timely dispute resolution, DHA risks disagreements and delays for future contract transitions, which could hinder health care delivery. <3.2. Some of DHA s Requirements for Determining Contractors Readiness for Health Care Delivery Were Not Feasible or Effective> DHA experienced challenges executing its new T-2017 transition oversight methods PRV/PRAV and performance guarantees as planned because of fundamental problems with how some requirements were written and the implementation of TRICARE Select. As a result, some of the requirements were not feasible or effective in assessing contractors readiness for health care delivery. 1. Certain PRV/PRAV requirements were not feasible as originally written or were not aligned with the corresponding performance guarantee, according to DHA officials. For example, one of the PRV requirements in the critical area of medical management focused on testing the contractors web-based systems for exchanging information electronically with the government and providers, but this was not always possible as some information continues to be transferred in hard copy, such as by fax. In addition, the performance guarantee related to provider network development did not align with the corresponding PRV/PRAV requirements. A DHA official told us that aligning the performance guarantee and PRV/PRAV requirements would have resulted in a higher financial penalty for one of the contractors. 2. Contractors noted that some PRV/PRAV requirements were not complete or effective measures of readiness. For example, contractors told us that requirements for claims and referrals did not effectively test the actual volume of administrative tasks that they would have to process after the start of health care delivery. 
According to the West region contractor, one of the referral PRAV tests required contractors to demonstrate that they could process 300 referrals during DHA s onsite review, whereas they typically need to process 9,000 referrals a day after the start of health care delivery. 3. The original PRV/PRAV requirements did not account for TRICARE Select, since the contracts were awarded prior to the enactment of the NDAA 2017. Furthermore, due to the delayed and extended enrollment freeze that ended on December 19, 2017, DHA determined that contractors could not demonstrate a fully operational enrollment system sixty days prior to the start of health care delivery as originally required. Additionally, the contractors had limited access to DHA s information technology systems for testing scenarios that included TRICARE Select. As a result, contractors had to test the majority of the critical areas (claims, enrollment, customer service, and referral management) with information technology systems that did not include TRICARE Select, which was not a true test of their readiness. To address issues with feasibility and TRICARE Select, DHA modified the PRV requirements for four of the seven critical areas during transition. Specifically, DHA modified all of the PRV requirements for enrollment, referral management, and claims processing as well as one PRV requirement for medical management. DHA also waived the corresponding performance guarantees for the three of these critical areas that had such guarantees (enrollment, referral management, and claims processing). As a result, the contractors were not subjected to financial penalties for not meeting the requirements for these critical areas. According to DHA officials, the problems with the PRV/PRAV requirements experienced during the T-2017 transition occurred in part because DHA subject matter experts did not review the requirements prior to the release of the final request for proposal. As a result, officials said that it was not until after the contracts were awarded that subject matter experts determined that some of the requirements could not be performed as written. Nonetheless, DHA officials and contractors agreed that the PRV/PRAV processes are good conceptual measures, and should continue to be used for the next transition with improvements to their feasibility and effectiveness. Having subject matter experts review contractors readiness requirements for feasibility and contract alignment could help ensure that these requirements are appropriate measures of contractor readiness. In addition, DHA s after action report included feedback and lessons learned from officials and contractors on the PRV/PRAV requirements, which DHA could incorporate for future transitions. Federal standards for internal control state that an agency should internally communicate quality information to enable personnel to perform key roles in achieving objectives. By considering lessons learned from this transition and having subject matter experts review the requirements, DHA would be able to better ensure their metrics are appropriate to prepare contractors for health care delivery. <4. DHA Required Contractors to Develop Corrective Action Plans to Address Problems after the Start of Health Care Delivery> DHA reported that the T-2017 contractors had overall better performance meeting contract requirements after the start of health care delivery than the two previous generations of TRICARE contracts. 
Nonetheless, DHA has acknowledged that both T-2017 contractors did experience some problems meeting certain contract requirements. DHA addressed most of these problems through the issuance of corrective action requests, which require the contractors to submit and implement a corrective action plan (see table 2). One exception where DHA did not issue formal corrective action requests was for problems both contractors experienced with processing enrollment backlogs after the start of health care delivery due to the extended TRICARE Select enrollment freeze during transition. Although most of the problems have been resolved, some problems have persisted into the second year of health care delivery, which DHA and contractors reported they are continuing to address. Provider directory accuracy. Both contractors have continued to fall short of the requirement for 95 percent accuracy of their online provider directories problems they also experienced during the transition. As of June 2019, the West region contractor s directory was 76 percent accurate and the East region s was 64 percent accurate, according to DHA officials. Both contractors expressed concern about the methodology used to assess their performance against this requirement and stated that the 95 percent standard is too high. DHA officials acknowledged that the 95 percent standard is high and that the provider directory corrective action requests may remain open indefinitely because of the high standard, though they continue to monitor the corrective action requests. Claims processing timeliness and accuracy. The East region contractor has struggled to meet timeliness and accuracy standards for processing claims. The contract requires contractors to process 98 percent of claims within 30 calendar days of receipt and 100 percent of claims within 90 days with a 98 percent accuracy rate. As of June 2019, the contractor was meeting the 30 day timeliness requirement and was close to meeting the 90 day timeliness requirement (99.99 percent within 90 days). However, the contractor continued to miss the performance standard for claims processing accuracy, according to DHA officials. DHA officials told us that the department had completed multiple on-site reviews and continues to monitor this issue to ensure the contractor improves its ability to meet claims processing standards. Contractor officials acknowledged that they needed to improve their oversight of claims functions and improve training and job aids with their claims processing subcontractor, which was a new partner for their T-2017 contract. <5. Conclusions> A smooth transition of health care delivery between outgoing and incoming managed care support contractors helps ensure continuity of care for TRICARE beneficiaries. In the most recent transition, the need to concurrently implement a new benefit option TRICARE Select presented some unique challenges that delayed the transition timeline and limited DHA s ability to ensure contractors readiness in certain areas. While the implementation of a new benefit option during the T-2017 contract transition was a one-time occurrence, our review highlighted weaknesses in DHA s transition guidance and oversight that could pose challenges to future contract transitions. By improving the specificity of its transition guidance, revising its process for resolving contractors issues, and ensuring review of PRV/PRAV requirements for feasibility and effectiveness, DOD could mitigate these challenges and thus improve future transitions. <6. 
Recommendations for Executive Action> We are making the following three recommendations to DHA:
The Director of DHA should define data sharing requirements with more specificity in its transition guidance for outgoing and incoming contractors, including the time period covered and the types of data that must be shared. (Recommendation 1)
The Director of DHA should revise the process the agency has in place for resolving issues raised between contractors during transition to ensure such issues are resolved within time frames that will not adversely affect the transition schedule. (Recommendation 2)
The Director of DHA should incorporate lessons learned from this transition and ensure that subject matter experts review PRV/PRAV requirements and performance guarantees prior to the issuance of the request for proposal for the next transition. These requirements should be reviewed to ensure their feasibility and effectiveness for assessing contractor readiness. (Recommendation 3)
<7. Agency Comments> We provided a draft of this report to DOD for comment. In its written comments, reproduced in appendix I, DOD generally agreed with our findings and concurred with our recommendations. DOD outlined steps the department will take to improve the next TRICARE contract transition, including revising the TRICARE Operations Manual to better define data sharing requirements, developing a process to ensure that all contractor questions are answered appropriately and in a timely manner, and ensuring subject matter experts are involved in writing the PRV/PRAV requirements. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Defense and appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix II. Appendix I: Comments from the Department of Defense Appendix II: GAO Contact and Staff Acknowledgments <8. GAO Contact> <9. Staff Acknowledgments> In addition to the contact named above, Bonnie Anderson, Assistant Director; Rebecca Abela, Analyst-in-Charge; Cathleen Hamann; Jacquelyn Hamilton; Rianna Jansen; and Vikki Porter made contributions to this report. | Why GAO Did This Study
DOD contracts with private sector companies—referred to as managed care support contractors—to deliver health care services to its TRICARE program beneficiaries through networks of civilian providers. In July 2016, DOD awarded its fourth generation of TRICARE contracts, referred to as T-2017, for management of civilian providers in its two regions (East and West). For new TRICARE contracts, DOD provides a transition period—usually 9 to 12 months—for the incoming and outgoing contractors. During this time, the incoming contractors must take specific steps to prepare for health care delivery.
The John S. McCain National Defense Authorization Act for Fiscal Year 2019 included a provision for GAO to review the T-2017 transition. This report examines (1) how the requirement to implement TRICARE Select affected the transition, (2) challenges DOD experienced executing the T-2017 transition process, and (3) how DOD addressed problems after the start of health care delivery. GAO reviewed and analyzed DOD guidance, contract requirements, and other relevant documentation, and interviewed DOD officials, TRICARE contractors, and other stakeholders.
What GAO Found
The implementation of a required new health care benefit option delayed aspects of the transition to the Department of Defense's (DOD) fourth generation of TRICARE managed care support contracts (T-2017). The National Defense Authorization Act for Fiscal Year 2017 required DOD to implement TRICARE Select, a new preferred provider benefit option. As a result, DOD delayed the start of health care delivery—the date the incoming T-2017 contractors would assume responsibility for managing health care—from October 1, 2017, to January 1, 2018, to align with the mandated implementation date for TRICARE Select. DOD also delayed and lengthened a planned period for the department to make changes to beneficiary information in TRICARE's eligibility system. According to DOD and its contractors, this delay contributed to problems with enrollment processing backlogs that were not addressed until several months after health care delivery began.
DOD experienced challenges during the T-2017 transition that resulted from weaknesses with its transition guidance and oversight. Specifically, DOD's guidance does not always specify the amount and types of data outgoing contractors have to share with incoming contractors. This led to contractor disagreements over data transfers, which DOD did not always resolve in a timely manner. Contractors reported that these issues contributed to problems after health care delivery began for the T-2017 contracts, such as with processing referrals. DHA also determined that some of DHA's oversight requirements, such as for specialty care referrals, were not feasible or effective, which limited some testing of contractors' readiness for health care delivery. This occurred in part because DOD's relevant subject matter experts did not review the requirements.
DOD addressed most of the problems that occurred after health care delivery began by requiring the contractors to develop and implement corrective action plans. DOD and contractors are addressing some problems that have persisted, including problems with the contractors' provider directory accuracy in both regions and claims processing in one region. DOD has an opportunity to avoid similar problems in the future by improving the specificity of its transition guidance and effectiveness of its oversight requirements.
What GAO Recommends
GAO is making three recommendations to improve future contract transitions, including that DOD improve the specificity of its transition guidance and have subject matter experts review oversight requirements. DOD concurred with GAO's recommendations and identified steps the department is taking to address them. |
<1. Background> <1.1. HUD's Working Capital Fund Currently Finances Externally Provided Shared Services> HUD's WCF was established in 2016 to provide a mechanism for the department to centralize and fund federal shared services used across HUD offices and agencies. According to its Committee Charter, the three goals of the WCF are to (1) align incentives for efficient enterprise operations through users paying for goods and services; (2) establish accurate and timely cost estimates for goods and services; and (3) improve planning, increase visibility and transparency, and support the efficient and effective delivery of goods and services. To begin WCF operations in fiscal year 2016, HUD transferred approximately $44 million in funding from the salaries and expenses accounts of OCFO and OCHCO to the newly established WCF. In fiscal year 2017, the WCF began to bill its customers (17 HUD offices that purchase services financed through the fund) directly for their estimated use of services. HUD's WCF is different from other intragovernmental revolving funds that we have previously reviewed in that it does not fund internally provided services at this time. The WCF is currently used as a centralized funding method to pay for the costs of four established shared services agreements or interagency agreements between HUD and three external shared service providers: USDA's National Finance Center (NFC) and Treasury's Administrative Resource Center (ARC) and Shared Services Programs. See table 1 for more information about the agencies providing shared services to HUD. According to WCF Division officials, HUD plans to expand the WCF in the future to finance both internal and additional external goods and services. For example, the WCF's fiscal years 2019 and 2020 budget justifications requested funding to centralize and support activities such as a Data Management Initiative and the Real Estate Assessment Center's (REAC) physical and financial assessment services, respectively. However, to date, HUD has not received budgetary authority to proceed with including either activity in the WCF. Several offices within HUD share responsibility for the management and operations of the WCF, including the WCF Division and business line offices. See figure 1 for information on the WCF's financial operations and entities involved. <2. HUD Does Not Fully Define Roles or Assess Results for Achieving Operational and Cost Efficiencies> <2.1. HUD Defines Most WCF Roles and Responsibilities, Except for Achieving Efficiencies> HUD defines most of the roles and responsibilities for management and oversight of the WCF. According to HUD policy and guidance documents: The WCF Committee provides financial and operational oversight of the WCF, including advising and supporting the WCF's strategic direction and providing annual approval of the WCF financial plan and budget, among other responsibilities. The Committee includes representation from OCFO leadership and all customer offices. The WCF Division within OCFO oversees the financial management of the fund, including managing day-to-day operations and establishing cost accounting for all shared services and customers that use the fund. In addition, the WCF Division supports customers with WCF-specific services, such as billing and service usage reports. OCFO and OCHCO manage the provision of the services financed through the WCF to customer offices.
As the designated business line offices, OCFO and OCHCO oversee the quality and timely delivery of services, including monitoring service provider performance and serving as the liaisons between HUD customer offices and service providers concerning any issues with service quality. WCF Customers place orders with the WCF Division to receive services from the external service providers. In addition, customers reimburse the WCF for their estimated use of those services. However, HUD also performs additional actions to support the efficient and effective delivery of goods and services consistent with the goals of the WCF. Specifically, the WCF Division conducts business process analyses to identify opportunities for efficiencies across the department. Yet, there is no mention in guidance of the roles and responsibilities of the WCF Division, business line offices, or other stakeholders in identifying, monitoring, and implementing the actions recommended because of these analyses. In support of the WCF s goal to support efficient and effective delivery of goods and services, the WCF Division provides quarterly usage reports to customers and business line offices and assists them with monitoring their consumption and associated costs of shared services. WCF Division officials told us they conduct a more detailed review of the data when they find anomalies, such as unusually high volumes of transactions. WCF Division officials told us that they will collaborate with the responsible business line office to conduct a business process analysis, which is used to identify actionable ways to address the cause of the high service volume and costs in specific circumstances. A business process analysis is generally conducted when there is an availability of resources, support from the business line office, and potential for cost savings or operational efficiencies. For example, in 2018, in response to an increase in the volume of two service areas overseen by OCFO help desk calls and commercial purchase order accruals the WCF Division examined data and determined that HUD could reduce its service volume and costs. See text box below. Working Capital Fund (WCF) Business Process Analysis Help desk calls: The WCF Division found that an unnecessarily high number of customer calls to help desks for password resets were contributing to higher costs to the department. For example, more than 20 percent of customer calls to Treasury ARC s financial management help desk were from customers requesting password resets, which can be manually resolved without calling the help desk and incurring a transaction fee. In fiscal year 2019, the cost to HUD per financial management help desk call was about $128. According to the WCF Division Director, the WCF Division presented its findings to OCFO leadership and the WCF Committee, including five recommendations targeted at reducing system password reset call volume and future costs to the department. Commercial purchase order transactions: Commercial purchase order accruals are more costly because they are manually processed. Among other findings, the WCF Division s analysis determined that changing OCFO s current business process for obligations below a certain threshold could reduce the volume of transactions processed. The OCFO official told us that OCFO plans to implement one of the recommendations with a new process for recording those accruals in the first quarter of fiscal year 2020 to reduce the volume of transactions. 
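The figures in the text box above lend themselves to a simple back-of-the-envelope estimate of how much of HUD's help desk charges are tied to password resets. The minimal sketch below is illustrative only: the roughly $128 per-call cost and the more-than-20-percent password-reset share come from the WCF Division's analysis described above, while the annual call volume is a hypothetical assumption.

```python
# Illustrative estimate of help desk costs attributable to password-reset calls.
# The per-call cost (~$128) and the >20 percent password-reset share come from the
# analysis described above; the annual call volume is a hypothetical assumption.

COST_PER_CALL = 128.00          # approximate fiscal year 2019 cost per help desk call
PASSWORD_RESET_SHARE = 0.20     # share of calls that were password resets (lower bound)
annual_calls = 10_000           # hypothetical annual call volume for illustration

reset_calls = annual_calls * PASSWORD_RESET_SHARE
avoidable_cost = reset_calls * COST_PER_CALL

print(f"Estimated password-reset calls: {reset_calls:,.0f}")        # 2,000
print(f"Estimated annual cost of those calls: ${avoidable_cost:,.0f}")  # $256,000
```

Because the annual price is fixed in advance under the interagency agreement, a drop in reset calls would not reduce HUD's bill immediately, but consistently lower volume could feed into lower negotiated estimates over time.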
According to the WCF Division's analyses, the implementation of this accrual-related recommendation could achieve potential annual cost savings of nearly $600,000. However, OCFO has not taken actions to address seven remaining recommendations, which the WCF Division found could produce additional benefits, including potential cost savings of more than $400,000 annually. The OCFO official told us that OCFO plans to examine HUD's fiscal year 2019 service usage to determine the effectiveness of the actions it has already taken to reduce the help desk call and commercial purchase order transaction volume. The estimated help desk call volume and associated cost to HUD for a given year, as agreed upon in HUD's agreement with Treasury ARC, is generally based on the average of the previous 2 years' call volume. As such, changes in usage in 1 year will not necessarily result in lower service costs in the next year, but HUD may realize cost savings over time if usage is consistently lower. While they have a process for identifying opportunities for efficiencies through the business process analyses, WCF Division officials acknowledged that they have not defined and documented the WCF Division's own roles and responsibilities with regard to the analyses. WCF Division officials told us they are focused on other priorities, such as new business line proposals. However, they told us that they are open to defining these roles in the future. There are additional reasons why the WCF Division has not defined and documented roles and responsibilities for these activities. For example, Division officials told us that the Division faces organizational challenges which may limit its own ability to monitor and implement actions. First, as previously discussed, the business line offices are responsible for managing and overseeing the service lines. According to WCF Division officials, the business line offices are primarily responsible for identifying opportunities to achieve efficiencies with service usage, such as through conducting business process analyses. As such, WCF Division officials told us that they can support those offices by monitoring their usage and helping to identify actions to reduce high service volume and costs, but it is ultimately the business line offices' responsibility to implement any changes to their own processes to improve service usage. In addition, given its location within OCFO, Division officials stated that the WCF Division has more leverage with OCFO to work with those officials to identify business process improvements. WCF Division officials told us that they have not collaborated with OCHCO or made recommendations for actions OCHCO could take to promote efficient and effective usage of the service lines it oversees. While the Division hopes to work with OCHCO to perform the same types of analyses, the WCF Division Director told us that making recommendations to OCHCO would be viewed as outside of its area of authority. According to Division officials, it is the role of the WCF Committee to provide oversight over the business line offices and ensure that such actions are implemented. However, officials acknowledged that these roles and responsibilities related to the business process analyses should be more clearly delineated in the WCF Handbook. Key operating principles for effective management of WCFs state that agencies should clearly delineate roles and responsibilities by defining key areas of authority and responsibility.
In addition, federal standards for internal control state that management should establish an organizational structure, assign responsibility, and delegate authority to achieve the agency s objectives. As part of this, management should develop an organizational structure with an understanding of the overall responsibilities and assign those responsibilities to discrete units to enable the organization to operate in an efficient and effective manner. Without clearly defining and documenting these roles and responsibilities, it is unclear who is responsible for identifying, monitoring, and implementing actions through the business process analyses to address inefficiencies with service usage across HUD. As a result, opportunities to more efficiently and effectively deliver goods and services may not be fully and consistently implemented across the department. <2.2. HUD Has Established Performance Metrics but Does Not Assess Results of Business Process Analyses to Understand How They Support Efficient Delivery of Services> HUD established eight total performance metrics which, according to WCF Division officials, are intended to align with one or more of the WCF s three goals (see table 2). In fiscal year 2018, the WCF Division developed a draft performance scorecard to measure and track WCF performance in areas such as data and analysis, financial management, and stakeholder engagement. Division officials told us that they use 2019 data as their performance baseline for the scorecard and will continue to review and further develop the fund s metrics and targets. Part of one of the WCF s goals is to support the efficient delivery of goods and services. Some of the WCF s metrics, such as those targeting timeliness, will help the WCF Division improve its efficiency with respect to managing the fund. For example, usage report timeliness measures the number of weeks it takes for the WCF Division to share usage reports with customers. As previously discussed, the WCF Division conducts other activities, such as its business process analyses, that are also intended to support efficient service delivery. However, HUD does not assess the results of the WCF Division s business process analyses to better understand how they contribute to the WCF s goal. We previously reported that high- performing agencies continuously assess their efforts to improve performance. As part of this, agencies use fact-based understandings of how their activities contribute to accomplishing the mission and broader results. The WCF Division Director told us they have considered metrics to assess broader results of WCF Division activities such as efficiencies, but noted that it is difficult to quantify cost savings attributable to the WCF. This is due, in part, to the fact that HUD s service agreements are firm- fixed price contracts, meaning that a change in the volume of services HUD consumes in a given year will not result in direct cost savings that same year. However, HUD could assess the results of the WCF Division s business process analyses, which identified measurable operational and cost efficiencies that HUD could achieve through implementing the division s recommendations. For example, as previously discussed, in its analysis of help desk calls, the WCF Division identified potential efficiencies that it could track that would contribute to cost savings over time. 
While some of the recommendations may not directly result in cost savings, the Division identified other efficiencies such as process improvements that could improve the quality of services that it is capable of tracking. For example, the WCF Division determined that changes to HUD s processes could improve the accuracy of purchase order accrual estimates. Assessing the results of the WCF Division s business process analyses would help HUD better understand how the Division s efforts contribute to its goal of supporting the efficient delivery of goods and services. Without doing so, HUD risks not fully realizing more than $1 million in total potential annual savings identified by the WCF Division s analyses and freeing up resources that could be realigned for other departmental priorities. In addition to tracking progress towards its own goal, assessing these results would allow HUD to demonstrate how the WCF Division contributes to a 2018 cross-agency priority goal of improving the use, quality, and availability of administrative shared services, as well as the department s related strategic objective to organize and deliver services more efficiently. <2.3. WCF Handbook Includes Current and Complete Information on Policies and Procedures> In response to our review, HUD updated the WCF Handbook the primary reference guide for customers and stakeholders on WCF operations to include more current and complete information on WCF policies and procedures. For example, prior to February 2020, we found that the Handbook was not reconciled with more recently developed draft WCF procedures for contract and budget execution, and invoicing and payments. The WCF Handbook now includes these procedures, which contain detailed information about administrative and funds control responsibilities. For example, the procedures describe the WCF Division Director s cash management responsibilities and designation as the WCF s Funds Control Officer, as well as roles of WCF customer program and budget officers. In addition, during the course of our review, the WCF Division updated its Handbook to include current information on other policies and procedures. For example, the Handbook now reflects the WCF s performance metrics, which we previously discussed were initially developed by the WCF Division in 2018, and changes to other key policies, such as the implementation of the WCF s full cost recovery model in 2019. HUD now has reasonable assurance that its primary reference guide, the WCF Handbook, provides a current and complete understanding of existing WCF policies, consistent with federal standards for internal controls. <3. HUD Has Established a Process to Recover the WCF s Costs and Has Fully Developed and Documented Policies for Its Unexpended Balances> <3.1. HUD Has a Process Designed to Equitably and Transparently Recover the WCF s Estimated Costs> The WCF s price and cost allocation methodology is designed to equitably and transparently recover HUD s annual costs for externally provided shared services financed through the fund. According to HUD officials, the WCF has roughly recovered its costs of financing HUD s annual shared service agreements since its establishment in 2016. To recover its costs, the WCF Division has a process to divide HUD s total cost of shared services among the 17 customer offices based on their estimated service usage. 
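As a rough illustration of this usage-based approach, the sketch below splits a single service line's total cost across customer offices in proportion to their estimated usage. The office names, usage figures, and total cost are hypothetical; the actual WCF billing model applies the cost drivers selected by the WCF Committee, as described below.

```python
# Minimal sketch of usage-based cost allocation, with hypothetical offices and figures.
# The actual WCF billing model applies cost drivers agreed on by the WCF Committee.

def allocate(total_cost: float, estimated_usage: dict) -> dict:
    """Split total_cost across customers in proportion to their estimated usage units."""
    total_units = sum(estimated_usage.values())
    return {office: round(total_cost * units / total_units, 2)
            for office, units in estimated_usage.items()}

# Hypothetical example: a $1.2 million service line and three customer offices.
usage = {"Office A": 500, "Office B": 300, "Office C": 200}  # e.g., employee counts
bills = allocate(1_200_000, usage)
print(bills)  # {'Office A': 600000.0, 'Office B': 360000.0, 'Office C': 240000.0}
```

Allocating in proportion to a usage-based cost driver is what ties each office's bill to its own consumption, which is the incentive effect the WCF's first goal describes.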
For fiscal years 2016 through 2018, the WCF reported a negative accumulated operating result of $400,372, meaning that it reported it recovered nearly all of its costs since its inception. During this time period, the WCF reported years of positive and negative net operating results. Revolving funds such as the HUD WCF are designed to break even over the long term; therefore, year-to-year fluctuations are to be expected. Table 3 provides a detailed breakdown of HUD s reported cost recovery. <3.1.1. Equitable Cost Recovery> According to WCF Division officials, its shared service providers set annual prices for each service line at the outset of the fiscal year using their own pricing methodologies. The service providers then bill HUD in aggregate for an agreed-upon price under annual interagency agreements at firm-fixed prices. As illustrated in figure 2, the WCF Division determines how much each customer office will pay into the WCF for its respective share of HUD s total shared service costs using internally developed cost drivers and customers expected service usage. The cost drivers were selected by the WCF Committee, and are subject to annual review. According to WCF Division officials, the cost drivers are generally similar to those established by the external providers to maintain a clear connection between customer usage and provider charges. In some cases, however, the provider uses a nonunit based cost driver, such as level of effort. In those instances the WCF Division uses cost drivers which vary from the providers. According to HUD documentation, employee count is a common alternative driver used to fairly and equitably distribute costs among customers. In addition to the direct costs of HUD s shared services, HUD officials told us that the WCF received authority to collect reimbursement from HUD customers for the WCF Division s overhead costs in fiscal year 2019. The WCF s overhead covers operational expenses, including: WCF Division staff salaries and benefits, travel, support contracts, supplies and materials, and training. Customers are billed for a percentage of the overhead based on their share of HUD s total shared service costs. This charge is included as an individual line item in customers WCF billing statements. <3.1.2. Transparent Cost Recovery> The WCF Division shares information on pricing and its cost allocation process with customers in several ways. The WCF Handbook includes the billing process, which describes the method for allocating costs among customers. The WCF Division provides customer offices with a billing model which illustrates how costs are allocated across customers by service line. Customer invoices are broken out to show how customers are charged for each service. In addition, the WCF Division provides quarterly usage reports to customers to help them understand their service consumption. According to WCF Division officials, the WCF Division holds meetings and meets with customer offices one-on-one to explain the information provided. Participants in two of our three focus groups said that the WCF cost allocation model increases accountability and is a more equitable and fair distribution of service costs. Participants in all three focus groups said the WCF improved transparency over the old service model because they can see and consider the costs of their shared service usage. For example, one participant told us that, before the WCF, customers did not directly pay for their shared services and, as a result, did not think about costs. <3.2. 
HUD Has Developed Policies for Managing the WCF s Unexpended Balance> The WCF Division has processes to estimate and manage the WCF s unexpended balance, including establishing an operating reserve requirement. Properly managing unexpended balances is essential for ensuring self-sufficiency of the fund. Part of the unobligated balance includes an operating reserve which, according to WCF Division officials, is needed to finance ongoing revolving activities, facilitate payments, cover discrepancies between actual and projected shared service costs, and ensure continuity in case of funding disruptions. Evaluating Unexpended Balances: A Framework for Understanding In 2013, we identified the following questions for agencies and decision makers to consider when evaluating unexpended balances in federal budget accounts. Findings based on these questions can provide managers with important information about financial challenges and opportunities which may exist; in turn, this information may help guide more effective account and program management. In fiscal year 2017, the size of the WCF s unexpended balance increased by 60 percent from $10 million to $16 million, and it was relatively stable from fiscal years 2017 to 2018, as shown in table 4. While the WCF Division does not actually provide the shared services that it finances, nor manage dispute resolution between customers and service providers, it does communicate with customers on fund-related issues such as shared service billing and usage reports. Key operating principles for effective management of working capital funds state that to be flexible to customer input and needs, agencies should communicate with customers regularly and in a timely manner, and develop a process to assess whether customer demands are met. The WCF Division communicates and interacts directly with customers through a variety of channels. For example, WCF Division officials told us that they: organize quarterly WCF Committee meetings, hold meetings to provide information and answer questions about interpreting usage reports, and use an email inbox for communication between Division staff and customers. The WCF Division will also contact customers directly when issues, such as anomalies in shared service usage, are identified. Customers in all three focus groups reported that they turn to the WCF Division when issues or questions about WCF-related issues arise, and are generally satisfied with the Division s communication and responsiveness. <3.3. HUD Has Not Reviewed Shared Services to Ensure Strong Performance and Customer Satisfaction> HUD s business line offices those offices that oversee HUD s agreements for externally provided shared services have mechanisms to communicate with customers and obtain feedback on shared service quality. For example, an official from OCFO the office that oversees financial management, procurement, and travel services told us that OCFO has an email inbox dedicated to questions and concerns regarding services. Officials from OCHCO which oversees human resource (HR)- related services told us that OCHCO holds recurring meetings with customer offices and reviews feedback from government-wide employee surveys. That feedback is then used to inform HUD s annual negotiations for HR-related shared services and improve service delivery. According to the WCF Division Director, the WCF Committee quarterly meetings provide an additional opportunity for customer offices to provide feedback to business line offices on shared services. 
Business line office officials also told us they monitor data on service provider performance and go directly to the provider when discrepancies between the provider s actual performance and agreed-upon performance metrics are identified. However, while participants in all three of our focus groups acknowledged that OCFO and OCHCO are the designated points of contact for day-to- day issues, participants in two of our three focus groups mentioned that they have not been given opportunities to provide feedback on overall shared service quality. In addition, all three customer focus groups expressed some level of dissatisfaction with the quality of HR services, particularly with hiring. For example, participants in at least one of our focus groups identified the following issues with HR services: complications and excessive time consumption associated with resolving inquiries; HR service providers operating without specialized skills and knowledge relevant to HUD offices and programs; and inadequate adaption to spikes in service demand. The WCF Division Director told us that the WCF Committee has not conducted periodic reviews of WCF business lines since HUD transitioned to shared services. According to the WCF Committee Charter, the WCF Committee is responsible for conducting and overseeing periodic reviews of WCF business lines, as appropriate, to ensure effective management, strong performance, and customer satisfaction. In addition, federal standards for internal control call for periodic reviews of policies, procedures, and related control activities for continued relevance and effectiveness in achieving the entity s objectives or addressing related risks. According to the WCF Division Director, at this time the committee does not have plans to conduct such reviews. OCHCO officials told us that they are aware of customer complaints with the quality of HR services. According to officials, HUD has a new Chief Human Capital Officer as of May of 2019 who is taking action to obtain feedback on services by engaging directly with HUD customers through listening sessions. OCHCO officials told us they will introduce action plans in fiscal year 2020 to address recurring issues and customer complaints. In addition to these plans and the feedback OCHCO already obtains, OCHCO officials acknowledged that periodic reviews of the service line, as called for in the committee charter, would be valuable. Without conducting periodic reviews of shared services, HUD may not have a comprehensive understanding of whether customer needs are being met and could be missing out on opportunities to identify potential areas for improvement with the performance and management of services for which it is paying. Given the concerns customers in our focus groups told us about HR service lines, HUD should consider making it the first service line that is subject to a review. <4. Conclusions> WCFs provide agencies with an opportunity to operate more efficiently by consolidating services and creating incentives for customers to exercise cost control. HUD could maximize the potential of these opportunities by ensuring that it has a solid framework in place for managing the WCF before it expands to include additional shared services. During the course of our review, HUD took important steps to ensure that the WCF Handbook the primary reference guide for WCF operations includes up-to-date and complete information on WCF policies and procedures. 
Providing access to current and complete information on the management of the WCF promotes an understanding of who should be held accountable, and helps ensure that funds are effectively managed. HUD also took steps to fully document its processes to effectively manage the operating reserves. This will be particularly important as HUD continues to consider expanding the services provided through the WCF. By documenting its existing operating reserve policies, HUD is better positioned to address potential risk and to identify opportunities to achieve budgetary savings or redirect resources to other priorities. However, there are additional opportunities for improvement. Defining roles and responsibilities promotes a clear understanding of who will be held accountable for specific tasks or duties. Most of HUD s WCF roles and responsibilities are defined in guidance. However, while the WCF Division performs important business process analyses that identity opportunities to improve the efficiency of services, consistent with the goals of the WCF, HUD has not defined roles and responsibilities for the business process analyses, including who is responsible for identifying, monitoring, and implementing actions to achieve the efficiencies. This makes it difficult to hold offices accountable. By clearly defining the responsibilities of the WCF Division, business line offices, and other stakeholders, such as the WCF Committee, HUD could better ensure the business process improvements are being implemented fully and consistently across the department. Moreover, assessing the results of the WCF Division s business process analyses would help HUD better understand how the Division s efforts contribute to its goal of supporting the efficient delivery of goods and services. This would better position HUD to achieve the more than $1 million in potential annual savings identified by the WCF Division s analyses. Finally, opportunities for customers to provide input about services in a timely manner enables agencies to regularly assess whether customer needs are being met. WCF customers have several ways that they can communicate day-to-day concerns about shared services to the business line offices. However, they raised larger concerns during our focus groups, particularly about the quality of the externally provided human resource related services that deserve attention. Periodic assessments of WCF business lines would provide a more comprehensive understanding of customers overall satisfaction and would help HUD identify potential areas for improvement with the services for which they pay. <5. Recommendations for Executive Action> We are making a total of three recommendations to HUD. The Secretary of HUD should define and document roles and responsibilities for identifying opportunities to promote more efficient shared service usage through business process analyses, including defining roles for monitoring and implementing actions recommended because of these analyses. (Recommendation 1) The Secretary of HUD, in conjunction with OCFO, should ensure that the results of the WCF Division s business process analyses are assessed to better understand how these analyses contribute to the WCF s established goal to support the efficient delivery of enterprise goods and services. 
(Recommendation 2) The Secretary of HUD should ensure that the WCF Committee conducts periodic reviews of WCF business lines, as authorized in the WCF Committee Charter, to ensure effective management, strong performance, and customer satisfaction. (Recommendation 3) <6. Agency Comments and Our Evaluation> We provided a draft of this report for comment to the Departments of Agriculture (USDA), Housing and Urban Development (HUD), and the Treasury. In our draft report, we made five recommendations to HUD. HUD provided written comments, which are reproduced in appendix II. HUD officials agreed with four of the recommendations and described some steps they have taken or plan to take to address them. HUD sought additional clarification on one of the recommendations. One draft recommendation was that HUD ensure that existing WCF policies and procedures are current and complete, consolidated in the WCF Handbook, and made easily accessible to customers and stakeholders. HUD officials agreed with this recommendation, and during their review of the draft report, they provided documentation to show that they had updated the WCF Handbook in line with our draft recommendation. Another draft recommendation was that HUD fully document all existing processes related to the management of the WCF's unexpended balances and operating reserve. HUD officials also agreed with this recommendation, and provided documentation to show that they had established written processes in line with our draft recommendation. As such, we revised our final report to include both actions taken by HUD in February 2020 and to remove these two recommendations. In its written comments, HUD sought clarification on recommendation 1. On February 26, 2020, we spoke with HUD officials and clarified that the recommendation is more specifically targeted to the roles and responsibilities for identifying, monitoring, and implementing actions related to the business process analysis and efficiency efforts than the general guidance that HUD identified in its written comments. We added additional clarification to the report where appropriate. In addition to the written comments we received, USDA, HUD, and Treasury provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretaries of USDA, HUD, and Treasury; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have questions about this report, please contact Tranchau (Kris) T. Nguyen at (202) 512-6806 or nguyentt@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Working Capital Fund Customer Offices Represented in Our Focus Groups Appendix II: Comments from the Department of Housing and Urban Development Appendix III: GAO Contact and Staff Acknowledgments <7. GAO Contact> <8. Staff Acknowledgments> In addition to the above contact, Thomas J. McCabe (Assistant Director), Mackenzie D. Verniero (Analyst-in-Charge), Michael Alleyne, Jacqueline Chapin, Andrew J. Howard, Jason Marshall, Steven Putansu, and Alicia White made major contributions to this report. Ronald La Due Lake also contributed to this report. Why GAO Did This Study
Moving to shared services is one way agencies can operate more efficiently. WCFs provide a way to centralize and simplify the funding of shared services. HUD's WCF was established in 2016 to provide HUD offices services on a cost-reimbursable basis. The fund currently finances services from external federal shared service providers—the Departments of the Treasury (Treasury) and Agriculture (USDA).
Congress included a provision for GAO to evaluate HUD's WCF. This report examines the extent to which HUD (1) delineated WCF roles and responsibilities and established performance measures, (2) established a transparent and equitable process to recover WCF costs, and (3) developed processes to obtain WCF customer feedback.
GAO analyzed agency documentation of WCF management and financial and budget data, using its work on effective WCF management and unexpended balances as criteria. GAO interviewed HUD, Treasury, and USDA officials and conducted three focus groups with WCF customer offices.
What GAO Found
The Department of Housing and Urban Development's (HUD) Working Capital Fund (WCF) is a self-sustaining fund that collects fees from HUD customers to pay for services needed across the department. HUD's WCF finances human resource (HR) and financial management related services provided by external federal shared service providers.
HUD defines most roles and responsibilities in its WCF handbook—the primary reference guide for WCF operations—and has established performance metrics. In addition, in response to GAO's review, HUD updated its handbook in February 2020 to include more current and complete information on existing WCF policies and procedures. However:
HUD has not defined who is responsible for identifying and implementing opportunities for achieving efficiencies with service usage, including roles for the business process analyses it periodically conducts.
HUD has not assessed the results of the business process analyses, or how those results could contribute to supporting efficient service delivery.
Clearly defining WCF roles and assessing the results of its analyses can help HUD better manage the WCF and improve its ability to identify, monitor, and potentially realize cost savings and other efficiencies.
GAO found that HUD has a process designed to equitably and transparently recover the WCF's costs for externally provided federal shared services. Prior to February 2020, it had not fully documented existing policies for managing the WCF's unexpended balances and operating reserves. However, HUD has since established its operating reserve policy that reflects all of the ways that the operating reserve can be used, such as to provide pricing stability to customers and ensure continuity of WCF activities in case of funding disruptions. Written documentation of such policies is essential to ensure that funds are managed appropriately and consistently over time.
Finally, the WCF Committee has not conducted periodic reviews of shared services to help ensure effective management, strong performance, and customer satisfaction. Officials from both business line offices—the Office of the Chief Human Capital Officer (OCHCO) and Office of the Chief Financial Officer —stated that they use a variety of mechanisms to obtain customer feedback on services. However, WCF customers in two of three focus groups GAO held said that they have not been given opportunities to provide feedback on the overall quality of services they receive, and some participants shared specific concerns with HR services. Officials from OCHCO—the office that oversees HR services—told GAO they are aware of customer concerns, plan to take additional actions to obtain customer feedback, and acknowledged the need for periodic reviews called for in the WCF Committee Charter.
Until such reviews are conducted to regularly assess customer satisfaction, HUD will likely lack a comprehensive understanding of the extent to which customer needs are being met and could be missing out on opportunities to improve the performance and management of services for which it pays.
What GAO Recommends
GAO is making three recommendations to HUD on its WCF: define roles for achieving efficiencies; assess results of its analyses; and conduct periodic reviews of business lines. HUD agreed with two and sought additional clarification on one. GAO clarified the recommendation based on further discussion with HUD. |
<1. Background> FAA issues aircraft registrations according to eligibility requirements prescribed by federal statute in support of International Civil Aviation Organization requirements that every aircraft engaged in international air navigation must bear its appropriate nationality and registration marks. Specifically, the law requires that the aircraft may not be registered under the laws of a foreign country and must be owned by (1) a citizen of the United States, (2) a foreign citizen lawfully admitted for permanent residence in the United States, (3) a noncitizen corporation that is organized and doing business under the laws of the United States or a state if the aircraft is based and primarily used in the United States, or (4) the U.S. government, District of Columbia government, or the government of a U.S. state, territory, or possession. By law and FAA policy, FAA imposes safety obligations on all owners of registered aircraft. To meet these obligations, an owner must maintain current information about the identity and whereabouts of the operators of an aircraft and location and nature of the aircraft's operation on an ongoing basis. In doing so, the owner is to retain the ability to provide the operator with safety-critical information in a timely manner, and to obtain information responsive to FAA inquiries, including investigations of alleged violations of FAA regulations. Such information supports FAA's ability to carry out its oversight obligations under U.S. and international law. FAA's aircraft registry is an owner registry; it is not intended to include aircraft operator information. Only an aircraft's owner may apply for registration, and a registration is not valid if the interest of the applicant in the aircraft was created by a transaction that was not entered into in good faith, but rather was made to avoid registration requirements. In addition, anyone who knowingly and willfully submits documents to FAA with false, misleading, or fraudulent information could be subject to criminal penalties and revocation of the aircraft registration. <1.1. Aircraft and Aircraft Dealer Registration Requirements> To register an aircraft for a 3-year period, in addition to a $5 application fee, applicants must submit to FAA at least two primary documents: (1) a completed application form and (2) a bill of sale or other evidence of aircraft ownership. A sample aircraft registration submission for an individual owner is shown in figure 1 below. For additional information about required documentation based on registration type, see appendix III. According to FAA officials, in 2018 FAA received approximately 71,000 registration applications. FAA also issues dealer certificates, also known as dealer licenses, in support of aviation commerce. Individuals and legal entities who are U.S. citizens can apply for an aircraft dealer certificate. The dealer certificate is valid for 1 year at a cost of $10 for the initial certificate and $2 for additional certificates. The certificates allow manufacturers and dealers to demonstrate and merchandize aircraft for prospective buyers and to make flight tests without a standard aircraft registration certificate. A dealer may obtain one or more certificates and may use a certificate for any aircraft the dealer owns. Dealer certificates require the applicant to be a U.S.
citizen, identify an established place of business in the United States, provide a mailing and physical address, and substantially engage in manufacturing or selling of aircraft. Among other things, a dealer certificate is generally valid when the dealer, his or her agent or employee, or prospective buyer within the United States operate the aircraft, and only for flights that are required for testing of the aircraft or necessary for, or incident to, the sale of the aircraft. In 2018, there were 9,864 dealer certificates in the aircraft registry, primarily issued to corporations, limited liability companies (LLC), or individuals. <1.2. Aircraft Registration Types and Ownership Structures> FAA s aircraft registration application form identifies eight registration types, including individual, corporation, and government. In 2018, there were 294,221 aircraft registered with FAA across all registration types (see fig. 2). The various registration types are associated with different types of aircraft ownership structures. Individuals who are U.S. citizens or resident aliens can register aircraft in the United States as individual owners or as part of a legal entity, such as a corporation or LLC. Legal entities that meet certain requirements can also register aircraft in the United States. For most types of legal entities, the entity must qualify as a U.S. citizen. For example, a corporation may own and register an aircraft as a U.S. citizen if (1) it is organized under the laws of the United States or a state, District of Columbia, or a territory or possession of the United States; (2) the president and at least two-thirds of the board of directors and other managing officers are citizens of the United States; (3) it is under the actual control of citizens of the United States; and (4) at least 75 percent of the voting interest is owned or controlled by persons that are citizens of the United States. Depending on the type of legal entity, additional requirements may apply, and in some cases additional documentation must be provided to FAA. For some legal entities, the registered owners of aircraft may not be the beneficial owners the persons who ultimately own and control an aircraft. See appendix III for further information about the types of registrations and an additional ownership structure, along with associated documentation requirements beyond the aircraft registration application form, bill of sale, and $5 registration fee. <1.2.1. Use of Voting Trusts to Meet U.S. Citizenship Requirement> If necessary, a corporation may use a voting trust to establish the fourth element of citizenship noted above for the purposes of registering an aircraft. Generally, a voting trust legally transfers the voting control in the corporation from a foreign citizen to a U.S. citizen who holds those interests in trust; however, the exact requirements are governed by the law of the state in which the trust is created. FAA regulations have included requirements around the use of voting trusts since 1980. When promulgating the relevant regulations, FAA explained that use of a voting trust allows a domestic corporation to come within legal compliance by placing the voting interest of the stock of the corporate applicant . . . in the hands of U.S. citizens as voting trustees that the trustees have a valid, independent, and bona fide control of the voting interest. 
As a result, if a voting trust is used by the domestic corporation to meet the fourth element of citizenship, the corporation must submit to FAA a copy of the voting trust agreement, which identifies the voting interests and must be binding upon all parties to the transaction, as well as an affidavit from each voting trustee, which represents that the voting trustee is an independent actor. A sample aircraft registration submission for a corporation using a voting trust is shown in figure 3 below. <1.2.2. Use of Trusts in Aircraft Registrations> Trusts are not a registration type on the FAA aircraft registration application form; however, trusts are a legal structure that may own property such as an aircraft and therefore may be used to register an aircraft. As of June 2019, according to FAA data, there were 11,364 trusts in the aircraft registry. Depending on whether the trustee is an individual or an entity as well as on the specific terms of the trust, the aircraft s owner in the FAA registry may be listed as an individual or as a corporation (see fig. 4). A trust may own and register an aircraft if each of the trustees is a U.S. citizen or resident alien, and 75 percent of the control of the trust must be vested in U.S. citizens or resident aliens. Specifically, each trustee must affirm that trust beneficiaries who are not U.S. citizens or resident aliens do not have more than 25 percent of the aggregate power to influence or limit the exercise of the trustee s authority. However, foreign citizens who are not resident aliens may have more than 25 percent of the beneficial interest in the trust. Trusts for which foreign citizens have a majority of the beneficial interest are generally referred to as noncitizen trusts, even though legal title in the aircraft remains owned by one or more U.S. citizen or resident alien trustees. In a 1979 rulemaking, FAA cited increased activities of foreign investors in aircraft financing as a reason for updating its regulations related to noncitizen trusts. In the ensuing decades, FAA experienced problems obtaining important operational and maintenance information concerning aircraft owned by noncitizen trusts from the owner trustees, prompting FAA in 2011 to begin a review of its policies and practices regarding the registration of such aircraft. After a series of public meetings and receipt of written public comments, FAA issued a notice of policy clarification for noncitizen trusts in 2013. Among other things, the policy clarification confirmed that the FAA does not consider the status of the trustee as the owner of the aircraft under a trust agreement as having any differing effect on its responsibilities for regulatory compliance issues compared to other owners of a U.S.-registered aircraft, and that FAA is not aware of any basis for treating one type of owner such as a trustee under a noncitizen trust differently from any other owner of a civil aircraft on the U.S. registry when considering issues of regulatory compliance. <1.3. Information and Data Collected by Aircraft Registry> FAA collects, stores, and makes publicly available aircraft registration information. FAA collects basic aircraft registration data from the application form, which are available and searchable on FAA s website or in imaged records in portable document format (PDF). FAA data available on its website include aircraft registration number (tail or N- number), serial number, aircraft make and model, owner name, owner s address, and registration status. 
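To make the publicly searchable data elements just listed concrete, the sketch below defines a minimal record type holding those fields. The structure, field names, and example values are hypothetical illustrations and are not drawn from FAA's systems.

```python
# Minimal sketch of a registry record holding the publicly searchable fields noted above.
# Field names and example values are hypothetical, not taken from FAA systems.
from dataclasses import dataclass

@dataclass
class AircraftRegistration:
    n_number: str            # registration (tail) number
    serial_number: str
    make_and_model: str
    owner_name: str
    owner_address: str
    registration_status: str

record = AircraftRegistration(
    n_number="N12345",
    serial_number="0001",
    make_and_model="Example Aircraft 100",
    owner_name="Example Owner LLC",
    owner_address="123 Main St, Anytown, USA",
    registration_status="Valid",
)
print(record.n_number, record.registration_status)
```

Note that a record like this identifies the registered owner of record only; as discussed above, for some ownership structures the registered owner may differ from the beneficial owner.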
According to FAA officials, FAA stores scanned images in two key systems: (1) aircraft records, which includes documents such as registration application forms and bills of sale, and (2) ancillary files, which includes documents such as trust agreements. FAA officials told us that aircraft record files are accessible to the LEAU, FAA LEAP, and aviation safety inspectors who access aircraft records files via a web-based portal. Ancillary files must be accessed on-site at the FAA Aeronautical Center in Oklahoma City, Oklahoma. The LEAU has direct access to the ancillary files and provides aircraft record and ancillary file information to law-enforcement agencies, FAA LEAP, and aviation safety inspectors. Additionally, all records are accessible to the public in FAA s public documents room located at the FAA Aeronautical Center or upon request. Figure 5 shows collection, storage, and availability of FAA s aircraft registration documentation. <1.4. Users of Registry Information> Within FAA s Aviation Safety office, the Flight Standards Service manages the Civil Aviation Registry and is the primary user of aircraft registry information. Registry staff process registrations for U.S. civil aircraft, issue aircraft registration numbers, and record conveyances affecting interest in aircraft. Internal FAA users of registration information include officials from ASH, LEAP, and SEIT, and aviation safety inspectors. FAA LEAP and SEIT coordinate closely with registry officials to request registration information in support of their missions on security and law-enforcement assistance. Apart from FAA, major users of aircraft registry information are organizations serving the aviation industry, international civil aviation agencies, federal safety officials, and law- enforcement agencies (see table 1). <1.5. Selected Legislation and Regulations> In 1964, FAA issued updated aircraft registration regulations and set the aircraft registration fee at $5. In 1988, Congress passed the Federal Aviation Administration Drug Enforcement Assistance Act of 1988 (FAA DEA Act), which declared that it is FAA policy to assist law-enforcement agencies in the enforcement of laws relating to the regulation of controlled substances and, among other things, required FAA to promulgate regulations that would require individuals to provide their driver s license number and entities to provide a tax identification number in their registration application. In 1990, FAA issued a proposed rulemaking that, among other things, required a driver s license number for an individual and a tax identification number for others. In 2005, FAA issued a notice of proposed rulemaking withdrawal, stating that it fulfilled the requirements of the FAA DEA Act, with certain exceptions, through changes to its system and procedures used by the FAA Civil Aviation Registry, such as by providing law-enforcement agencies access to the registry data. With regard to the requirement to provide a driver s license number or tax identification number, FAA determined that the requirement would be detrimental to users of aircraft records and potentially to the aircraft owners, and cause an unnecessary burden on aircraft owners and government, and that this information was not necessary for law-enforcement agencies to carry out their responsibilities. In 2010, to improve the quality of registry data and to provide more accurate information to law-enforcement agencies and other users, FAA started requiring aircraft registration renewal. 
Such renewals must occur every 3 years. In 2018, the FAA Reauthorization Act of 2018 required FAA to modernize the Civil Aviation Registry s information technology (IT) systems. The act also required FAA to initiate a rulemaking to extend the registration duration for noncommercial general aviation aircraft from 3 to 7 years. <1.6. International Standards and Guidance on Beneficial Owners and Misuse of Corporate Structures> Beneficial ownership and legal information can assist law-enforcement and safety authorities by identifying those natural persons who may be responsible for the underlying activity of concern, or who may have relevant information to further an investigation. The Financial Action Task Force (FATF) an international standards-setting body for combating money laundering, financing of terrorism, and other related threats to the integrity of the international financial system has examined how legal and beneficial ownership information can assist law- enforcement and other competent authorities. FATF was established by the group of seven economic summit partners, known as the G7, of which the United States is a member, and the Treasury s Office of Terrorist Financing and Financial Crimes leads the U.S. delegation to FATF. FATF developed a series of 40 recommendations, last updated in 2019, that are recognized as the international standard for combating of money laundering and the financing of terrorism and proliferation of weapons of mass destruction. Specifically, FATF Recommendations 24 and 25 call on member countries to ensure the availability of adequate, accurate, and timely information on the beneficial ownership of corporate vehicles that can be accessed by competent authorities in a timely fashion. To the extent that such information is made available, it may help financial institutions and other organizations to implement the due-diligence requirements on corporate vehicles including to identify the beneficial owner and to identify and manage financial crimes risks, including sanctions requirements. <1.7. Internal Controls and Risk Management> Internal controls help entities fulfill their mission and objectives while safeguarding assets and ensuring proper stewardship of public resources. According to federal internal control standards, managers are responsible for an effective internal control system, which increases the likelihood that an entity will achieve its objectives. Additionally, managers are responsible for proactively managing risks, including fraud risks and misconduct such as waste and abuse, to facilitate the entity s mission and strategic goals by ensuring that taxpayer dollars and government services are being used for their intended purposes. The Fraud Reduction and Data Analytics Act of 2015, enacted in June 2016, required federal agencies to establish financial and administrative controls for managing fraud risks. These requirements are aligned with leading practices outlined in A Framework for Managing Fraud Risks in Federal Programs (Fraud Risk Framework). GAO s Fraud Risk Framework outlines leading practices to prevent, detect, and respond to fraud risks. As depicted by the larger circle for prevention in the sidebar, preventive activities generally offer the most cost-efficient use of resources, since they enable managers to avoid costly and inefficient recovery activities following fraudulent transactions. Therefore, leading practices for strategically managing fraud risks emphasize risk-based preventive activities. <2. 
Limited Verification of Registration Information and Transparency in Aircraft Ownership Hinder FAA's Ability to Prevent Registry Fraud and Abuse> FAA reviews registry applicant information for completeness and compliance with regulations, generally accepting self-certification of eligibility and aircraft ownership, but does not verify this information or collect key information on applicants and aircraft owners, according to our review of the registry process. This limits FAA's ability to prevent fraud and abuse in aircraft registrations, which has enabled aircraft-related criminal, national security, or safety risks, according to our case-study review. Specifically, FAA's review of aircraft registrations and dealer certifications primarily focuses on ensuring that applicants provide required documents and that forms are complete. Additionally, FAA requires limited personally identifiable information (PII), and it generally does not use the PII it does collect to verify applicant information. The registry is further vulnerable to fraud and abuse when applicants register aircraft using opaque ownership structures that limit transparency into beneficial owners of aircraft. FAA's approach has focused on obtaining and recording the required documents, and consequently, FAA has not identified fraud risks, their likelihood and impact, the suitability of controls, and other aspects of a fraud risk assessment that would support fraud prevention activities. As a result, FAA is limited in its ability to ensure registrant eligibility and to prevent fraud and abuse and the associated criminal, national security, and safety risks involving U.S.-registered aircraft. <2.1. Limited Registration Verification and Risk Management Hinder FAA's Ability to Prevent Fraud and Abuse> FAA generally accepts applicants' certification of their eligibility and aircraft ownership and performs limited review of applicant information to identify potential fraud or abuse. Specifically, FAA requires applicants to submit signed documents that attest to the requirements relevant to their registration type, including U.S. citizenship, resident alien status, or eligibility as a noncitizen corporation. Where owners are LLCs or trusts, applicants submit documentation that the entity is organized under U.S. or state laws. Additionally, applicants must submit evidence of aircraft ownership, such as a bill of sale, and attest to their ownership of the aircraft. According to FAA policy, by signing the application form, applicants certify to the truthfulness and accuracy of the information provided and that they understand that knowingly and willfully submitting documents to FAA with false, misleading, or fraudulent information could subject them to criminal penalties and revocation of the aircraft registration. FAA collects applicants' names and addresses, although according to officials, it accepts this information as factually valid and does not attempt to detect intentional fraud at the time of application. FAA does not require or collect other PII, such as the applicant's date of birth or driver's license information for individual applicants, or taxpayer identification numbers and state of incorporation for legal and corporate entities, for identity verification or record keeping. FAA collects some PII in the airmen registry, such as for pilot licensing, but it does not use this information for aircraft registration verification purposes.
Use of PII is a key way federal programs verify the identity and eligibility of potential beneficiaries. FAA's policy is to review documents for acceptability during the initial registration. This includes, for example, checking for internal discrepancies within the documents submitted, ensuring that documents are complete, and confirming that the self-certification is signed. For previously registered aircraft, FAA also reviews prior bill of sale documents for inconsistencies in the chain of ownership. Where owners are corporations with complex ownership structures, such as LLCs that are owned by other LLCs, registry officials may request review by FAA's legal counsel to confirm eligibility. FAA's legal counsel may also review documentation provided by noncitizen corporations as well as trust agreements and related documents for registrations involving noncitizen trusts, statutory trusts, and corporations using voting trusts to meet U.S. citizenship requirements at the time of registration. In these cases, according to FAA officials, FAA legal counsel reviews documentation to ensure that the entity is organized under U.S. or state laws and may periodically perform spot checks by contacting a Secretary of State office to confirm the existence of an entity. However, where the owner is a U.S.-citizen corporation, FAA generally does not request or review articles or certificates of incorporation to ensure the entity is organized under U.S. or state laws. In addition, FAA does not require or review additional documentation for individual, partnership, and government registration types. For these applicants, FAA checks (1) all sections of the application form for completeness, (2) the chain of ownership, and (3) that applicants self-certify their U.S. citizenship. Further, according to FAA officials, when FAA informs applicants of an unfavorable determination, such as after reviewing LLC documentation, applicants are generally provided an opportunity to remedy deficiencies and resubmit their applications. According to FAA officials, FAA applies the same scrutiny to resubmissions as it does to initial applications. In addition, FAA does not review documents for eligibility when individuals certify that there have not been any changes since initial registration. As with aircraft registrations, FAA does not verify dealer identity, check for prior relevant violations, or enforce requirements associated with dealer certificates, such as verifying that dealers are substantially engaged in manufacturing or selling aircraft or are only operating domestically, except when delivering an aircraft to a foreign purchaser. Furthermore, FAA regulations do not prescribe enforcement mechanisms to ensure continued dealer eligibility once approved or at the time of certificate renewal. Law-enforcement and FAA LEAP agents told us that dealer certification is an area in need of greater oversight because dealer certificate applications, like other aircraft registration applications, have been falsified, as discussed below. Additionally, FAA LEAP agents told us that they have identified instances of dealers acting as nominees on behalf of foreign entities, registering aircraft under their U.S. dealer certificate. The use of a nominee is an invalid means to register an aircraft, including for dealers. FAA LEAP agents noted that, in their experience, this practice may have enabled otherwise ineligible foreign entities to meet aircraft registration citizenship requirements.
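To illustrate the distinction between the completeness-focused review described above and verification, the following minimal Python sketch models a review that only confirms required items are present and the self-certification is signed. The form fields and logic are simplified assumptions for illustration, not FAA's actual data model or procedures.

```python
# Minimal sketch (hypothetical fields): a completeness check confirms that
# required items are present, but it says nothing about whether they are true.

REQUIRED_FIELDS = ["registrant_name", "mailing_address", "registrant_type",
                   "evidence_of_ownership", "citizenship_certification_signed"]

def completeness_review(application: dict) -> list:
    """Return a list of deficiencies; an empty list means the form is 'complete'."""
    # Self-certification and ownership evidence are accepted at face value;
    # no identity or eligibility verification happens here.
    return [field for field in REQUIRED_FIELDS if not application.get(field)]

application = {
    "registrant_name": "Example Holdings LLC",
    "mailing_address": "100 Main St, Anytown, OK 73100",
    "registrant_type": "LLC",
    "evidence_of_ownership": "bill_of_sale.pdf",
    "citizenship_certification_signed": True,
}
print(completeness_review(application))  # [] -> "complete," yet nothing was verified
```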
In our case studies and interviews with FAA, we identified examples of fraudulent registrations and potential abuse of the registry that occurred within the context of FAA s current practice of limited verification and review of applicant information. In addition, our analysis of address data and investigation of selected addresses highlights the risks of abuse arising from FAA s approach of not verifying address information. The examples below illustrate some of the risks associated with FAA not verifying: (1) applicant identity, (2) ownership, and (3) address information. Applicants falsified identities and registration self-certification. A 2017 case involving an aircraft registered through a falsified identity illustrates inherent risks of not verifying applicants information and identities, such as through PII or other checks. According to FAA documents, an applicant registered an aircraft as an LLC owner with supporting documents identifying two individual members. In registration documents, the applicant provided the name of a stolen identity for the first LLC member s name and John Doe for the second. FAA accepted the registration information as factually valid and the aircraft remained legitimately registered for about 1 year. A DEA and FAA LEAP investigation of aircraft operating outside the United States eventually discovered the falsification. When FAA LEAP agents contacted the first named individual of the LLC, he affirmed that he was not a member of the LLC, never owned an aircraft, and never executed any documents to register an aircraft in his individual capacity or on behalf of a business entity. FAA LEAP determined that the stolen identity had been used to submit aircraft registration paperwork without the individual s knowledge or consent. Accordingly, FAA revoked the aircraft registration, finding that the registration was invalid because the applicant s interest in the aircraft was created by a transaction that was not entered into in good faith. This revocation was associated with a broader effort by DEA and FAA involving international operations of multiple U.S.-registered aircraft that resulted in aircraft and cocaine seizures, discussed later in this report. Aircraft broker fraudulently registered multiple aircraft for bank loan fraud scheme. A 2013 case involving an aircraft sales broker and dealer who was convicted of making a false statement to FAA in registering aircraft, among other convictions, illustrates risks associated with FAA s reliance on self-certification and limited review of ownership information. In this case, the broker submitted fraudulent registration applications and bills of sale to FAA using forged signatures for over 20 aircraft as part of a multi-million-dollar bank fraud scheme. FAA accepted the broker s self- certification as factually valid. The broker used the registration certificates that FAA had provided as an asset to support a loan application that resulted in a $3 million bank loan for his failing aircraft sales business. The bank uncovered the fraud over a year after the sales broker first submitted to the bank fraudulent aircraft registration documents to execute the bank loan. A subsequent investigation by the Federal Bureau of Investigation revealed the extent of the fraud, namely that the main thrust of the fraud scheme was to pledge 22 aircraft as collateral, which neither the broker nor his company owned, in order to obtain money from the bank. 
As a result of the fraud, some of the rightful owners of the aircraft experienced difficulty reinstating aircraft registrations in their names. For example, one owner told federal investigators that he could not fly his aircraft for 2 years because the registration of his aircraft was in the name of the fraudulent broker. This aircraft broker was also a licensed dealer, who held and renewed a dealer certificate during the time he was perpetrating his illicit scheme submitting fraudulent aircraft registrations to FAA. Noncompliant addresses. We also identified registrations with potentially noncompliant addresses and addresses that did not match USPS postal verification data in our analysis of FAA s publicly available and ancillary registry data files. Our analysis illustrates noncompliance risks associated with FAA s approach of not verifying physical address information as well as safety and security risks associated with FAA s ability to readily identify or contact owners when issues arise. FAA regulations require that owners submit physical address information in their application forms. According to FAA policy, a physical address is needed so that the owner can be located, if necessary, for security or safety reasons. According to FAA officials, FAA will accept the use of a mail drop or a registered agent s address as a mailing address, provided the physical address is included. However, our analysis of 2018 physical and mailing address data shows that over 2,000 (about 1 percent) of addresses list a mail drop location without a physical address, which does not comply with FAA s requirement. We selected seven of these cases for further verification using online and subscription database research, including three for site inspection. In our review of seven selected cases based on categories of addresses and locality, we identified three cases in which a physical address was not provided by registrants. Through a site inspection for one of the selected cases, we were able to confirm a UPS Store location was provided as the mailing address, and no physical address was provided as required by FAA policy. (See sidebar.) For the remaining two cases, the registrants provided the addresses of the registered agents that likely facilitated the application on behalf of the registrants, but no physical addresses were provided. The address of one of these registered agents is the same address we identified in a case study discussed later in this report. In that case, FAA registry officials were not able to get in contact with the owner, who used a registered agent address, after the aircraft had crashed outside the United States. The aircraft was being operated by a foreign government, following its seizure on drug trafficking charges. FAA sent multiple letters to the owner to deregister the aircraft and also when the aircraft registration was expiring, but all were returned as refused by the registered agent. As discussed later, the use of a registered agent address may provide a layer of anonymity in aircraft ownership and pose challenges when FAA or law-enforcement agencies need to contact registered owners. The address of this mail drop location was used in one of the aircraft registration cases we selected for postal address verification, inconsistent with Federal Aviation Administration policy. Additionally, we selected five dealer addresses for further review. We found that in three cases physical addresses were provided on the certificate application forms as required. 
In two remaining cases, we could not draw any conclusions regarding the validity of the physical addresses provided because we could not confirm through our online and subscription databases whether the companies were or were not located at the physical addresses provided to the registry. In addition to fraud and abuse risks posed by limited verification and review of applicant information, the registry faces risks associated with nominee registrations. As noted above, use of a nominee is an invalid means to register an aircraft and involves a person or business acting on behalf of an ineligible owner, as shown in the following example. Fraudulently registered aircraft linked to notorious cartel. A 2016 case involving the use of a nominee to register an aircraft on behalf of an ineligible owner illustrates risks of registration fraud by individuals and entities misrepresenting their aircraft ownership. In this case, law-enforcement officials received information that an aircraft was in the process of being purchased by a foreign national. A U.S. corporation, acting on behalf of entities known to have ties to the Sinaloa Cartel, purchased the aircraft, filed registration documents for it, and represented itself as the aircraft owner. According to court documents, by registering as the aircraft owner, the nominee corporation concealed the otherwise ineligible non-U.S.-citizen ownership of the aircraft by entities with Mexican drug cartel ties. FAA accepted the registration and registered the aircraft in 2014. A law-enforcement agency, which was aware of the scheme, seized the aircraft shortly after final payment was made on it. The law-enforcement investigation into this case also revealed that some of the same entities had previously been involved in similar schemes involving aircraft purchases and registrations associated with drug trafficking. The aircraft was subsequently forfeited to the federal government because its registration was fraudulent and it was purchased with assets derived from wire fraud, money laundering, or other unlawful activities. As part of its IT modernization effort, FAA identified some risks to the aircraft registry, such as financial fraud and terrorist access. FAA officials have also pointed to various FAA LEAP and law-enforcement activities directed at managing these risks, as discussed later in this report. These are reactive measures, however, and the current process, which accepts applicant information at face value, is not designed to identify and prevent fraud and abuse. Preventive activities generally offer the most cost-efficient use of resources because they enable managers to avoid a costly and inefficient pay-and-chase approach. According to federal internal control standards, managers should identify, analyze, and respond to risks. Furthermore, GAO's Fraud Risk Framework emphasizes risk-based preventive activities that are based on a comprehensive, documented risk assessment that identifies risks, assesses them, and develops a strategy to address the analyzed risks, including periodic assessments to evaluate the continuing effectiveness of the risk response. To identify risks, managers should consider the types of risks, including both inherent and residual risks. To assess risks, managers should estimate the significance of a risk by considering the magnitude of impact, likelihood of occurrence, nature, and tolerance of the risk. Managers should then design overall risk responses for the analyzed risks based on the significance of the risk and the defined risk tolerance.
According to FAA officials, FAA has not conducted such an assessment, which would better position it to design and implement risk-based preventive and other controls to manage these risks. As our case studies and illustrative examples demonstrate, this has enabled illicit actors to defraud and abuse the registry, with criminal and national security consequences. In addition, federal internal control standards call for agency management to design control activities to achieve objectives and respond to risks, including designing a variety of transaction controls, which may include verifications, reconciliations, and authorizations. As discussed in the Fraud Risk Framework, a leading practice to effectively prevent instances of potential fraud is for managers to take steps to verify reported information, particularly self-reported data and other key data necessary to determine eligibility. According to FAA officials, the law directs FAA to register an aircraft or issue a dealer certificate that meets eligibility requirements, but does not require FAA to verify the accuracy of the information included in the registration application. Yet without such a review to verify applicants information, FAA cannot be assured it is appropriately determining eligibility for the approximately 71,000 applications the registry processes annually. In turn, this limits FAA s ability to prevent fraud and abuse of the registry from registrants engaged in illicit activities. Aircraft Registration and Dealer Fees Aircraft registration costs $5 and a dealer certificate costs $10 for initial application and $2 for additional certificates. While these fees are attractive to aircraft owners and dealers for economic reasons, we previously determined that the registration fee, in place since 1964, did not cover the cost of reviewing and processing an application. Considering only inflation adjustment, the $5 fee would be $41 in 2019 dollars, which may still be short of what the Federal Aviation Administration (FAA) would need to cover its expenses. FAA has been working to increase registration-related fees since 2013. According to FAA officials, FAA is evaluating regulatory strategy in light of registry information technology modernization and considering other regulatory priorities. According to FAA officials, although they have the authority to collect information for verification purposes, they do not have the tools and resources to do so. With respect to tools, as noted earlier, FAA is making plans to modernize registry operations by implementing streamlined and automated processes where registration information is submitted electronically. According to FAA officials, this is expected to improve online data availability and allow for cross-checking information with other data sources, such as other government databases. With respect to resources, FAA collects a fee that is intended to cover registration processing activities. However, the registration fee has remained the same $5 since 1964, and for many years has not covered FAA costs associated with registration processing. In a 1993 report, we estimated that FAA had forgone about $6.5 million in fees since 1968 because the registration fee did not cover the cost of reviewing and processing an application. Since that time, U.S. taxpayers have subsidized the processing of aircraft registrations and dealer certificates, including legal analysis, and covering the costs of labor, technology, postage, and other direct and indirect expenses. 
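The sidebar's inflation figure can be approximated with a simple calculation. The sketch below assumes rough Consumer Price Index annual averages (the sidebar does not state which index or values it used) and arrives at a figure close to the $41 cited.

```python
# Rough reproduction of the sidebar's inflation adjustment of the $5 fee.
# CPI-U annual averages below are approximate assumptions, not official inputs.
CPI_1964 = 31.0
CPI_2019 = 255.7

fee_1964 = 5.00
fee_in_2019_dollars = fee_1964 * (CPI_2019 / CPI_1964)
print(f"${fee_in_2019_dollars:.2f}")  # about $41, consistent with the sidebar
```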
GAO's federal user fee guide states that fee collections should be sufficient to cover the intended portion of program costs over time, including factors such as inflation. (See sidebar.) Without a fee that keeps pace with inflation and covers the cost of collecting and verifying applicant information for these high-value assets, FAA passes these costs on to U.S. taxpayers and limits the resources available for applicant verification. <2.2. Use of Opaque Ownership Structures in Aircraft Registrations Provides Opportunities for Abuse> Individuals or entities may use opaque ownership structures, a legitimate means to register aircraft, to disguise potential ineligibility or hide illicit activity, according to our illustrative case and intermediary research and interviews with FAA and law-enforcement officials. Opaque ownership structures are legitimate business structures that are widely used by corporations and individuals to facilitate commerce as well as for asset and tax management. However, we identified cases where these structures were used to name legal entities or trusts as the owner of an aircraft to disguise potential ineligibility or provide layers of anonymity in support of illicit activity. The lack of transparency related to these registrations also creates challenges for safety and law-enforcement investigators seeking information about beneficial owners of aircraft to support timely investigations, according to these officials. On the basis of interviews with FAA LEAP, SEIT, and law-enforcement officials, we identified four types of ownership structures that can be used to register an aircraft so that the beneficial owner is not transparent. The four types can be used alone or in combination and include the use of (1) LLCs, (2) shell companies, (3) noncitizen trusts, and (4) U.S. citizen corporations using voting trusts. According to our analysis of the registry's calendar year 2018 data, although these categories are not mutually exclusive, there were 54,549 aircraft registered to LLCs; approximately 2,300 aircraft registered to likely shell companies; 3,300 registered as noncitizen trusts; and 4,200 registered to U.S. citizen corporations using voting trusts. The four types of opaque ownership structures are often established by intermediaries, that is, individuals and entities that facilitate aircraft registration for a fee, such as by establishing legal structures and submitting aircraft registration applications and renewals. (See sidebar.) The use of intermediaries adds a layer of opacity to aircraft registrations. Intermediaries may not know, and most are not required to know, the beneficial owners of aircraft they help to register. However, intermediaries that are banks are required to establish due-diligence procedures for accepting and monitoring their clients as part of banks' anti-money-laundering requirements under the Bank Secrecy Act and its amendments. To obtain beneficial ownership information, banks must identify and verify the identity of any individual who owns 25 percent or more of a legal entity, and an individual who controls the legal entity. Other intermediaries are not required to establish due-diligence procedures for accepting and monitoring their clients. Another approach that adds opacity to aircraft registrations is when applicants use the address of a registered agent, a person or entity authorized to accept service of process or other important legal and tax documents on behalf of a business, as the applicant's address.
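A classification pass over registry records, sketched below, illustrates how registrations could be tallied by ownership-structure indicators such as LLCs, noncitizen trusts, and voting trusts (shell-company indicators would require additional data). The field names, file name, and keyword rules are illustrative assumptions rather than FAA data elements or the methodology behind the 2018 counts.

```python
# Sketch only: tally registrations by opaque-ownership indicators.
# "registrant_type", "noncitizen_trust", and "uses_voting_trust" are
# hypothetical stand-ins for registry data elements.
import csv
from collections import Counter

def structure_flags(record: dict) -> set:
    name = record.get("registrant_name", "").upper()
    flags = set()
    if record.get("registrant_type") == "LLC" or name.endswith(" LLC"):
        flags.add("LLC")
    if "TRUSTEE" in name or record.get("noncitizen_trust") == "Y":
        flags.add("noncitizen trust")
    if record.get("uses_voting_trust") == "Y":
        flags.add("voting trust")
    return flags

counts = Counter()
with open("registry_2018.csv", newline="") as f:   # hypothetical extract
    for record in csv.DictReader(f):
        for flag in structure_flags(record):
            counts[flag] += 1           # categories are not mutually exclusive
print(counts)
```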
Although the use of opaque ownership structures, intermediaries, and registered agents can serve legitimate purposes, they can also be abused in the context of aircraft registration to disguise potential ineligibility or hide illicit activity, according to our analysis of registry data and research. (See app. IV for additional details on the use of opaque ownership structures for aircraft registration.) In our analysis of illustrative cases involving U.S.-registered aircraft and our intermediary research, we identified examples where opaqueness and complexities of aircraft registrations using the ownership structures hindered FAA s ability to prevent abuse of the registry to facilitate other criminal activity. In these examples, intermediaries used mechanisms allowable under current registration requirements to register aircraft, sometimes using multiple ownership structures for the same registration. The first example, based on our review of FAA registration records, illustrates opaqueness of information contained in FAA registration records and includes the use of multiple intermediaries and jurisdictions for an aircraft associated with asset forfeiture. The second example illustrates the use of an intermediary in establishing opaque ownership structures for several aircraft involved in illicit activities, including actions subject to U.S. sanctions. Use of multiple intermediaries and jurisdictions to obscure ownership of aircraft. According to our review of registry documents for this case, an intermediary registered the aircraft in 2010 using a noncitizen trust, providing limited information about the corporate trustor, whose beneficial owner was a high-net-worth foreign national. To register the aircraft, the intermediary a bank providing corporate owner trustee services for aircraft registrations established the noncitizen trust. The trust agreement identified the trustor as a company established in the British Virgin Islands. The trustor s address for correspondence was listed as a post office box in Switzerland, with an email address indicating another trust company. Signatures of two trustors, identified as directors of two other apparent intermediary companies, were illegible and omitted printed names of individuals (see fig. 6). In 2019, the foreign national consented to the forfeiture of this aircraft and other property to DOJ in exchange for the release of certain other frozen assets, with both parties agreeing that the agreement did not constitute a finding of guilt, fault, liability, or wrongdoing. Use of an intermediary to obscure ownership of multiple aircraft. Between 2011 and 2018, an intermediary set up various corporations to facilitate aircraft registrations. The intermediary was an attorney who established the corporations using a registered agent service and also established voting trusts for those corporations to meet U.S. citizenship requirements for the aircraft registrations. Acting as director of these corporations, which have indicators of being shell companies, he registered two aircraft in 2011 and 2013. In 2019, individuals associated with these companies were sanctioned by OFAC as part of a U.S. sanctions program. Specifically, the individuals were designated in connection with paying bribes and involvement in a corruption scheme designed to take advantage of Venezuela s currency exchange practices. 
The intermediary facilitated an aircraft sale about a month prior to the OFAC sanction designation for one aircraft and resigned from his position as director of the other company upon the OFAC announcement. Another aircraft registered by a company with the assistance of this intermediary in 2012 was seized in 2016 and forfeited to the U.S. government as part of the black-market currency exchange scheme. The investigation revealed that the aircraft had been purchased by a U.S. corporation whose sole beneficial owner was a Venezuelan individual using proceeds from a scheme that involved black-market currency exchange. The U.S. government seized the aircraft, alleging it was purchased with assets traceable to money laundering or other illegal activities, and the aircraft was later forfeited. Through our research on intermediaries, we identified another aircraft in which this intermediary had been similarly involved. Registration documents for this aircraft indicate a pattern of activity associated with potential trade-based money laundering. We are making a referral to DHS HSI for further investigation to determine whether individuals associated with the aircraft may have engaged in unlawful activity. Opaque ownership structures pose challenges for law-enforcement investigations. According to the 2018 National Money Laundering Risk Assessment, federal law-enforcement agencies noted that misuse of legal entities posed a significant money laundering risk and that law- enforcement efforts to uncover beneficial owners of companies can be resource-intensive, especially when ownership trails lead outside the United States or involve numerous layers. Law-enforcement officials across multiple agencies and FAA ASH, LEAP, and SEIT officials noted that challenges identifying beneficial owners of aircraft can impede their investigations. According to FAA LEAP agents, it is an ongoing challenge for them to identify beneficial owners. For example, according to FAA LEAP agents, a secretary of a company frequently registers aircraft on the company s behalf and it takes time to determine the identity of the company s beneficial owner. Limited PII in the registry records further impedes law-enforcement efforts. FAA LEAP agents and law-enforcement officials from DHS HSI and DEA described challenges they experience in their investigative work because aircraft registration records do not contain relevant PII, as noted above. For example, according to LEAP agents, they experience daily challenges identifying individuals without PII, particularly those with common names, hyphenated names, and multiple last names. This can be particularly difficult when aircraft are registered through legal structures, and, as DHS HSI officials noted, penetrating through the layers of ownership can take time, slowing down investigations. Further, one DEA official stated that without PII, identifying beneficial owners of aircraft is a challenge in his investigations, and in two cases he was ultimately unable to identify beneficial owners of aircraft. In prior work, we reported on challenges that law-enforcement officials face in their investigations when information is not available, particularly company ownership information such as names of directors or officers. 
As discussed earlier, the FAA DEA Act required FAA to promulgate regulations in consultation with other federal agencies, law-enforcement officials, and representatives of the general aviation industry that would require individuals to provide driver s license and taxpayer identification numbers, but did not require applicants to provide date of birth. FAA s approach, however, did not require applicants to submit driver s license and taxpayer identification numbers. In part to serve the aviation community, which relies on publicly available registration information for the purchase and sale of aircraft, in 2005 FAA determined that adding PII to the records would require restricting access to them and therefore it would be detrimental to users of aircraft records, burdensome on aircraft owners and the government, and not necessary for law enforcement. FAA s IT Modernization The Federal Aviation Administration (FAA) is making plans to modernize its information technology (IT) infrastructure for the registry, including potentially revising relevant regulations. According to FAA, it plans, among other things, to (1) enhance service delivery through process improvement and automation for near real-time access to accurate information; (2) utilize technology to reduce or eliminate mail, fax, or paper-driven service requests, processing, and information delivery; and (3) utilize technology to mine data to support risk-based decision-making, including the use of business intelligence algorithms to eliminate fraud, inaccurate information, and inappropriate use. In a May 2019 report, the Department of Transportation (DOT) Office of Inspector General (OIG) assessed FAA s efforts and plans and determined that the agency has not identified costs, schedule, or an acquisition strategy for IT modernization. DOT OIG recommended, among other things, that FAA develop a timeline for making key decisions to implement IT modernization. See Department of Transportation, Office of Inspector General, FAA Plans To Modernize Its Outdated Civil Aviation Registry Systems, but Key Decisions and Challenges Remain, AV2019052 (May 8, 2019) We recognize the concerns for federal agencies associated with collecting and storing PII as well as the potential burden for applicants to submit such information. However, according to FAA officials, the IT modernization for which FAA is currently in its planning stages is intended to provide FAA the technical capability to adjust the level of access to registry records for various users, restricting PII access for some while allowing broader access to authorized users such as law-enforcement agencies. (See sidebar.) Industry associations and corporate registry users we interviewed expressed concerns about client privacy; however they also indicated openness to future technology improvements of FAA systems. Additionally, as noted earlier, use of PII is a key way federal programs verify the identity and eligibility of potential beneficiaries. Including in the planning stages of IT modernization basic elements of PII such as name, date of birth, physical address, and a driver s or pilot s license could provide FAA with the initial capability to verify applicant information while it develops a risk-based approach informed by its risk assessment. According to federal internal control standards, managers should use quality information to achieve the entity s objectives, including obtaining relevant data from reliable internal and external sources in a timely manner. 
By not collecting and recording PII at the time of application and renewal, FAA has limited assurance of registrants' eligibility and lacks information that could support its oversight and law-enforcement officials' ability to identify relevant persons and entities as part of investigations involving registered aircraft. As with applicant PII, FAA does not require applicants to submit information on beneficial owners of aircraft, that is, individuals and certain entities that own more than 25 percent of the aircraft. In addition to the federal internal control standards calling for managers to use quality information to achieve the entity's objectives, U.S. implementation of international standards for combating money laundering and terrorism financing would need to ensure the availability of adequate, accurate, and timely information on beneficial ownership of high-value assets. By not collecting and recording information on beneficial owners in an electronic format that facilitates data analytics, FAA has limited assurance of registrants' eligibility and lacks information that could support its oversight and law-enforcement officials' ability to identify relevant persons and entities as part of investigations involving registered aircraft. <3. FAA Uses Some Registry Information to Detect Potential Fraud and Abuse, but Registry Data Format Hinders Analysis, and Additional Data Could Support Oversight> <3.1. FAA Makes Some Use of Registry Information to Detect Potential Fraud and Abuse> FAA makes some use of registry information on a case-by-case basis to detect potential fraud and abuse. FAA LEAP agents, in addition to supporting law-enforcement officials by providing access to registry information and specialized guidance related to aviation issues, have conducted registry analyses to identify suspicious and potentially illicit actors. For example, in 2018, FAA LEAP agents and registry officials started a project to flag aircraft registrations for FAA LEAP monitoring when applications are filed by entities or individuals suspected of abusing registry processes, such as multiple shell companies associated with a certain individual. Additionally, one FAA LEAP agent told us that he reviews aircraft registrations filed the previous day and checks them against other information sources to identify suspicious activity, sharing leads identified through this analysis with law-enforcement officials for further investigation. However, this case-by-case review is limited to the data and information FAA currently collects, and it is further hindered by a data format that does not support data analytics for fraud and abuse detection. <3.2. Most Registry Data Are Not in a Format That Facilitates Data Analytics to Support Oversight and Risk Mitigation> FAA collects some information that could support fraud and abuse detection and oversight. As described earlier, FAA collects information on aircraft owners from the registration application, such as name and address, and these data are searchable and electronically analyzable. In April 2018, FAA also began tracking aircraft registrations that use voting trusts to meet U.S. citizenship requirements and trusts with noncitizen trustors, which are opaque ownership structures discussed earlier. This included recording in ancillary files the names of individuals and entities with potentially significant responsibilities for aircraft ownership, such as trustors and voting trustees.
Additionally, according to FAA and some industry officials, the 3-year registration renewal implemented in 2010 has helped improve the quality of registry data that FAA collects. According to FAA officials, in addition to updating owner address information, registration renewal improves data quality because it prompts (1) reports of unreported aircraft sales, (2) new registrations due to ownership changes, and (3) cancelations due to destruction, scrapping, and exports. However, the benefits of registration renewal for data-quality purposes could diminish when the renewal period for noncommercial general aviation aircraft changes from 3 to 7 years, in alignment with new requirements from the FAA Reauthorization Act of 2018. Nevertheless, most of the information that FAA collects in the ancillary files and elsewhere is not recorded in a format that facilitates data analytics, according to our review of FAA's registry system. Specifically, data on individuals and legal entities with potentially significant responsibilities for aircraft ownership, such as trustors, beneficiaries, stockholders, directors, and managers, are stored as imaged PDF records that, due to information-system limitations, do not support data analytics. For example, information on LLC directors and managers, as well as on directors, managers, and stockholders of U.S. citizen corporations that use voting trusts, is stored in imaged records. Our intermediary research identified an aircraft registered to a company whose sole stockholder was subject to U.S. sanctions; however, FAA currently stores data on foreign stockholders of U.S. citizen corporations that use voting trusts in PDF records, preventing it from conducting data analysis to identify such individuals or entities across all registrations. Such data may be useful in identifying entities and individuals subject to U.S. sanctions, as discussed below. Additionally, the current system configuration limits FAA to viewing individual records within the ancillary files. This configuration prevents agency officials from tracking aircraft registration numbers, a common identifier across records, or linking them to the registration data portion of the registry. Further, FAA internally tracks noncitizen trusts and U.S. citizen corporations using voting trusts as one category within registry data, preventing analysis and monitoring of each group of registrations. Lastly, FAA stores records of declarations of international operations, requests that expedite registration processing for aircraft intending to travel outside the United States, as imaged PDF records, so information about the aircraft, the owner's name, departure and destination locations, date of intended travel, and name of the individual submitting the declaration is not in a format that facilitates data analytics. In a 2017 to 2018 analysis of information from declarations of international operations, checked against flight history data, FAA SEIT identified patterns of activity that could be used in support of safety and law-enforcement investigations, as discussed later in this report. Furthermore, due to manual data entry and lack of verification, the registry's postal data may not support effective data analytics and oversight. FAA staff also have the option to override the formatting prompts produced by its address validation software.
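An automated check of the kind this process currently lacks could look like the following sketch, which flags registry addresses that fail a match against a USPS-validated reference file and mailing addresses that appear to be mail drops with no physical address on file. The file layouts, column names, and keyword heuristics are illustrative assumptions, not FAA systems or USPS products.

```python
# Sketch: flag addresses that do not match USPS-validated reference data and
# records whose only address appears to be a mail drop or post office box.
import csv

def normalize(address: str) -> str:
    return " ".join(address.upper().replace(".", "").replace(",", " ").split())

with open("usps_validated_addresses.csv", newline="") as f:   # hypothetical file
    validated = {normalize(row["address"]) for row in csv.DictReader(f)}

MAIL_DROP_HINTS = ("PO BOX", "P O BOX", "PMB")  # crude heuristics for illustration

with open("registry_addresses.csv", newline="") as f:          # hypothetical file
    for row in csv.DictReader(f):
        physical = normalize(row.get("physical_address", ""))
        mailing = normalize(row.get("mailing_address", ""))
        if physical and physical not in validated:
            print(row["n_number"], "physical address does not match USPS data")
        if not physical and any(hint in mailing for hint in MAIL_DROP_HINTS):
            print(row["n_number"], "mail drop or box with no physical address")
```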
Our analysis of 2018 physical and mailing address data found that about 25,000 (9 percent) of all registrant addresses did not match a valid address in the USPS postal verification data, while just over 300 (about 3 percent) of all dealer addresses did not match. Of the seven aircraft registration cases we selected based on address category and locality, we found three registrant addresses that indicated a registry data-quality issue and one that did not. Specifically, our review of the application forms for two registrants showed that a physical address was provided by the registrants but was not recorded in the physical address file. In another case, our review of five registration records for one company showed that FAA revoked the registrations for the five aircraft in 1971 but did not deregister them until 2019, sending deregistration notification letters to the original address, which were returned as undeliverable. We did not find any noncompliance in the last case and, based on our review of aircraft registration documents, determined that a change of address form was provided to FAA following the most recent renewal, but the new address had not yet been updated at the time we received the physical address data. As described earlier, FAA is taking steps to modernize its IT system for the registry because it is outdated. According to a recent DOT OIG report, the system had its last significant upgrade in 2008, is approaching the end of its service life, suffers intermittent outages, and uses an outdated programming language. According to FAA, the future system is expected to streamline and automate processes, allow for the submission of electronic forms, improve online data availability, and implement additional security controls, such as software that can cross-check aircraft registrations with other government databases. FAA issued requests for information in December 2018, to conduct a market survey, and in June 2019, to develop a strategy based on the feedback received. As of November 2019, FAA was making plans to issue a request for proposals but did not identify specific time frames. Registry system modernization presents an opportunity to mitigate data format limitations as FAA designs new systems and controls. According to federal internal control standards, managers should use quality information to achieve the entity's objectives. Managers can do that by designing processes and identifying information requirements needed to achieve objectives and address risks, as well as by processing obtained data into quality information that supports the internal control system. This could include electronically analyzable information from declarations of international operations and information on owners and related individuals and entities with potentially significant responsibilities for aircraft ownership, such as beneficial owners, trustors, trustees, stockholders, directors, and managers. Without analyzable data on significant parties involved in aircraft registrations that can be linked through a common identifier, FAA is limited in its ability to exercise its domestic and international oversight functions and fully support safety and law-enforcement investigations. <3.3. Analyzing Registry Data with Other Data Sets Could Assist in FAA's Detection of Fraud and Abuse Risks> Use of data analytics to detect suspicious activity, anomalies, or patterns is one of the leading practices identified in GAO's Fraud Risk Framework.
However, registry officials primarily use collected data to send automated notifications, such as for aircraft renewals, and current use of data to support oversight is limited, in part hindered by data format limitations described earlier. In addition, registry officials do not analyze various external data sources against registry data to detect patterns of potential fraud or abuse. Risk indicators identified through such analyses may serve as points of inquiry for a broader fraud risk assessment, or for further examination of conduct that may pose criminal, national security, or safety risks. To demonstrate how FAA could identify registrations with indicators of potential fraud or abuse that may enable criminal activity, national security, and safety risks, we analyzed aircraft registry and related data. Specifically, we analyzed aircraft registry data from publicly available and ancillary files, as well as matched registry data against other datasets to identify (1) registrations using registered agent address, (2) registrations using opaque ownership structures, (3) aircraft registration addresses located in countries identified by the Department of State as associated with major illicit drug production and money laundering, (4) OFAC data on individuals and entities subject to U.S. sanctions, and (5) NTSB safety accident and incident reports. Based on this analysis, we found over 17,000 registrations out of approximately 300,000 registrations associated with one or more risk indicators for fraud or abuse. The majority of registrations (over 15,000 or about 90 percent) were associated with one risk indicator, about 2,000 registrations (10 percent) were associated with two risk indicators, and the remaining 140 (1 percent) were associated with three or more risk indicators. The results of our various analyses are described below. Use of registered agent address. As discussed earlier, registered agents are authorized to accept legal documents on behalf of a business. According to FAA officials, FAA will accept the use of a registered agent s address as a mailing address, provided the owner s physical address is also included. Our analysis of registry data identified cases where a registered agent s address was recorded as the registrant s physical address. The registry data do not specifically identify registered agents, but by analyzing address information for calendar year 2018, we identified at least 4,080 cases using registered agents addresses. For one of the registered agents we were able to confirm, we identified 965 associated registrations, including about 300 registrations associated with characteristics of a likely shell company or that were a noncitizen trust or a U.S. citizen corporation using a voting trust. Further, for this one registered agent, we identified about 280 unique business names, associated with about 760 registrations, which used this registered agent s address on aircraft registration applications. Additionally, based on our analysis of postal address data provided by FAA as well as verification of selected cases, we identified and confirmed through site inspections two additional registered agents whose addresses were used in over 100 registrations and over 3,220 registrations, respectively. Use of registered agent addresses, when not accompanied by physical address information, particularly in combination with opaque ownership structures, provides a layer of anonymity to beneficial owners of aircraft and may mask ineligibility or illicit actors. 
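The risk-indicator tallies described above could be produced with a straightforward scoring pass over registry records, as in the sketch below. The indicator rules, the registered-agent address list, and the input layout are illustrative assumptions, not the actual methodology used in our analysis.

```python
# Sketch: count how many risk indicators each registration carries.
import csv
from collections import Counter

KNOWN_AGENT_ADDRESSES = {"1209 EXAMPLE AGENT WAY WILMINGTON DE"}  # hypothetical
LISTED_COUNTRIES = {"EXAMPLELAND"}  # placeholder for State Department lists

def indicators(record: dict) -> set:
    flags = set()
    if record.get("physical_address", "").upper() in KNOWN_AGENT_ADDRESSES:
        flags.add("registered agent address")
    if record.get("opaque_structure") == "Y":     # noncitizen or voting trust, etc.
        flags.add("opaque ownership structure")
    if record.get("country", "").upper() in LISTED_COUNTRIES:
        flags.add("listed country")
    return flags

registrations_by_indicator_count = Counter()
with open("registry_2018.csv", newline="") as f:   # hypothetical extract
    for record in csv.DictReader(f):
        count = len(indicators(record))
        if count:
            registrations_by_indicator_count[count] += 1
print(registrations_by_indicator_count)            # e.g., {1: ..., 2: ..., 3: ...}
```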
Noncitizen trusts and U.S. citizen corporations using voting trusts. We reviewed internal FAA trust data from April 2018 through May 2019 the full range of data available at the time of our review to identify the number of registrants that were noncitizen trusts or were U.S. citizen corporations using a voting trust. In total, we found about 6,800 such registrations contained in the registry data. Of these registrations, two were associated with individuals subject to U.S. sanctions, four were associated with an FAA revocation or suspension, and 16 appeared to be shell companies. FAA regulations allow for registrations using noncitizen trusts and U.S. citizen corporations using voting trusts as valid means of enabling registrants to meet FAA s citizenship requirements. However, as discussed earlier and according to FAA and law-enforcement officials, registrations using noncitizen trusts and U.S. citizen corporations using voting trusts may also mask ineligibility or illicit actors. Consistent with their program-management responsibilities, if FAA registry officials detect aircraft owners, dealers, or intermediaries potentially abusing registration requirements or abusive use of noncitizen or voting trusts, they may send them warnings of denial of future services if observed abusive actions continue. For example, if registry officials suspect that an entity applying for registration is misrepresenting its citizenship, officials could request citizenship information as appropriate for the president, board of directors, and managing officers. If the inquiry results in a determination that the entity does not qualify as a citizen, FAA could deny the application or issue a letter of apparent ineffectiveness for an existing registration. However, according to FAA officials, they take mitigation actions on a case-by-case basis because they do not have a systematic way to analyze data and detect potential fraud and abuse. Department of State country lists associated with major illicit drug production and money laundering. We analyzed registry address data using lists of countries associated with major illicit drug production and money laundering published by the Department of State to identify aircraft registrations associated with such countries. We found 251 registrations with addresses located in countries on the Department of State s list of money laundering jurisdictions that were registered as noncitizen trusts or corporations using voting trusts. Countries identified in the Department of State s lists do not necessarily indicate that a registration is associated with criminal activity. However, the risk of abuse or illicit activity with these registrations may be increased when combined with the use of opaque ownership structures, another risk indicator that, according to FAA and law-enforcement officials, may mask ineligibility or illicit activity. U.S. sanctions. We analyzed and matched registry data to U.S. sanctions data that contain information on blocked assets and sanctioned entities and individuals. Through this data analysis as well as illustrative case and intermediary research, we identified six aircraft owned by entities subject to Venezuela-related U.S. sanctions from 2017 to February 2019. These six aircraft involved registrations established by intermediaries using noncitizen trusts or by U.S. citizen corporations using voting trusts, where aircraft were beneficially owned by noncitizen trustors or stockholders of companies using voting trusts to meet U.S. 
citizenship registration requirements. However, as discussed earlier, trust agreements that contain information on aircraft owners and related individuals and entities with potentially significant responsibilities for aircraft ownership are stored in PDF format that are not electronically analyzable, potentially inhibiting detection of sanctioned individuals or entities. Additionally, our analysis identified limitations in the sharing of sanctions information within FAA, specifically between the aircraft registry and dealer records. These limitations present the risk of registry abuse or illicit activity through sanctions violations while potentially impeding effective coordination between FAA and Treasury s OFAC, which administers U.S. sanctions programs. On the basis of U.S. national security and foreign policy goals, OFAC can impose controls on transactions and block or freeze assets under U.S. jurisdiction, including aircraft. By blocking an asset such as an aircraft, its title remains with the targeted individual or entity; however, these individuals and entities cannot exercise the powers and privileges normally associated with ownership unless authorized by OFAC. Certain activities related to the use of the aircraft may violate the relevant sanctions program. Additionally, OFAC regulations generally prohibit persons and entities within the United States from engaging in transactions involving blocked property including U.S-incorporated companies and aircraft of sanctioned individuals and entities. OFAC-Sanctioned Aircraft One of the U.S.-registered aircraft about which Treasury s Office of Foreign Assets Control (OFAC) notified the Federal Aviation Administration (FAA) was used as part of an illicit narcotics trafficking scheme. According to its 2017 announcement, OFAC designated a high-ranking Venezuela government official as a Specially Designated Narcotics Trafficker pursuant to the Foreign Narcotics Kingpin Designation Act ( Kingpin Act ) for playing a significant role in international narcotics trafficking. According to OFAC, the sanctioned official used a front man who laundered drug proceeds and purchased assets. In addition to a network of international companies, according to OFAC, the front man owned or controlled five U.S. companies, including a limited liability company (LLC) that registered an aircraft with FAA and used a voting trust to meet U.S. citizenship requirements. As part of its action, OFAC identified the U.S.-registered aircraft and the LLC as blocked property. FAA deregistered the aircraft in 2019 after registration renewal documentation submitted to FAA contained numerous errors. However, because the flags placed on sanctioned individuals and entities registration records do not extend to dealer records, FAA issued a dealer certificate to the blocked LLC after the OFAC designation and without coordination with OFAC, according to FAA records and officials. The blocked LLC held the dealer certificate for a year until the certificate expired. (See app. I.) FAA relies on OFAC to share information on sanctions and does not check whether applicants and aircraft are subject to U.S. sanctions or blocking at registration, at renewal, or on a periodic basis. Specifically, FAA does not proactively obtain and use OFAC data to detect (1) blocked aircraft, (2) entities or individuals subject to sanctions, or (3) those with potentially significant responsibilities for aircraft ownership, such as intermediaries registering on behalf of blocked aircraft or entities. 
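OFAC publishes its sanctions list data in downloadable formats, and a screening pass like the one sketched below could compare registrant and related-party names against a locally saved copy. The file names and column names are assumptions, and real screening would also require fuzzy matching, aliases, and the consolidated sanctions list rather than exact name matches.

```python
# Sketch: screen registry party names against a saved extract of OFAC list data.
import csv

def normalize(name: str) -> str:
    return " ".join(name.upper().replace(",", " ").replace(".", " ").split())

with open("ofac_sdn_names.csv", newline="") as f:        # hypothetical extract
    sanctioned = {normalize(row["name"]) for row in csv.DictReader(f)}

with open("registry_parties.csv", newline="") as f:      # registrants, trustors, etc.
    for row in csv.DictReader(f):
        if normalize(row["party_name"]) in sanctioned:
            print(f"Possible sanctions match: {row['party_name']} "
                  f"on registration {row['n_number']}")
```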
Our analysis of the six cases revealed that OFAC officials initiated coordination with FAA, notifying FAA about four of the six cases. According to FAA officials, when FAA finds out about a blocking action from OFAC, it internally flags registry records and will withhold registration processing actions until further communication with OFAC. However, according to FAA officials, FAA does not have the authority to deny or revoke a registration solely because the registration is associated with an individual subject to OFAC sanctions. Accordingly, in those instances, FAA would register the aircraft or the aircraft s registration would remain valid. In addition, although FAA flags sanctioned individuals and entities registry records, the flags do not extend to dealer certificate records. As a result, sanctioned individuals or entities flagged in aircraft registration records are not flagged by FAA for OFAC coordination before receiving a dealer certificate, which could allow operation of blocked aircraft under that certificate. One of the six cases we identified illustrates the criminal and national security risks involved with the use of U.S.-registered aircraft by OFAC-sanctioned individuals and entities, as well as risk-management challenges associated with dealer certificates. (See sidebar.) OFAC efforts to identify aircraft assets associated with sanctioned individuals and entities can encounter obstacles. According to OFAC officials, they search the publicly available FAA registry to identify aircraft for potential blocking. Where OFAC is aware that a sanctioned individual has control of a company, and the company had directly registered an aircraft, a search of the public database can provide relevant information about the aircraft. However, according to OFAC officials, identifying aircraft is more challenging when, for example, a voting trust or a shell company is the registered owner. As a result, OFAC does not have all the information from FAA it might need to support its investigations or enforcement when aircraft associated with sanctioned entities and individuals are not readily identifiable. FAA s IT modernization provides an opportunity for FAA to link flagged records across aircraft registration and dealer systems and to proactively check OFAC sanctions data. OFAC provides information on individuals and entities subject to sanctions on its website that can be checked using online searches or by downloading data, but FAA officials said that checking sanctions designations would require resources and extend processing time for aircraft registrations. However, automated linkages across aircraft registration and dealer systems, and checks of OFAC information, could be achieved through FAA IT modernization, which aims to automate near-real time access to accurate information. An aspect of the modernization project could involve automatically cross-referencing sanctions data, which are dynamic and updated in real time in response to U.S. sanctions programs, with aircraft registration information on owners and related individuals and entities with potentially significant responsibilities for aircraft ownership, such as intermediaries. FAA noted that it does not have authority to deny or revoke a registration based solely on an OFAC sanctions designation. 
Nevertheless, records that are flagged across aircraft registration and dealer systems, as well as awareness of blocked aircraft, sanctioned owners, or intermediaries doing business with sanctioned entities, would help to ensure coordinated actions with OFAC. Such coordination would allow OFAC to seek a delay from FAA of the registration or dealer certification, to alert law- enforcement agencies to determine aircraft location, or to coordinate with its U.S. partner agencies on investigations as appropriate. By not linking flagged records across systems and not proactively checking OFAC sanctions data, FAA and OFAC may be unaware of, and therefore not well-positioned to manage, risks associated with registration of blocked aircraft, sanctioned entities, or intermediaries operating in violation of U.S. sanctions. In addition, FAA misses opportunities to address abuse of the registry for illicit purposes, as well as to provide information to OFAC in support of U.S. efforts to curb drug trafficking, corruption, and other illicit activity. Aircraft primarily operating outside the United States. According to our analysis of NTSB data, we identified 303 cases of U.S.-registered aircraft involved in accidents and incidents outside the United States from calendar years 2010 to 2018. According to FAA officials and our illustrative case research, U.S.-registered aircraft that are primarily based and operated outside the United States may be associated with risk of registration abuse. For example, FAA SEIT and LEAP officials told us that they were aware of numerous cases of aircraft operated primarily outside the United States that were registered to nominee buyers. In addition, they noted international operation of aircraft that were associated with illicit activity and registration violations such as bills of sale identifying foreign owners and cloned registrations. A 2010 case involving a U.S.-registered aircraft seized for alleged drug trafficking by the Panamanian government highlights registration violation risks related to aircraft primarily operating outside the United States. After Panama seized the aircraft, it was turned over to the country s civil aviation authority (CAA), which registered the aircraft in Panama and painted a Panamanian registration number on it. According to FAA officials, the CAA did not seek to deregister the aircraft from the United States, and the new registration was likely invalid under international law. According to FAA officials, the Panamanian CAA operated the aircraft for about 1 year before it crashed. During that time, the aircraft remained registered to the original U.S. owner at a registered agent address. FAA sent multiple letters to the owner to deregister the aircraft and also when the aircraft registration was expiring, but all were returned as refused by the registered agent. Multiple Safety Violations Contributed to the Crash of an Aircraft Primarily Operating Outside the United States Our research identified a case where safety violations contributed to a fatal accident in the Caribbean involving a U.S.-registered aircraft in 2016. A Jamaican aviation training center was operating the aircraft since 2015 and at the time of the crash. The accident investigation by Jamaican authorities identified multiple safety deficiencies as the causes and contributing factors of the crash. 
This included falsified aircraft maintenance records, an engine replacement that did not conform to aircraft model and type, and the use of non-U.S.-certified maintenance programs. (See app. I.) Furthermore, aircraft that are based and primarily operated outside the United States may pose safety risks by not meeting FAA aircraft maintenance standards. Once registered with FAA, aircraft owners must continue to meet eligibility requirements and, along with operators, comply with certain maintenance responsibilities in order to operate, regardless of their location. According to FAA officials, U.S.-registered aircraft operating outside the United States may receive less scrutiny and inspections from other countries CAAs, and nefarious actors prefer a U.S. registration when aircraft are inspected abroad. Additionally, FAA SEIT and LEAP officials told us that they were aware of many U.S.- registered aircraft primarily operating in Latin American countries that may not be following required U.S. maintenance programs, thus posing aviation safety risks. One of our case studies highlights safety risks related to U.S.-registered aircraft that are primarily based and operated outside the United States. (See sidebar.) In another example involving 2011 and 2013 FAA examinations, an FAA maintenance inspector conducted inspections of U.S.-registered helicopters and airplanes located in Panama at the request of the Panama CAA and found multiple violations. According to FAA, the inspection of 16 aircraft initially found that, in addition to registration issues such as flying with a temporary registration, ten aircraft had maintenance issues, including maintenance performed by nonauthorized personnel. At least seven of the issues identified during this inspection resulted in FAA enforcement actions. According to this official, two of the aircraft had significant maintenance concerns and were not airworthy. On the basis of his experience inspecting aircraft domestically, safety violations among the aircraft inspected in Panama were more significant. In combination with other data sources and information, flight history data can provide indications of safety risks associated with aircraft based and primarily operated outside the United States. However, according to registry officials, they do not use these data to identify such risks. To examine specific registrations based on the entire risk-indicator data analysis, we also reviewed randomly selected aircraft registrations across each overall risk-indicator category. Our review of 20 selected registrations generally confirmed the risk-indicator characteristics we had identified for analysis. We did not identify further indicators of risk as part of this review except for the OFAC cases described earlier. Analysis of various data sources, alone or in combination, can help detect patterns of potential fraud or abuse. As demonstrated by our analysis, FAA data, such as postal addresses, information on dealers, noncitizen corporations, intermediaries, and entities with significant responsibilities for aircraft ownership, among others, along with various external databases could be used for such a purpose. 
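One building block for the kind of combined analysis described above is the ability to connect records that refer to the same party across the aircraft registration and dealer systems. The sketch below is a simplified illustration, not a description of FAA's systems: it assumes a hypothetical common party identifier shared by both record types, so that a flag recorded once (for example, after an OFAC notification) surfaces in both the aircraft and dealer records.

```python
# Illustrative only: hypothetical tables with an assumed common party identifier.
import pandas as pd

aircraft = pd.DataFrame({
    "n_number": ["N100AB", "N200CD"],
    "party_id": ["P-001", "P-002"],
})
dealers = pd.DataFrame({
    "dealer_cert": ["D-9001"],
    "party_id": ["P-001"],          # the same party also holds a dealer certificate
})
# Flags are recorded once, at the party level.
party_flags = pd.DataFrame({"party_id": ["P-001"], "flag": ["OFAC_BLOCKED"]})

# Joining on the common identifier propagates the flag to every record the party holds.
flagged_aircraft = aircraft.merge(party_flags, on="party_id", how="inner")
flagged_dealers = dealers.merge(party_flags, on="party_id", how="inner")
print(flagged_aircraft)   # N100AB is associated with a flagged party
print(flagged_dealers)    # D-9001 would be held for coordination before any action
```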
FAA also has access to flight history data, currently used on an ad hoc basis, but which could also serve (1) routine oversight functions, such as verifying that aircraft are based and primarily operated in the United States for certain registrant types, or (2) detection of patterns of activity associated with declarations of international operations that could be used in support of safety and law-enforcement investigations. In addition, our analysis of registry data against external data sources, such as OFAC sanctions lists, illustrates the utility of such analyses for detecting registrant risks. FAA currently does not use internal or external information for such analysis or to assist in safety or law-enforcement oversight responsibilities across multiple aircraft, registrations, or dealer certificates. This is due, in part, to data limitations, but also because, according to registry officials, their role is primarily focused on recording of aircraft registration information. Aircraft registration data made available through IT modernization, as well as other currently available data, could support ongoing monitoring and risk-based oversight by FAA. Federal internal control standards call on managers to establish and operate activities to monitor the internal control system and evaluate results. By not analyzing available internal and external data, FAA is missing opportunities to identify registrant risks, conduct oversight, and safeguard the registry from potential fraud and abuse. Furthermore, while FAA registry officials may take risk-based mitigation actions, such as sending warning letters or denying services if abusive actions are detected, the registry generally does not take such actions. According to FAA officials, the registry focuses on recording information, while it is currently the responsibility of other FAA organizations, such as ASH, LEAP, and SEIT, to detect fraud. However, federal internal control standards require managers to respond to risks by remediating internal control deficiencies on a timely basis. Without timely and measured risk-based mitigation actions, the aircraft registry continues to be vulnerable to fraud and abuse. In this context, as the key program office, the aircraft registry is best positioned to manage fraud and abuse risks by preventing, detecting, and responding to risks in close coordination with stakeholder organizations such as ASH, LEAP, and SEIT.

<4. FAA and Law-Enforcement Agencies Have Mechanisms to Respond to Registration Fraud and Abuse Risks, but Collaboration Is Not Formalized>

<4.1. FAA Can Take Administrative Actions, and Law-Enforcement Agencies Can Seize Aircraft>

FAA and law-enforcement agencies have a variety of enforcement mechanisms to respond to instances of suspected fraud and abuse in aircraft registrations. For example, FAA can use administrative actions, such as aircraft registration suspensions and revocations, and law-enforcement agencies can use civil actions and criminal prosecutions to seize aircraft, among other enforcement actions. Law-enforcement agencies such as DEA, DHS HSI, and DOT OIG have authority to investigate criminal activity and take actions to seize aircraft when warranted.

<4.2. FAA and Law-Enforcement Agencies Have Established a Task Force, but Coordination Remains Informal>

Recognizing the need for better dialogue and coordination, in August 2017 FAA LEAP agents launched the Aircraft Registry Task Force to discuss ideas and solutions for dealing with potentially fraudulent aircraft registrations and to improve FAA processes to assist the law-enforcement community. The first meeting, in August 2017, included FAA aircraft registry officials, legal counsel, ASH, LEAP, and SEIT, as well as participants from other federal agencies, including DEA and DHS HSI. This meeting was the first time these various units came together to discuss aircraft registry vulnerabilities. FAA and law-enforcement officials presented cases associated with fraudulent aircraft registrations, highlighting safety implications. Participants also discussed issues related to deregistration and aircraft seizures, among other topics. According to aircraft registry officials and FAA LEAP agents, the task force meeting discussions resulted in several changes, including revisions to the signature block in the aircraft application form, addition of a separate registration type for LLCs for tracking purposes, and sharing of declarations of international operations with FAA LEAP and SEIT. Specifically, regarding modifications to the signature block, in 2018 FAA added a statement requiring applicants to certify that the information they provide is true and accurate while also identifying specific penalties for false information. The subsequent task force meeting, held in October 2018, included only FAA participants. Aircraft registry officials, legal counsel, ASH, LEAP, and SEIT, among others, discussed follow-up from the previous meeting and covered topics associated with ongoing concerns such as falsification of registration documents, incomplete applications, and proof of citizenship. According to FAA officials, the task force has not met since the October 2018 meeting. FAA and DEA have also established informal mechanisms to address registration violations and safety risks associated with aircraft based and operated outside the United States. For example, in 2016 and 2017, DEA and FAA LEAP and SEIT officials conducted a joint initiative at the request of the government of Guatemala to examine multiple U.S.-registered aircraft located in Guatemala. According to FAA, a total of 81 U.S.-registered aircraft were inspected through this effort as of April 2017. During the inspections, FAA identified more than 25 registration violations and numerous safety violations, resulting in approximately 31 condition notices. Additionally, authorities seized eight aircraft with an approximate value of $2.5 million as well as over 400 kilograms of cocaine. According to FAA, registration violations identified during this effort included inconsistencies with trust agreements and associated documentation, violations involving U.S. corporations having individuals listed as president who do not meet U.S. citizenship requirements, and documentation allowing non-U.S. citizens to control U.S.-citizen entities that had registered aircraft. Since then, according to FAA officials, DEA and FAA have built on the results of this initiative by conducting similar visits to other countries in Latin America and the Caribbean.
The visits typically include training for local CAA officials on authorities to inspect U.S.-registered aircraft, ramp checks of U.S.-registered aircraft located in these countries, and maintenance inspections. FAA and DHS HSI also use informal collaboration mechanisms to support law-enforcement investigations. According to DHS HSI officials, they have a robust relationship with an FAA LEAP agent with whom they communicate on a daily basis. This agent has helped to investigate aircraft sale transactions and other cases and has also provided leads to DHS HSI officials.

Declarations of International Operations
The Convention on International Civil Aviation requires registration certificates for international operations. The Federal Aviation Administration's typical registration process takes 16 to 20 working days, during which applicants may fly domestically using a temporary registration. Registry officials have put in place declarations of international operations for applicants to notify the registry of the intent to operate internationally, thereby expediting the typical processing time to the same day or the next day.

FAA registry officials have been sharing expedited registration filings (declarations of international operations, which expedite registration processing for aircraft intending to travel internationally) with FAA LEAP and SEIT officials for monitoring and analysis purposes. (See sidebar.) However, this informal collaboration does not extend to FAA sharing of declarations of international operations with DHS HSI or DEA. According to law-enforcement officials, declarations of international operations present challenges. Specifically, DEA officials noted that expedited registrations limit the amount of time law enforcement can effectively query appropriate sources of information to determine that payment for the aircraft is not derived from illicit proceeds. In addition, according to DEA officials, expedited registrations shorten the amount of time investigators have to determine whether the aircraft is being used to facilitate drug crimes and to identify beneficial owners of the aircraft, which, as discussed earlier in this report, can be a time-consuming process. The lack of notification about declarations of international operations further compounds these challenges. DHS HSI officials explained that not receiving information from expedited registrations has posed challenges and may have allowed some illicit actors to quickly move or export aircraft out of the country, including as part of trade-based money laundering or trafficking schemes. According to these officials, aircraft can be purchased with illicit proceeds to launder money as well as used to smuggle illicit cargo such as persons, cash, cigarettes, and liquor. DHS HSI officials stated that, in one case, which resulted in aircraft seizure, the aircraft potentially could have been seized 2 years earlier if they had received the declaration of international operations at the time of aircraft registration. Additionally, according to DHS HSI officials, information from declarations of international operations could help to generate leads, including information on planned travel to countries that are associated with illicit drug trafficking or money laundering. For example, they noted that in investigations of trade-based money laundering schemes, information from declarations of international operations can be used to check against shipping export declarations and trade data from other countries.
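As a purely hypothetical illustration of the lead generation these officials described, the sketch below joins a table of declarations of international operations to a table of export filings by registration number and date proximity. Both tables, every column, and the 30-day window are invented for illustration and do not represent any actual FAA, DHS, or trade data set.

```python
# Hypothetical data only; illustrates matching declarations to export filings.
import pandas as pd

declarations = pd.DataFrame({
    "n_number": ["N300EF"],
    "declared_date": pd.to_datetime(["2018-03-01"]),
    "stated_destination": ["Country X"],
})
export_filings = pd.DataFrame({
    "n_number": ["N300EF"],
    "export_date": pd.to_datetime(["2018-03-10"]),
    "declared_value": [1_500_000],
})

# Candidate leads: an export filing within 30 days of a declared international operation.
leads = declarations.merge(export_filings, on="n_number")
leads = leads[(leads["export_date"] - leads["declared_date"]).dt.days.between(0, 30)]
print(leads)
```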
Separately, in our analysis of aircraft registered to entities subject to U.S. sanctions described earlier, we found that five of the six aircraft registrations received expedited processing. Although not a precise indicator of actual travel, information from declarations of international operations could provide timely information about potential planned movement of aircraft in time-sensitive situations as well as bring awareness for longer-term investigative purposes. Expedited registrations give applicants a more immediate opportunity to move aircraft out of the country, while the associated declarations provide information on applicants' intention to do so, which can inform monitoring and law-enforcement action. However, FAA does not provide declarations of international operations to DHS HSI or DEA. Without declarations of international operations, these law-enforcement entities may be missing opportunities to generate leads that would ultimately support FAA's interests in addressing abuse of the registry for illicit purposes and support detection and response to potential trade-based money laundering and other cross-border schemes. Our prior work on interagency collaboration identified practices that can help enhance and sustain collaboration among federal agencies, including written agreements and use of liaison positions. Agencies that articulate their agreements in formal documents, such as memorandums of understanding, can strengthen their commitment to working collaboratively. Additionally, articulating a common outcome and roles and responsibilities in a written document can facilitate coordination. Similarly, the use of liaison positions, when an employee of one organization is assigned to work primarily or exclusively with another agency, can enhance coordination. For example, by providing direct access to agency information, liaison positions have helped to facilitate sharing of information and coordination of missions and activities. As relatively new and unofficial collaboration mechanisms, the Aircraft Registry Task Force and other efforts have not been fully utilized and have not leveraged enhanced collaboration practices such as written agreements or liaison positions at law-enforcement agencies. While FAA LEAP agents coordinate with law-enforcement officials, these are not liaison positions as suggested by leading practices for collaboration, wherein an employee is assigned to or works primarily with another agency and has direct access to agency staff and information, and arrangements are formally outlined, such as in memorandums of understanding. Rather, FAA LEAP agents are assigned to FAA and do not have formal agreements for collaboration. The Aircraft Registry Task Force holds potential for FAA to work collaboratively internally and externally by formalizing various informal coordination efforts, such as international inspections by FAA and DEA and sharing of declarations of international operations with law-enforcement agencies, to bring together the varied perspectives, functions, and skill sets necessary to mitigate aircraft registry vulnerabilities going forward. Leading practices in risk management also call for involvement of relevant stakeholders as part of risk-assessment and risk-mitigation activities. In the FAA context, the aircraft registry is best positioned to develop preventive measures and controls in coordination with FAA LEAP, SEIT, and law-enforcement stakeholders. <5.
Conclusions> FAA s aircraft registry, the largest in the world, is preferred by aircraft owners for safety, economic, and financial reasons. Accordingly, the integrity of owner information for registry users is important to support these benefits. It is also important to ensure the registry is not exploited for fraudulent purposes or to support illicit activity involving U.S.- registered aircraft. FAA s current process does not include strong controls to prevent ineligible registrants and potential fraud and abuse, instead allowing registrants to self-certify their information with limited independent review. A comprehensive registry risk assessment could help to manage risks of fraud and abuse, which enable criminal, national security, and other risks. Such a risk assessment, which considers inherent and residual risks as well as determination of likelihood, impact, and risk tolerance, would support the development of a risk-based strategy and approach to guide registry actions in preventing, detecting, and responding to fraud and abuse risks. To support its eligibility determinations, FAA currently obtains limited PII from individual registrants, aircraft dealers, or those entities (e.g., trustors) who might have a significant role in aircraft registrations. Additionally, the registry lacks information about beneficial owners of aircraft. Further, the registry generally accepts self-certification of eligibility and aircraft ownership and does not verify the information it receives. Such an approach may be appropriate for the majority of law-abiding registrants, but it leaves the registry vulnerable to exploitation by those who wish to circumvent eligibility requirements, disregard safety standards, or pursue criminal activities. Limited transparency into who beneficially owns aircraft has also precluded FAA from maximizing its collaboration with partners in the law-enforcement and safety communities to support detection and investigation of criminal, national security, and safety risks associated with registered aircraft. U.S. taxpayers have subsidized the costs of aircraft registration for several decades. Without a change to aircraft registration and dealer fees, the costs of FAA labor, technology, coordination, and risk-based oversight for these high-value assets would continue to be borne by the public and limit resources available for applicant verification. The absence of more and electronically analyzable information has substantially hindered FAA s ability to use the registry as a tool to detect potential fraud and abuse and to oversee registered aircraft. As part of its ongoing IT modernization, FAA has an opportunity to collect such data and record them in a format that facilitates data analytics. These data could help FAA detect potential fraud and abuse and conduct preventive, risk-based monitoring and oversight of aircraft registrations as well as dealer certifications to ensure the integrity of the registry. They would also support a risk-based approach for verifying information provided by some registry applicants as well as for taking corrective actions. Additional information would position FAA to more broadly prevent, detect, and respond to risks associated with the aircraft registry and to facilitate data analytics by FAA and stakeholders for oversight, safety, and law- enforcement purposes. 
For example, FAA officials could analyze data patterns for potential fraud and abuse, as well as share data across dealer and aircraft records and check OFAC sanctions data to ensure that they coordinate about owners with sanctions designations, as appropriate. Lastly, FAA lacks formal agreements with other federal entities to respond to risks. Specifically, FAA can provide additional support to law-enforcement and safety investigations by sharing quality information about individuals and entities with potentially significant responsibilities in aircraft registrations, as well as other registration information, such as declarations of international operations. FAA's Aircraft Registry Task Force positions FAA to work collaboratively internally among officials from the aircraft registry, legal counsel, ASH, LEAP, and SEIT and with external law-enforcement agencies to share information and to take advantage of collaborative mechanisms to formalize coordination.

<6. Recommendations for Executive Action>

We are making the following 15 recommendations to FAA:

The Administrator of FAA should conduct and document a risk assessment that considers inherent and residual fraud and abuse risks that may enable criminal, national security, or safety risks. (Recommendation 1)

The Administrator of FAA should determine impact, likelihood, and risk tolerance as part of a risk assessment. (Recommendation 2)

The Administrator of FAA should develop a strategy that outlines specific actions to address analyzed risks, including periodic assessments to evaluate continuing effectiveness of the risk response. (Recommendation 3)

The Administrator of FAA should collect and record information on individual registrants, initially including name, address, date of birth, and driver's license or pilot's license, or both, with subsequent PII elements informed by the risk assessment, once completed. (Recommendation 4)

The Administrator of FAA should collect and record information on legal entities not traded publicly, covering each individual and entity that owns more than 25 percent of the aircraft: for individuals, name, date of birth, physical address, and driver's license or pilot's license, or both; and for entities, name, physical address, state of residence, and taxpayer identification number. (Recommendation 5)

The Administrator of FAA should verify aircraft registration applicants' and dealers' eligibility and information. (Recommendation 6)

The Administrator of FAA should increase aircraft registration and dealer fees to ensure the fees are sufficient to cover the costs of FAA efforts to collect and verify applicant information while keeping pace with inflation. (Recommendation 7)

The Administrator of FAA should ensure, as part of aircraft registry IT modernization, that information currently collected in ancillary files or in PDF format on (1) owners and related individuals and entities with potentially significant responsibilities for aircraft ownership (e.g., beneficial owners, trustors, trustees, beneficiaries, stockholders, directors, and managers) and (2) declarations of international operations is recorded in an electronic format that facilitates data analytics by FAA and its stakeholders. (Recommendation 8)

The Administrator of FAA should link information on owners and related individuals and entities with significant responsibilities for aircraft ownership through a common identifier. (Recommendation 9)

The Administrator of FAA should, as part of IT modernization, develop an approach to check OFAC sanctions data on owners and related individuals and entities with potentially significant responsibilities for aircraft ownership for coordination with OFAC and to flag sanctioned individuals and entities across aircraft registration and dealer systems. (Recommendation 10)

The Administrator of FAA should use data collected as part of IT modernization as well as current data sources to identify and analyze patterns of activity indicative of fraud or abuse, based on information from declarations of international operations, postal addresses, sanctions listings, and other sources, and information on dealers, noncitizen corporations, and individuals and entities with significant responsibilities for aircraft ownership. (Recommendation 11)

The Administrator of FAA should develop and implement risk-based mitigation actions to address potential fraud and abuse identified through data analyses. (Recommendation 12)

The Administrator of FAA should develop mechanisms, including regulations if necessary, for dealer suspension and revocation. (Recommendation 13)

The Administrator of FAA, in coordination with relevant law-enforcement agencies, should enhance coordination within the Aircraft Registry Task Force through collaborative mechanisms such as written agreements and use of liaison positions. (Recommendation 14)

The Administrator of FAA, in coordination with relevant law-enforcement agencies, should develop a mechanism to provide declarations of international operations for law-enforcement purposes. (Recommendation 15)

<7. Agency Comments>

We provided a draft of this product to DOT, DOJ, DHS, and Treasury for review and comment. DOT provided written comments, which are reproduced in appendix V. DOT concurred with our recommendations. Specifically, DOT stated that it supports other government agencies in addressing illegal activities and enforcing U.S. sanctions and agreed that enhancements to the accuracy of registry information would expedite enforcement actions and reduce the risk of ineligible aircraft registrations. FAA and DHS provided technical comments, which we incorporated as appropriate. DOJ and Treasury did not have any comments. We are sending copies of this report to the appropriate congressional committees, the Secretary of Transportation, the Attorney General, the Secretary of Homeland Security, the Secretary of the Treasury, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at 202-512-6722 or shear@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI.

Appendix I: Case Studies

We conducted illustrative case research related to U.S.-registered aircraft generally covering the 2010 to 2018 period, including over 1,200 publications and reports from cases investigated by law-enforcement agencies, news articles, and agency and safety investigation reports. We selected six case studies for in-depth review across three categories of risk enabled by aircraft registration fraud and abuse: criminal activity, national security, and safety (see app. II for additional details on the selection methodology).
All selected cases are intended for the purpose of illustrating fraud and abuse vulnerabilities associated with the aircraft registration process. These cases may not represent all existing vulnerabilities and are not generalizable to the Federal Aviation Administration (FAA) registry population as a whole. From 2010 to 2011, an aircraft sales broker obtained multiple registration certificates from FAA for aircraft he did not rightfully own or possess. According to court records associated with this case, the broker submitted to FAA fraudulent registration applications and bills of sale with forged signatures for 22 aircraft as part of a multi-million-dollar bank fraud scheme. He used the registration documents that FAA provided as an asset to support a loan application that ultimately resulted in an approximately $3 million bank loan used to float his failing aircraft-sales business. The bank uncovered the fraud over a year after the sales broker first submitted the fraudulent aircraft registration documents to execute the loan. A subsequent investigation by the Federal Bureau of Investigation revealed the extent of the fraud, namely that the main thrust of the fraud scheme was to pledge as collateral 22 aircraft that neither the broker nor his company owned, in order to obtain money from the bank. Court records reveal that law-enforcement officials interviewed some of the rightful owners of the aircraft, who stated that the aircraft were always in their possession and they had never sold the aircraft to the fraudulent broker. These owners identified the signatures on the bills of sale used to register the aircraft as forged. In 2013, the broker pled guilty to bank fraud, making a false statement to a federally insured financial institution, and making a false statement to FAA in the registration of aircraft. As a result of the fraud, some of the rightful owners of the aircraft experienced difficulty in reinstating the aircraft registrations in their name. For example, one owner told federal investigators that he could not fly his aircraft for 2 years because the registration of his aircraft was in the name of the fraudulent broker. Another owner stated that he incurred thousands of dollars in legal fees to reinstate the registration of the aircraft in his name. Additionally, the court ordered the broker to pay approximately $2.4 million in restitution to the bank. In 2014, a U.S.-registered aircraft was seized by and subsequently forfeited to the U.S. government in 2016 because the aircraft had been fraudulently registered and it was purchased with assets derived from wire fraud, money laundering, or other unlawful activities, according to court records associated with this case. The registration was found to be fraudulent because at the time of registration, the applicant was not the true owner of the aircraft. Rather, the U.S. corporation that registered the aircraft acted as a nominee to purchase and register the aircraft on behalf of entities known to have ties to the Sinaloa Cartel, one of the world s most notorious criminal enterprises. Law-enforcement officials were aware of the scheme and seized the aircraft shortly after final payment was made on it. Court records reveal that this corporation had been previously investigated for violations related to false and fictitious U.S. registration of aircraft on behalf of a criminal organization, and that the corporation s owner was well known to members of law-enforcement agencies for his suspected role in multiple illegal activities. 
The aircraft was ultimately forfeited to the U.S. government because it had been purchased with proceeds traceable to illegal activities. In 2012, an intermediary established a U.S. corporation for a foreign national beneficial owner, and the company registered the aircraft. The foreign national was engaged in the black-market currency exchange, which is a common scheme used in trade-based money laundering. In this case, the foreign national conspired with another individual to fraudulently purchase millions of dollars in Venezuela at a rate preferred by the Venezuelan government that was reportedly established as a control to prevent capital flight from Venezuela. Court records show that the aircraft was purchased with illicit proceeds from this fraudulent scheme. In 2016, U.S. law enforcement seized the aircraft, and in 2018 it was forfeited to the U.S. government. In 2017, as the result of a multiyear investigation, the Department of the Treasury s Office of Foreign Assets Control (OFAC) designated the Executive Vice President of Venezuela as a Specially Designated Narcotics Trafficker pursuant to the Foreign Narcotics Kingpin Designation Act for playing a significant role in international narcotics trafficking. According to the 2017 OFAC announcement on this case, this Venezuelan government official facilitated shipments of narcotics with the final destinations of Mexico and the United States, including control over airplanes and ports used in drug trafficking in Venezuela. According to OFAC, in previous government positions, this official oversaw and partially owned large narcotics shipments destined for the United States. Further, this official also used a front man who laundered drug proceeds and purchased assets. In addition to a network of international companies, according to OFAC, the front man owned or controlled five U.S. companies, including a limited liability company (LLC) that registered an aircraft with FAA using a voting trust to meet U.S. citizenship requirements. As part of its action, OFAC also designated the front man for providing material assistance, financial support, or goods or services in support of the international narcotics trafficking activities of, and acting for or on behalf of, the Venezuelan Executive Vice President. OFAC also identified as blocked property the U.S.-registered aircraft as well as the LLC used to register the aircraft. According to FAA officials, the agency does not have the legal authority to deny a registration solely because of a sanctions designation. OFAC notified FAA of the designation, and FAA flagged the aircraft in its system. FAA deregistered the aircraft in 2019 after registration renewal documentation submitted to FAA contained numerous errors. However, because the flags placed on sanctioned individuals and entities registration records do not extend to dealer records, FAA issued a dealer certificate to the blocked LLC after the OFAC designation and without coordination with OFAC, according to FAA records and officials. The blocked LLC held the dealer certificate for a year until the certificate expired. In 2011, an aircraft registered to a U.S. citizen with a registered agent address disappeared and was reported to have crashed off the coast of Panama with six fatalities. At the time of the crash, the government of Panama was operating the aircraft while it was still under the U.S. registration of the owner. 
According to FAA officials and documents we reviewed, the aircraft was in the possession of the Panamanian government because it had been seized by Panamanian authorities in 2010 on allegations that it had been used to traffic narcotics from Panama into Colombia. According to an FAA official knowledgeable about this case, as part of the seizure, a Panamanian court assigned the aircraft to the Panamanian civil aviation authority, which then registered the aircraft in Panama and painted a Panamanian registration number on it. However, the Panamanian civil aviation authority did not take the actions to first deregister the aircraft in the United States, so the new registration was likely invalid under international law. When told this by an FAA official, Panamanian authorities removed the Panamanian registration number from the plane and replaced it with the original N-number. FAA sent multiple letters to the owner to deregister the aircraft and also when the aircraft registration was expiring, but all were returned as refused by the registered agent. According to an FAA official we interviewed about this case, the Panamanian civil aviation authority operated the aircraft under U.S. registration for approximately 1 year until its crash. According to this official, at the time of the crash the aircraft was reportedly operated by the Panamanian civil aviation authority for the purposes of radar maintenance missions in that country. In 2016, an aircraft registered to a U.S.-based LLC crashed in the Caribbean, resulting in fatal injuries to all three people aboard. According to the accident report, the aircraft was operated by a foreign entity, an aviation training center located in Jamaica. The Jamaican civil aviation authority, the entity responsible for investigating the accident, found multiple safety deficiencies as the causes and contributing factors of the fatal crash. These deficiencies include the aircraft s engine replacement not conforming to its design type; engine parts showing signs of wear ranging from worn to extremely worn conditions exhibiting heavy corrosion; and falsified maintenance records. FAA, by law, imposes safety obligations on all owners of aircraft. To meet these obligations, an owner must maintain current information about the identity and whereabouts of the actual operators of an aircraft and location and nature of the operation on an ongoing basis, thereby allowing that owner to provide the operator with safety-critical information in a timely manner, and to obtain information responsive to FAA inquiries, including investigations of alleged violations of FAA regulations. Such information is an essential element in FAA s ability to carry out its oversight obligations under U.S. and international law. The safety deficiencies cited in the accident report indicate that, as the registered owner of the aircraft, the LLC may not have been fulfilling its safety obligations. Appendix II: Objectives, Scope, and Methodology Our objectives were to assess the Federal Aviation Administration s (FAA) (1) actions to prevent fraud and abuse in aircraft registrations, (2) ability to detect potential fraud and abuse in aircraft registrations, and (3) actions and coordination with law-enforcement entities to respond to aircraft registry related fraud and abuse risks. To address all objectives, we reviewed laws, regulations, and FAA policies pertaining to the aircraft registration eligibility requirements and processes. 
We also reviewed standard operating procedures, policy statements, and guidance for staff charged with processing aircraft registrations and addressing administrative compliance actions including FAA Order 2150.3C issuing enforcement actions per its compliance and enforcement program, FAA Aircraft Examiner s Guidelines outlining the steps for processing aircraft registrations, and published International Civil Aviation Organization civil aviation standards. We also reviewed prior GAO reports and Department of Transportation (DOT) Office of Inspector General (OIG) reports regarding the quality and utility of registry data, risks, and ongoing challenges associated with the registry s information technology (IT) system. For all objectives, we interviewed FAA officials from: aircraft registry, legal counsel, FAA s Security and Hazardous Materials Safety (ASH), FAA s Law Enforcement Assistance Program (LEAP), and FAA s Special Emphasis Investigation Team (SEIT). We also interviewed aviation safety, foreign policy, and law-enforcement officials to obtain broader perspectives, where applicable, on the registration process, challenges, and vulnerabilities, including officials from the National Transportation Safety Board (NTSB), the Department of the Treasury s (Treasury) Office of Foreign Assets Control (OFAC) and Internal Revenue Service Criminal Investigations, the Department of Justice s (DOJ) Drug Enforcement Administration (DEA), the Department of Homeland Security s (DHS) Homeland Security Investigations (HSI), and DOT s OIG. We interviewed aviation industry associations, selected based on a range of aviation interests, such as general aviation and equipment leasing. We also interviewed aircraft registry intermediaries individuals and entities that facilitate aircraft registrations for others such as trust companies, banks, and a registered agent, selected based on our analysis of aircraft registry data across types of intermediaries and number of registrations. We also reviewed relevant international standards on countering money laundering and issues related to transparency of corporate structures and beneficial ownership of assets. We performed a descriptive analysis of the registry data from calendar year 2010 through 2018. To do this, we first performed an in-depth review of the calendar year 2018 registry master data which contains the most-current registration information for our review period and selected key fields such as aircraft registration number and registrant name information for further analysis. For the remaining calendar years 2010 to 2017 annual files, we focused on identifying any substantive differences occurring between years for the selected key fields. We developed frequencies of the selected key fields to determine the number of registered aircraft, registration types and ownership structures (such as corporations, trusts, and dealers) used to register aircraft, and registration status across the 9-year period of our review. In September 2018 we conducted a site visit to the FAA Registry facility located at the Mike Monroney Aeronautical Center in Oklahoma City, Oklahoma. During the site visit, we interviewed officials from FAA s major components responsible for processing aircraft registrations and addressing administrative compliance actions, including registry data analysts and managers for the aircraft and airmen systems, FAA ASH officials, and an Office of the Chief Counsel attorney. 
We also observed firsthand the registry's process for receiving, sorting, scanning, and recording aircraft registration and renewal application packages. To determine potential fraud and abuse in aircraft registrations and FAA actions to prevent them, we analyzed and synthesized a variety of information, including agency reports; registration, postal, and sanctions data; and news articles, among other sources. Our review of information generally spanned fiscal years 2010 through 2018. To identify illustrative cases of potential fraud and abuse, we conducted a literature review that included sources such as LexisNexis news articles, DOJ press releases, and investigative reports published by DOT OIG, FAA LEAP, Internal Revenue Service Criminal Investigations, and DHS HSI. We also searched NTSB's publicly available online database of aviation accidents and incidents for examples of safety-related cases. Our literature search yielded over 900 publications and over 300 aviation accident reports for further screening. We then applied two levels of criteria to filter the results for case narrative selections. For the first level, we identified 66 cases from fiscal years 2010 to 2018 involving U.S.-registered aircraft related to three categories of risk enabled by fraud and abuse: criminal activity, national security, and safety. Next, we performed a secondary level of review and selected 28 illustrative cases that included case details, such as entity names and aircraft registration numbers, to facilitate further research, including legal review to ensure that selected case studies were adjudicated by a court of law, where applicable. Of those 28 cases, we selected six case studies for in-depth review. We also drew examples from our research of intermediaries of the registry, including selected banks, trust companies, and registered agents. For our in-depth research of these cases, we reviewed available information contained in the FAA Civil Aviation Registry, FAA Electronic Document Retrieval System, and ancillary files; aircraft flight plans; NTSB accident report information; state business registration data; court records; and GAO's internal resources that included a mix of government and corporate databases. All selected cases are intended to illustrate fraud and abuse vulnerabilities associated with the aircraft registration process and may not represent all existing vulnerabilities, nor are they generalizable to the FAA registry population as a whole. To further determine potential fraud and abuse in aircraft registrations, we analyzed FAA aircraft registry address data from calendar year 2018. Using registry address information, we performed a match to United States Postal Service (USPS) data to identify examples of potentially unverified and noncompliant addresses provided to the registry. To analyze postal address data, we used the address fields contained in the FAA registry master and dealer data to verify address information and identify examples of invalid addresses provided to the registry in calendar year 2018, which is the most current registry data included in our review. Additionally, we obtained data from an internal registry physical address report that we then matched to the calendar year 2018 registry master data to replace mail drop boxes with physical address information, where available. We then performed a match of this updated address file to the USPS Address Matching System as of June 2019 to identify examples of potentially invalid addresses.
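The address-screening step can be illustrated in simplified form. The sketch below substitutes a pattern heuristic, flagging post office boxes and private mailbox designators, for the USPS Address Matching System match we performed; the input file and its columns are hypothetical, and a production check would rely on an address-validation service rather than patterns alone.

```python
# Simplified stand-in for the address match; file and columns are assumptions.
import pandas as pd

addresses = pd.read_csv("registry_addresses.csv", dtype=str)  # n_number, street, city, state, zip

# Flag street lines that look like PO boxes or private mailboxes (PMB).
pattern = r"\bP\.?\s?O\.?\s?BOX\b|\bPMB\b|\bPRIVATE MAILBOX\b"
addresses["possible_mail_drop"] = (addresses["street"]
                                   .fillna("")
                                   .str.contains(pattern, case=False, regex=True))

print(addresses.loc[addresses["possible_mail_drop"],
                    ["n_number", "street", "city", "state"]])
```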
Our match results revealed a number of commercial mail drop locations, including post office boxes, and addresses that did not match the postal data. We selected seven aircraft registration addresses and five dealer addresses (a total of 12 match results) using a randomized list filtered by locality. We then manually verified the match results for these selected cases using publicly available online geo-mapping tools such as Google Maps and company listings such as White Pages. On the basis of the results of those searches, we selected three aircraft registrations and three dealer certifications that highlight examples of potentially noncompliant addresses provided to the registry in violation of FAA regulations and policy. We conducted subscription database searches and reviewed FAA registration documents for these selected cases based on categories of addresses, such as mail drop boxes, and verified three addresses selected based on locality through site inspections by GAO investigators. Finally, we analyzed the costs associated with aircraft and dealer certificate registrations. To do this, we reviewed an FAA internal report that assessed the costs of FAA's registration processing and compared proposed fees to the current fee values for aircraft registrations and dealer certificates. We also reviewed GAO's federal user fee guide provision that states that fee collections should be sufficient to cover the intended portion of program costs over time, including accounting for factors such as inflation. We reviewed a prior 1993 GAO report in which we determined that the registration fee, in place since 1964, did not cover the cost of reviewing and processing a registration application. We then performed an inflation analysis of the 1964 fee level, adjusting for inflation based on the Consumer Price Index. To assess FAA's ability to detect potential fraud and abuse in aircraft registrations, we examined FAA aircraft registry data collection and storage as well as oversight actions based on registry information and data. We also conducted data mining and matching to identify registrations with indicators of potential fraud or abuse that may enable criminal activity, national security, and safety risks by analyzing FAA aircraft registry master data from calendar years 2010 through 2018, as well as other registry-based and external data sets. We selected five risk indicators, which were informed by interviews with FAA and law-enforcement officials and our background research, for analysis of registry-related data and for matching to a selection of external data sets. We analyzed FAA aircraft registry data to identify registrations with characteristics that matched one or more risk indicators, such as registrations using opaque ownership structures (corporation- and trust-based ownership that disguises the beneficial owner) and registration addresses in countries identified by the Department of State as associated with major illicit drug production and money laundering, among other factors. The risk indicators do not prove fraud or that any unlawful activity has occurred. Alone or together, the risk indicators may serve as points of inquiry for further examination of conduct that may run counter to the interests of the federal government by posing potential criminal, national security, or safety risks. On the basis of the results of our risk-indicator analysis using registry data, we selected a total of five items as potential risk indicators.
We selected three risk indicators based on public and internal aircraft registry data. We compared the registry master data to the list of countries published in the latest Department of State narcotics control and financial crimes watch lists. Additionally, we reviewed nonpublic extracts of FAA registry voting trusts used by U.S. citizen corporations and noncitizen trusts from April 2018 through May 2019 the most complete data available at the time of our review due to their opaque ownership structures and potential for abuse as registration vehicles. We also performed an analysis of types of intermediaries and selected a registered agent as a risk indicator based on confirmed misuse of its address as a means for corporate entities to register aircraft. To establish our population of corporate entities for outreach, we selected four corporate codes contained in the registry data. Next, we developed selection criteria that included geographic distribution (U.S.-based or foreign-based); registrant size based on thresholds that reflect the distribution of registered aircraft (small, medium, or large); and finally, registrant type (bank, trust company, or registered agent). Based on these criteria, we randomly selected two U.S.-based banks and four U.S.- based and foreign trust companies to interview. To identify registered agents, which are not specifically coded in the registry data, we summarized the registry address information and selected all entities with two or more aircraft registrations per address for further screening. We then randomly selected one established registered agent entity for outreach. We analyzed extracts from two external selected data sources for the risk indicator data matching Treasury OFAC lists of sanctioned entities and individuals, and an NTSB accidents and incidents report covering the period January 2010 through March 2019, where available. To do this, we used key fields to match the selected data sources to the FAA registry master and trust data, and selected additional risk indicators based on our analysis of the match file results. We matched aircraft registry data to the OFAC lists of sanctioned entities and individuals as of March 2019 to identify aircraft, individuals, and entities subject to U.S. sanctions. We combined five cases identified from our OFAC data match with one additional case identified through our illustrative case and intermediary research to report on our findings of U.S.-sanctioned individuals and aircraft. We included all NTSB-reported accidents and incidents of U.S.- registered aircraft taking place outside the United States as a safety risk indicator. Using the FAA registry aircraft registration number and registrant name fields as the primary match keys, we performed a final merge of all risk indicators identified through our multiple analysis steps described above. Our combined risk flag match returned over 17,000 records, which we used to develop totals for each risk indicator category that we identified. Next, we randomized the list generated from our combined match and applied criteria to filter cases for further review. These criteria included cases with multiple risk indicators, as well as prioritization of risk based on a combined evaluation across all risk indicator categories, among other filters. In total, we selected 20 cases for agency follow-up and in-depth file reviews based on a comprehensive assessment of risk flag categories described above. 
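The final merge of risk indicators described above can be sketched as follows. The file names, column layout, and the two-indicator threshold are illustrative stand-ins for the actual data sets and selection criteria used in our analysis.

```python
# Illustrative merge of per-indicator outputs; file names and columns are assumptions.
import pandas as pd

indicator_files = {
    "watchlist_country_address": "watchlist_country_addresses.csv",
    "opaque_ownership": "voting_or_noncitizen_trusts.csv",
    "registered_agent_address": "registered_agent_addresses.csv",
    "ofac_match": "ofac_matches.csv",
    "foreign_accident": "ntsb_foreign_accidents.csv",
}

frames = []
for flag, path in indicator_files.items():
    df = pd.read_csv(path, usecols=["n_number", "registrant_name"], dtype=str)
    df["flag"] = flag
    frames.append(df)

combined = pd.concat(frames, ignore_index=True).drop_duplicates()

# Count distinct indicators per registration so multi-indicator cases can be prioritized.
summary = (combined.groupby(["n_number", "registrant_name"])
           .agg(num_flags=("flag", "nunique"),
                flags=("flag", lambda s: ", ".join(sorted(set(s)))))
           .reset_index())

review_candidates = summary[summary["num_flags"] >= 2].sort_values("num_flags", ascending=False)
print(review_candidates.head(20))
```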
However, without reviewing a generalizable sample of cases across all categories, we were unable to determine the extent of risk such cases may represent as a proportion of total registrations. Therefore, we used the results of our file reviews for these 20 cases solely to illustrate examples of the risk indicators that we identified. We assessed the reliability of each data set described above for the purposes of generating high-level totals, as well as identifying and tracking potential risk-indicator cases across time. To do this, we performed electronic tests using reports from eight information systems to determine the completeness and accuracy of key fields contained in the data files. We also submitted to the overseeing offices for all eight information systems general data-quality questions regarding the purpose of the data, their structure, definitions and values for selected fields, automated and manual data-quality checks to ensure the accuracy of the data, and limitations. Overall, we found that the data were generally reliable for the purpose of performing a cross-comparison of current registrations associated with safety and compliance violations over the nine-year period of our review. To assess FAA s actions and coordination with law-enforcement agencies to respond to registration-related risks, in addition to the interviews noted above, we reviewed FAA policies pertaining to the aircraft registration process and documents about FAA and law-enforcement efforts to address registry-related vulnerabilities. We reviewed FAA enforcement actions and government-wide data on aircraft seizures. To generate government-wide totals for aircraft seizures and forfeitures over time, we obtained data extracts from the DOJ Consolidated Asset Tracking System and DHS Customs and Border Protection Seized Assets and Case Tracking System from fiscal years 2010 through 2018. We limited our Consolidated Asset Tracking System data request to aircraft adjudicated as either seized and forfeited, or seized and substituted for cash forfeiture, while the report from the Seized Assets and Case Tracking System contains all seizures recorded by Customs and Border Protection during our review period. Therefore, the reports represent different populations, and we opted to report the totals for the two databases separately. Where feasible, we assessed the reliability of data in each system described above for the purposes of generating high-level totals. Our data-quality testing of selected data elements showed that the primary fields of interest were well-populated and sufficiently reliable for our purposes. We conducted this performance audit from November 2017 to March 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We conducted our related investigative work in accordance with investigation standards prescribed by the Council of the Inspectors General on Integrity and Efficiency. 
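The completeness and accuracy testing described above can also be illustrated with a short sketch. The extract name, the key fields, and the validity pattern for registration numbers are assumptions for illustration, not the actual tests we ran against the eight systems.

```python
# Illustrative data-reliability checks; file, fields, and pattern are assumptions.
import pandas as pd

master = pd.read_csv("registry_master_extract.csv", dtype=str)
key_fields = ["n_number", "registrant_name", "type_registrant", "status_code"]

# Completeness: share of missing values in each key field.
print(master[key_fields].isna().mean().round(4))

# Accuracy: registration numbers should be unique and follow an expected pattern
# (assumed here as the letter N followed by one to five characters).
duplicates = master["n_number"].duplicated().sum()
bad_format = (~master["n_number"].fillna("").str.fullmatch(r"N[0-9A-Z]{1,5}")).sum()
print(f"Duplicate registration numbers: {duplicates}")
print(f"Registration numbers not matching the expected pattern: {bad_format}")
```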
Appendix III: Registration Types and Documentation Requirements In addition to an aircraft registration application form, evidence of ownership, and $5 registration fee, the Federal Aviation Administration (FAA) requires additional documentation based on the type of individual or entity that owns the aircraft, as discussed in table 2 below. Appendix IV: Use of Opaque Ownership Structures for Aircraft Registration Opaque ownership structures are legitimate business structures that are widely used by corporations and individuals to facilitate commerce as well as for asset and tax management. However, the lack of transparency related to aircraft registrations using opaque ownership structures also creates challenges for safety and law-enforcement investigators seeking information about beneficial owners to support timely investigations. The Financial Action Task Force (FATF) and other international organizations have determined that beneficial ownership information can be obscured through, among other things, the use of shell companies (which can be established with various forms of ownership structures) especially in cases where there is foreign ownership that is spread across jurisdictions; complex ownership and control structures involving many layers of shares registered in the name of other legal entities; formal nominee shareholders and directors where the identity of the beneficial owner is undisclosed; trusts and other legal arrangements that enable a separation of legal ownership and beneficial ownership of assets; and use of intermediaries in forming legal entities, including professional intermediaries. Shell companies, one of the opaque ownership structures, may be formed for legitimate purposes to obtain financing prior to starting operations. In the aircraft ownership context, shell companies may own aircraft by holding title for registration purposes. However, shell companies may also be used to conceal the beneficial owner s identity for illicit purposes. For example, according to Federal Aviation Administration (FAA) officials, some aircraft registrations have stacked company ownership, where shell companies own each other. Such ownership arrangement can be used for illicit purposes to conceal the identity of foreign-based beneficial owners and create challenges for investigators, according to law- enforcement officials. Further, shell companies may use a registered agent s mailing address on their aircraft application forms, further obscuring aircraft ownership information. Table 3 describes the four opaque ownership structures, their legitimate uses, and how they can be vulnerable to abuse, according to our illustrative case and intermediary research, and interviews with FAA and law-enforcement officials. In the example and figure below, we illustrate opaqueness and complexities of aircraft registrations using intermediaries and opaque ownership structures. It is based on an actual case from our review of aircraft registration documents and research from corporate filings and other databases. Apparent shell company and noncitizen trust used to register aircraft for unknown foreign beneficial owner. In this case, a foreign company obtained U.S. aircraft registration through an intermediary, using opaque ownership structures. This is allowable under current registration requirements and there is no identified wrongdoing in this case. 
The application, depicted in figure 13, shows the involvement of an intermediary, who used various legal entities and took a number of steps to facilitate aircraft registration for a beneficial owner who is unknown. The intermediary listed himself as the director of a corporation, N003 Inc., which was established using a company that provides company formation and registered agent services. Among other indicators, N003 Inc. appeared to be a shell company established shortly before the filing of the aircraft registration. The intermediary also used the mailing address of the registered agent as the owner's address on the aircraft registration application. Further, the intermediary established a noncitizen trust for aircraft ownership. The trust agreement identified N003 Inc. as the owner trustee of the aircraft, and a foreign corporation, DEF Ltd., as the trustor. As such, the role of the intermediary, the use of apparent shell company and noncitizen trust ownership structures, and use of the registered agent's mailing address worked to obscure the foreign beneficial owner of the aircraft while facilitating access to U.S. aircraft registration.

Appendix V: Comments from the Department of Transportation

Appendix VI: GAO Contact and Staff Acknowledgments

<8. GAO Contact and Staff Acknowledgments>

Rebecca Shea, (202) 512-6722 or shear@gao.gov

In addition to the contact named above, Tonita Gillich (Assistant Director), Irina Carnevale (Analyst-in-Charge), James Ashley, Priyanka Sethi Bansal, Gary Bianchi, Daniel Bibeault, Kimberley Bynum, Steven Campbell, Colin Fallon, Robert Graves, Ying Long, Olivia Lopez, Maria McMullen, James Murphy, George J. Ogilvie, Sean Peck, and April Van Cleef made key contributions to this report.

Why GAO Did This Study
The U.S. aircraft registry, managed by FAA, maintains information on approximately 300,000 civil aircraft. FAA issues aircraft registration to individuals and entities that meet eligibility requirements, such as U.S. citizenship or permanent legal residence. Registry fraud and abuse hinders the ability of law-enforcement and safety officials to use the registry to identify aircraft and their owners who might be involved in illicit or unsafe operations.
GAO was asked to examine registry fraud and abuse. This report assesses FAA's actions to (1) prevent, (2) detect, and (3) respond to fraud and abuse risks in aircraft registrations.
GAO reviewed relevant laws, regulations, and FAA policies; reviewed reports, DOJ press releases, and court cases that illustrated risks associated with the registry; analyzed aircraft registry data from fiscal year 2010 through 2018 to identify registrations with risk indicators; and interviewed FAA registry, legal, law-enforcement liaison, and safety officials, as well as officials from DOJ and DHS.
What GAO Found
To register civil aircraft, the Federal Aviation Administration (FAA) generally relies on self-certification of registrants' eligibility and does not verify key information. According to GAO's review of the registry process, there are risks associated with FAA not verifying applicant identity, ownership, and address information. The registry is further vulnerable to fraud and abuse when applicants register aircraft using opaque ownership structures that afford limited transparency into who is the actual beneficial owner (i.e., the person who ultimately owns and controls the aircraft). Such structures can be used to own aircraft associated with money laundering or other illegal activities (see example in figure). FAA has not conducted a risk assessment that would inform its eligibility review and collection of information to manage risks. Without a risk assessment, FAA is limited in its ability to prevent fraud and abuse in aircraft registrations, which enable aircraft-related criminal, national security, or safety risks.
FAA makes some use of registry information to detect risks of fraud and abuse, but the format of the data limits its usefulness. Specifically, most data on individuals and entities with potentially significant responsibilities for aircraft ownership, such as trustors and beneficiaries, are stored in files that cannot be readily analyzed due to system limitations. As FAA modernizes its information-technology systems, it has an opportunity to develop data analytics capabilities to detect indicators of fraud and abuse in the registry.
FAA takes administrative actions, such as registration revocations, to respond to registration violations and coordinates with law-enforcement agencies on investigations and enforcement actions such as aircraft seizures. Since 2017, FAA has coordinated with the Departments of Justice (DOJ) and Homeland Security (DHS) as part of an Aircraft Registry Task Force to address aircraft registry vulnerabilities. However, this coordination is informal, and other mechanisms for joint enforcement actions, sharing of information, and use of liaison positions are not in place.
What GAO Recommends
GAO is making 15 recommendations to FAA, including that it collect and verify key information on aircraft owners; undertake a risk assessment of the registry; leverage information-technology modernization efforts to develop data analytics approaches for detecting registry fraud and abuse; and formalize coordination mechanisms with law-enforcement agencies. FAA agreed with all recommendations.
<1. Background>

Antibiotics are drugs that work by killing bacteria or slowing their growth. However, some bacteria have developed ways to resist the effects of antibiotics, for example, by preventing antibiotics from entering the cell or pumping them out after the antibiotic enters. Bacteria that are able to survive in the presence of antibiotics will multiply and pass on their new genetic material that confers resistance to future generations of bacteria and, in some cases, to other types of bacteria. Resistance can arise in bacteria in humans, animals, and the environment, including in health care settings, and can spread through contact with infected people or animals, contact with contaminated water, soil, or surfaces, or consumption of contaminated food. The spread of antibiotic resistance threatens not only the ability to fight bacterial infections but also threatens to reverse some significant medical gains. For example, in addition to treating infections, antibiotics have allowed for numerous medical procedures, such as joint replacements, caesarian sections, organ transplants, chemotherapy, and dialysis, all of which would be significantly riskier without effective antibiotics. Antibiotic resistance also poses a significant economic burden resulting from the direct costs of treating those with resistant infections and the loss of economic productivity from those who get sick or die.

In the 2013 Threats Report, CDC identified 17 bacterial pathogens that the agency considers to be urgent, serious, or concerning because they have developed enough resistance to antibiotics to be considered a threat to human health. (See fig. 1.) According to CDC, certain types of bacteria, called gram-negative bacteria, are particularly worrisome because they are becoming resistant to nearly all drugs that would be considered for treatment. The most serious gram-negative infections can be acquired in hospitals or other health care settings and can cause pneumonia, bloodstream infections, wound or surgical site infections, and meningitis. Nine of the 17 bacterial threats on CDC's threat list are gram-negative. One of the bacteria CDC considers to be an urgent threat, Clostridioides difficile (C. difficile), is classified as a threat not because it is resistant to antibiotics, but because it is caused by the same factors that drive antibiotic resistance, such as antibiotic use. CDC estimates that C. difficile alone accounted for 12,800 deaths in U.S. hospitals in 2017. CDC's 2013 Threats Report also identified one type of fungus, Candida auris, that it considered to be a serious threat (see text box).

Candida auris Is a Resistant Fungal Threat

Candida auris (C. auris) is an emerging infectious fungus that, according to the Centers for Disease Control and Prevention (CDC), presents a global health threat in part because it is highly resistant to anti-fungal drugs and is challenging to address. C. auris was first identified in Japan in 2009. CDC reported 806 confirmed cases in the United States, as of August 31, 2019. According to CDC, C. auris is highly transmissible and some commonly used hospital surface disinfectants appear to be less effective against C. auris. A CDC official told us C. auris is a good example of an emerging threat that requires more research and associated efforts to properly address. Addressing C. auris is challenging for reasons including the rise of resistance and limitations in diagnostic tests.
According to CDC, there are three classes of antifungals available to treat C. auris. However, CDC has identified strains that are resistant to all three classes. A CDC official noted that getting new antifungals to market is challenging because, among other things, the demand for antifungals, relative to antibiotics, is low. Additionally, according to FDA, although reliable tests for identifying C. auris exist, commonly used laboratory tests may misidentify this fungus, posing a barrier to correct diagnosis. In 2018, the Food and Drug Administration (FDA) cleared a test based on mass spectrometry to identify C. auris, but this test cannot characterize resistance. FDA officials told us there are three FDA-cleared tests available for testing for other Candida species resistance to fluconazole. However, none of these tests can provide rapid results, such as within an hour. Finally, interpretation of culture-based diagnostic tests, which examine how well bacteria grow in the presence of an antibiotic, is challenging due to the lack of established interpretive criteria for C. auris, by both the Clinical and Laboratory Standards Institute, which promotes the development and use of voluntary laboratory consensus standards and guidelines within the health care community, and by FDA. U.S. spending on antibiotics in health care from 2010 through 2015 was estimated in one study to be nearly $56 billion, ranging from $8.4 billion to $10.6 billion annually. While CDC states that antibiotic prescribing improved nationally with a 5 percent decrease from 2011 to 2016, the agency estimated in 2017 that at least 30 percent of antibiotics used across both outpatient and inpatient settings are still prescribed unnecessarily or incorrectly and, therefore, are considered inappropriate. According to CDC, approximately 85 to 95 percent of the nation s antibiotic use, by volume, occurred in outpatient settings from 2010 through 2015; and roughly 270 million antibiotic prescriptions equivalent to 836 per 1,000 persons in the United States were written in these settings in 2016. (For more information on antibiotic use in the United States, see text box.) Antibiotic Use in the United States A 2017 Centers for Disease Control and Prevention (CDC) report estimates that about 30 percent of antibiotics used in U.S. hospitals are inappropriate (unnecessary or prescribed incorrectly), and as much as 50 percent of antibiotics prescribed in outpatient settings such as physicians offices, emergency departments, urgent care centers, and retail clinics may be inappropriate. For example, CDC reports that each year, an estimated 47 million unnecessary antibiotic prescriptions are written in physicians offices and emergency departments. Most of these unnecessary prescriptions are for respiratory conditions most commonly caused by viruses including common colds, viral sore throats, and bronchitis that do not respond to antibiotics, or for bacterial infections that do not always need antibiotics, like many sinus and ear infections. Furthermore, CDC reports that even when antibiotics are needed, prescribers often favor drugs that may be less effective and may carry more risk over more targeted, first-line drugs recommended by nationally recognized antibiotic prescribing guidelines. (First-line drugs are the drugs generally recommended for initial treatment for a given diagnosis, often combining the best efficacy with the best safety profile or the lowest cost.) 
According to CDC, antibiotics are among the most frequently prescribed medications in nursing homes, with up to 70 percent of residents receiving one or more courses of systemic (non-topical) antibiotics in a year; CDC also cites studies showing that 40 to 75 percent of antibiotics prescribed in nursing homes may be inappropriate. CDC further reports that harms from antibiotic overuse include the risk of serious diarrheal infections from C. difficile, increased adverse drug events and drug interactions, and increased risk of infection with antibiotic-resistant organisms. According to CDC officials, unnecessary antibiotic use means the antibiotic was prescribed when no antibiotic was needed, based on clinical practice guidelines. Inappropriate antibiotic use includes both unnecessary antibiotic use, as well as inappropriate antibiotic selection, dosing, or duration when antibiotics are indicated. CDC officials also told us they consider misuse and inappropriate use to be synonymous terms. <1.1. The National Action Plan and Federal Agency Responsibilities> Vaccines Can Also Help Prevent Antibiotic Resistance While we did not include vaccines in the scope of this report, vaccines play a role in helping combat antibiotic resistance because they are designed to prevent infections, including resistant infections. In addition, by preventing infections from occurring, they can reduce the need to use antibiotics, which in turn, can slow the development of antibiotic resistance. For example, according to the Centers for Disease Control and Prevention (CDC), since introduction of the pneumococcal conjugate vaccine among children in 2000, rates of antibiotic-resistant infections caused by certain Streptococcus pneumoniae strains decreased by 97 percent among children under 5 and by more than 60 percent among adults. However, few vaccines are available that target antibiotic-resistant bacteria on CDC s threat list. In September 2014, the President signed Executive Order No. 13676 (Executive Order), which directed that several federal actions be initiated related to antibiotic resistance. For example, the Executive Order directed the creation of the National Action Plan, which the White House released in 2015, to provide a roadmap for federal agencies to respond to the threat of antibiotic resistance. The National Action Plan set five major goals over 5 years related to (1) slowing the emergence of resistant bacteria and preventing the spread of resistant infections; (2) strengthening national One-Health surveillance efforts to combat resistance; (3) advancing the development and use of rapid and innovative diagnostic tests for the identification and characterization of resistant bacteria; (4) accelerating basic and applied R&D for new antibiotics, other therapeutics, and vaccines; and (5) improving international collaboration and capacities related to the first four goals. In addition, the National Action Plan discusses the importance of preventing and controlling infections, such as through rapid detection, to combat antibiotic resistance domestically and globally (see text box). Within each of these five goals, the National Action Plan contains numerous objectives, sub-objectives, agency-specific milestones, and other performance targets called significant outcomes. For example, the National Action Plan set a significant outcome of reducing inappropriate antibiotic use by 50 percent in outpatient settings and by 20 percent in inpatient settings by 2020. 
According to the World Health Organization (WHO), effective infection prevention and control measures are a practical and scientific approach to reduce health care-associated infections in patients and health care workers, and help combat antibiotic resistance. Infection prevention and control measures serve as the cornerstone of actions needed to address epidemics, pandemics, and antibiotic resistance. Such measures include implementing hand hygiene practices, providing vaccinations, cleaning and disinfecting hospital rooms, isolating patients with infectious diseases, decontaminating and sterilizing medical equipment, and tracking data about emerging infectious diseases. WHO states that health care-associated infections are a global challenge from which no country or health care facility is immune. The Centers for Disease Control and Prevention (CDC) has taken actions to address and track health care-associated infections, including antibiotic-resistant infections. For example, in 2009, CDC issued guidance for infection control targeting Enterobacteriaceae that may be resistant to carbapenem, a class of antibiotics. In 2018, CDC published a study suggesting that a tracked decline in the proportion of resistant bacteria, including carbapenem-resistant Enterobacteriaceae, observed in some health care settings, could be attributable at least in part to actions such as those outlined in its 2009 guidance. In addition, CDC has reported that U.S. hospitals have made major progress since 2005 in declining rates of methicillin-resistant Staphylococcus aureus (MRSA) bacteremia because of infection prevention measures.

The interagency CARB Task Force, which was created by the Executive Order to issue and monitor the implementation of the National Action Plan, is co-chaired by the Secretaries of Defense, Agriculture, and HHS, and is additionally comprised of representatives from VA and several other agencies. Representatives from HHS agencies, including BARDA, CDC, CMS, FDA, and NIH, make up nearly two-thirds of the task force's participants (see table 1). According to the HHS Assistant Secretary for Planning and Evaluation officials who coordinate it, the task force is developing a new National Action Plan that will span the years 2020 through 2025. To provide additional advice to the CARB Task Force and the Secretary of HHS, the Executive Order also created the Presidential Advisory Council on Combating Antibiotic-Resistant Bacteria (PACCARB), which is composed of 15 non-governmental members. The Executive Order also charged the CARB Task Force with providing annual updates to the President regarding progress made in implementing the National Action Plan, plans to address any barriers preventing its full implementation, and recommendations for any new or modified actions, taking federal government resources into consideration. Since 2015, the CARB Task Force has produced four progress reports, which summarize agency actions toward meeting the goals and milestones laid out in the National Action Plan; these reports were provided to the President and are publicly available.

<2. CDC Has Expanded Surveillance of Antibiotic Resistance, but Faces Challenges Determining the Magnitude of the Problem>

Since the National Action Plan was released in 2015, CDC has made progress in expanding surveillance for antibiotic resistance in the United States and abroad.
However, the magnitude of the problem and its trends over time remain unknown, in part because of challenges in three areas: (1) tracking antibiotic resistance across all health care settings, (2) reporting complete and timely information on magnitude and trends of antibiotic resistance, and (3) tracking and assessing the global antibiotic resistance threat.

CDC Has Expanded Surveillance of Priority Bacteria

CDC has expanded its surveillance of priority bacteria in the United States in order to better assess the full extent of antibiotic resistance since the 2015 National Action Plan was released. CDC tracks antibiotic resistance through several infectious disease surveillance systems in collaboration with state and local health officials, health care providers and facilities, and laboratories. Rather than establishing a single surveillance system for antibiotic resistance, CDC generally incorporates tracking of antibiotic resistance into broader surveillance systems, according to agency officials. The surveillance systems are spread across various divisions within CDC that specialize in specific types of infection or certain settings. (See table 2 for a description of each system and the resistant bacteria it tracks.)

According to CDC and other officials and documents we reviewed, including the National Action Plan Year 3 Progress Report, CDC has taken the following actions, among others, to expand surveillance in order to better assess the scope of antibiotic resistance:

Established the Antibiotic Resistance Laboratory Network in 2016 to improve testing capacity to better identify antibiotic resistance in the United States. The network consists of 55 state and local (including Puerto Rico), and seven regional, public health laboratories and the National Tuberculosis Molecular Surveillance Center. The network is improving and expanding laboratory capacity response at public health laboratories around the country, as well as at regional centers, according to representatives from two national professional organizations of state and local health officials and epidemiologists.

Expanded antibiotic resistance-related efforts in its Emerging Infections Program (EIP), a network that seeks to monitor, prevent, and control emerging infectious diseases. For example, since 2015, more of the existing 10 EIP sites are conducting surveillance for invasive Staphylococcus aureus infections, carbapenem-resistant Enterobacteriaceae, and C. difficile, among others. Separately, the National Action Plan had included a goal for CDC to expand EIP by adding up to 10 sites within 3 years. However, CDC officials told us that in light of resource limitations, they chose instead to increase the number of pathogens reported at existing EIP sites. They told us they determined this was a better use of the limited funds, and that existing EIP sites are sufficient for current EIP efforts related to antibiotic resistance.

Updated the domestic tuberculosis surveillance system by incorporating advanced drug susceptibility testing and reporting and by developing capacity for state surveillance systems to report their tuberculosis test data electronically to CDC laboratories.

Supported state and local health departments to better track, investigate, and prevent resistant foodborne disease, among other things, through the National Antimicrobial Resistance Monitoring System for Enteric Bacteria (NARMS).
For example, the system can now carry out whole genome sequencing for all the pathogens it tracks, which enhances its detection and response capabilities, such as by expanding CDC s ability to detect new and emerging resistance, according to CDC officials. Launched the Enhanced Gonococcal Isolate Surveillance Program (eGISP), which augments the main Gonococcal Isolate Surveillance Program (GISP). Whereas GISP only collects samples from the urethras of men with symptoms of gonorrhea, in select sexually transmitted disease clinics, eGISP also collects samples from women and from other sites on the body, such as the throat. The specimens are sent to regional laboratories for resistance testing. CDC has also worked with international partners to expand surveillance of antibiotic resistance abroad. These efforts involved CDC collaborations with WHO, the European Center for Disease Prevention and Control, the government of the United Kingdom, other governments, and other multi- country efforts, such as the Surveillance and Epidemiology of Drug- Resistant Infections Consortium and the Transatlantic Taskforce on Antimicrobial Resistance (TATFAR). The collaborations aimed to develop technical guidance to help improve surveillance in other nations and to organize an international forum. CDC also launched its Antibiotic Resistance (AR) Solutions Initiative, which invests in national and international infrastructure to address resistant infections across health care settings and communities and from food. <2.1. The Precise Magnitude and Trends of Antibiotic Resistance Are Unknown, in Part Because of Challenges CDC Faces in Three Areas> CDC faces three general challenges in tracking and reporting trends in antibiotic resistance. First, it faces limitations in data reporting and resistance testing from hospitals, as well as challenges ensuring that its resistant gonorrhea surveillance system is representative of the U.S. population. Second, CDC faces challenges in reporting complete and timely information on the magnitude of and trends in antibiotic resistance. Finally, CDC faces challenges to detecting resistance threats abroad. <2.1.1. Challenges in Tracking Resistance> The first challenge CDC faces in tracking trends in resistance is addressing low hospital participation in a new option of CDC s National Healthcare Safety Network (NHSN) system intended to address some limitations in NHSN. NHSN is, among other things, an online system for tracking health care-associated infections. It provides facilities, states, regions, and the nation with data needed to identify problem areas, measure the progress of prevention efforts, and ultimately eliminate health care-associated infections, according to CDC. Patients in settings such as hospitals and long-term care facilities (e.g., nursing homes) in many cases already have a weakened immune system or an underlying illness, making an antibiotic-resistant infection especially dangerous, according to the Centers for Disease Control and Prevention (CDC). A high proportion of the morbidity and mortality associated with antibiotic resistance is seen in health care-associated infections. Tracking resistance in health care settings is therefore critical to national surveillance efforts. CDC established three modules within NHSN that allow hospitals to report select antibiotic-resistant infections, among other things, which include reporting required by states or by CMS, according to agency officials. 
Two modules track patients who have an infection associated with a medical device or resulting from a surgical procedure. Hospitals only report on resistance in these modules for specific combinations of antibiotics and bacteria, such as carbapenem-resistant Enterobacteriaceae. The third module tracks certain hospital patients who test positive for certain multidrug-resistant infections, including methicillin-resistant Staphylococcus aureus (MRSA), a type of bacteria found on people's skin that is usually harmless but can cause serious infections, according to CDC. However, according to CDC, many antibiotic-resistant infections detected during hospital care do not fall into one of these three modules and therefore would not be captured in NHSN, limiting CDC's ability to identify important new resistances or trends.

In 2014, to help address this limitation, CDC officials told us they introduced a new option for hospitals to report data on antibiotic resistance: the Antimicrobial Resistance Option (AR Option). This option allows for reporting of data on antibiotic resistance for certain bacteria, regardless of whether the patient has a health care-associated infection. In contrast to the other three modules, reporting to the AR Option is voluntary. As a result, while about 86 percent of the 17,529 eligible U.S. health care facilities participate in at least one of the older three antibiotic-resistance reporting modules, only about 10 percent of the 6,836 eligible hospitals participate in the newer, voluntary AR Option, according to our analysis of NHSN hospital participation data as of January 2020. The hospital participation rate among U.S. states and territories ranged from no participation (in nine states and territories) to about 27 percent. Representatives from a national association of state public health officials we interviewed said that this low rate limits the value of the data, a view that echoed the findings of a 2018 report by the Joint Public Health Informatics Task Force.

CDC officials acknowledged that participation in the AR Option is low and cited reasons for this, including hospital resource limitations and the fact that, because participation is voluntary, many hospitals do not prioritize submitting data to the AR Option. According to CDC officials, it is particularly challenging for many smaller hospitals and Indian Health Service facilities with resource constraints to participate, as it requires significant information technology investment. The Joint Public Health Informatics Task Force report noted two other common challenges: low capacity for the information technologies needed to support data submission to the AR Option, and a lack of motivated leadership, such as a facility champion, to oversee the development and maintenance of needed reporting infrastructure. For example, the maintenance of reporting infrastructure could address changes to electronic medical records that are not immediately compatible with the AR Option reporting format.

CDC officials told us the agency is taking some steps to increase participation in the AR Option. For example, it is encouraging the over 1,500 hospitals (as of December 31, 2019) that are participating in a related reporting effort known as the Antimicrobial Use Option (AU Option) but not in the AR Option to participate in both. In addition, the agency is working with vendors of equipment and electronic health record software to make it easier for hospitals to participate in the AR Option.
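The participation figures above are shares of eligible facilities that report to each module. A minimal sketch of how such rates might be computed from facility-level records follows; the records and field names are hypothetical, and this is not CDC's or our actual analysis.

# Illustrative sketch only: overall and by-state participation rates computed
# from hypothetical facility-level reporting records.
import pandas as pd

facilities = pd.DataFrame({
    "state": ["CA", "CA", "TX", "TX", "NY", "NY"],
    "eligible_for_ar_option": [True, True, True, True, True, False],
    "reports_to_ar_option": [True, False, False, False, True, False],
})

# Restrict to eligible facilities before computing participation shares.
eligible = facilities[facilities["eligible_for_ar_option"]]
overall_rate = eligible["reports_to_ar_option"].mean()
rate_by_state = eligible.groupby("state")["reports_to_ar_option"].mean()

print(f"overall participation: {overall_rate:.0%}")
print(rate_by_state.map("{:.0%}".format))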
One of CDC s goals for the AR Option is to use reported data to conduct regional and national assessments of resistance. To help meet this goal, officials said they would like participation by all eligible hospitals in the AR Option, but they have not determined the needed participation rates or appropriate distribution of participating hospitals. Our past work has shown that leading practices for federal strategic planning include articulating specific goals, establishing a method to assess progress toward these goals, and aligning the plans and goals with the agency s mission. By taking steps to determine the participation rates and distribution of participation hospitals needed for CDC to meet its goal of conducting regional and national assessments of antibiotic resistance of public health importance, CDC would have more reasonable assurance that it can achieve its goal. The second challenge CDC faces is ensuring representativeness of its resistant gonorrhea surveillance system. CDC has classified resistant gonorrhea as one of the most urgent antibiotic-resistance threats in the nation, affecting over half a million patients annually. According to the agency, resistant gonorrhea warrants this designation because of the limited remaining treatment options, the high number of gonorrhea infections, potential adverse outcomes (such as increased transmission of HIV), and the prospect that gonorrhea may become incurable if new resistance arises and spreads. The Urgent Threat of Resistant Gonorrhea According to the Centers for Disease Control and Prevention (CDC), gonorrhea is the second most commonly reported notifiable disease in the United States, with over 500,000 infections reported in 2017. However, CDC estimates that the true number could be as many as 820,000 each year. In addition to being a very common infection, gonorrhea is developing resistance to treatment options. As recently as 2006, CDC had five recommended options, but it estimates that nearly half of U.S. infections are now resistant to available antibiotics, including combinations. Consequently, it now recommends only one regimen. In 2014, a case of dual-therapy failure was reported in the United Kingdom, and in February 2018, a similar case in the United Kingdom was reported that also failed to respond to the last-resort therapy, spectinomycin, resulting in treatment failure. As of June 2019, CDC reported that it had not received any reports of verified clinical treatment failures to any cephalosporin in the United States. It is not clear, however, that GISP data are representative of the general U.S. population because GISP draws on a limited sample of that population. Specifically, GISP collects culture specimens -called isolates and accompanying epidemiologic data from only the first 25 men with inflammation of the urethra consistent with gonorrhea visiting each participating sexually transmitted disease clinic each month. It does not collect culture specimens from women. In addition, the number of participating clinics each year has varied from 21 to 30 (see fig. 2 for the current sites). CDC estimates that the cases of gonorrhea identified through GISP surveillance represent only about 1 to 2 percent of all reported cases of gonorrhea in the United States each year. Further, the GISP sample design also over-represents cases in the western United States, where antibiotic-resistant gonorrhea has tended to initially emerge, according to CDC. 
According to CDC, this design allows for more rapid detection of emerging resistance by ensuring a sufficient sample size from the western United States because resistance tends to emerge from that area. CDC has two projects Strengthening the United States Response to Resistant Gonorrhea (SURRG) and eGISP intended to, among other things, enhance domestic gonorrhea surveillance and learn more about the representativeness of GISP through limited testing of women and of body sites other than urethras, respectively. However, CDC s current methodology may limit its ability to establish a representative trend. According to CDC officials, GISP could improve its representativeness by adding clinics or covering more of the population at its current sites. However, efforts to expand GISP would be difficult due to limited local capacity (see text box). Barriers to Expanding the Gonococcal Isolate Surveillance Program (GISP) GISP currently tracks a limited sample of the U.S. population. According to Centers for Disease Control and Prevention (CDC) officials, a more thorough expansion of GISP would be more difficult because of limited local capacity to conduct culture-based testing for resistance in gonorrhea. Specifically, laboratories increasingly use newer gonorrhea testing technology that gives more rapid results but cannot currently be used to test for resistance. This trend has contributed to the reduced capability of many laboratories to perform the gonorrhea culture-based testing for antibiotic susceptibility testing, to the point that many clinics cannot collect specimens for testing, according to CDC officials. Furthermore, officials said that adding new clinics to GISP would require financial and other resources for, among other things, establishing culture testing for resistance and information technology needed to report data to the system. Most gonorrhea cases are diagnosed outside sexually transmitted disease clinics. However, expanding GISP to non-sexually transmitted disease clinic sites could be particularly costly and inefficient, officials said, because these sites tend to see many fewer gonorrhea cases per year compared to sexually transmitted disease clinics; therefore they may not be able to contribute significant data to GISP. Through the Strengthening the United States Response to Resistant Gonorrhea (SURRG) project, CDC is currently exploring options to work with states to enhance gonorrhea testing capacity. This program was established in 2016 but has not received the funding needed to expand capacity to the extent CDC had planned. In addition, physicians and other providers have limited time to devote to data collection and reporting needed to participate in GISP. CDC officials also told us the reimbursement rates for providers for these services are inadequate. CDC has taken some steps to assess the representativeness of the current GISP design, but it has not conducted a comprehensive study to assess the representativeness of the trends identified in GISP. A 2015 CDC evaluation concluded that the representativeness of GISP was good on a scale of fair, good, or great. However, the evaluation covered only part of fiscal year 2014 and consisted of a limited comparison of selected demographic characteristics captured in gonorrhea cases identified in GISP to those captured through the National Notifiable Diseases Surveillance System, according to CDC officials, and which has its own limitations. 
Further, the results of this evaluation have not resulted in any changes to the GISP design. CDC officials told us they hope to learn more about the representativeness of GISP urethral isolates from testing women, patients in non-sexually transmitted disease clinic sites in the SURRG project and eGISP, and testing at other body sites, and then comparing some of these results to those of GISP. However, these efforts overall were not specifically designed to fully assess the representativeness of GISP and may not provide a sufficient assessment for impacting changes to the GISP design. CDC s guidelines of efficient and effective public health surveillance systems state that, in order to be representative, the data from a public health surveillance system should accurately reflect the characteristics of the health-related outcome such as resistant gonorrhea under surveillance. A more precise evaluation of the representativeness of the surveillance system can be done via carefully designed studies to obtain complete and accurate data for the health event in question namely, the urgent threat of antibiotic-resistant gonorrhea. By evaluating the surveillance system for resistant gonorrhea to ensure that it includes measures of its representativeness, such as by comparing the trends in the sample population with those in the overall U.S. population, using specially designed studies if needed, CDC would have better assurance that the trends detected in GISP accurately reflect the characteristics of the health-related outcome the system is designed to monitor. In addition to the limited design of GISP, CDC faces the challenge of competing priorities under reduced funding that precluded it from completing its plans to expand the SURRG project. The SURRG expansion was designed to address a National Action Plan goal of controlling resistant gonorrhea, among other things, but also affects surveillance, as CDC officials told us SURRG was established to address some limitations in GISP surveillance. Specifically, one of the plan s milestones assigned to CDC is to maintain advanced capacity for rapid response to antibiotic-resistant gonorrhea for at least 20 state health departments. Such capacity includes detection, diagnosis, and investigation of suspected resistant cases within their state or region and assistance for health care providers in appropriately treating infected patients. CDC officials told us that because they received about half of the appropriations they had requested, CDC had to make cuts in some of their projects, and SURRG was one of those that CDC chose to reduce. Eight SURRG sites, rather than the 20 recommended by the National Action Plan, collect and analyze data. However, in its progress reports covering the first 4 years of the National Action Plan s implementation, the CARB Task Force did not identify plans to address barriers related to expanding the SURRG project. The CARB Task Force coordinators told us that the progress reports have not identified plans to address barriers largely because the task force focused on reporting the agencies accomplishments in implementing the National Action Plan. The coordinators also said that, in response to our inquiries during this review, the task force intends to identify agencies plans for addressing barriers in the progress report to be published in fall 2020. 
The Executive Order directs the CARB Task Force to provide annual updates to the President on federal government actions to combat antibiotic resistance, including progress made in implementing the National Action Plan, plans for addressing any barriers preventing its full implementation, and recommendations for any new or modified actions, taking federal government resources into consideration. Without reporting its plans to address such barriers, the CARB Task Force has not provided all the information required by the Executive Order and has not fully carried out its role to facilitate and monitor implementation of the National Action Plan, which may reduce the effectiveness of federal efforts to combat antibiotic resistance. The third challenge CDC faces tracking antibiotic resistance is addressing limitations to the use of test results in surveillance in health care settings. For example, some health care facilities are not using the most up-to-date testing methods for determining whether the bacteria causing an infection are resistant to certain antibiotics, according to CDC officials and a report from the Antibiotic Resistance Surveillance Task Force. In addition, laboratories may only report an interpretation of the test result to CDC (e.g., whether the bacteria is resistant or susceptible to an antibiotic) and not the quantitative results (e.g., measures of the growth of bacteria in the presence of the antibiotic). This presents a challenge for comparing data from different laboratories, since they may not be using consistent testing thresholds for determining antibiotic resistance. Another limitation is that some test equipment may be designed to give limited results for the purposes of guiding treatment recommendations and stewardship efforts, which may also limit the information available to CDC. For example, the test may inform the user that the infection is susceptible to one antibiotic but suppress information on susceptibility to other antibiotics, in order to guide the user toward treatment with the preferred first-line treatment. The Antibiotic Resistance Surveillance Task Force report noted that some suppression is done by the testing equipment itself and some by software systems that record, manage, and store data for clinical laboratories. CDC officials told us they are working with some diagnostic test manufacturers to explore these issues and develop solutions to address them. The Antibiotic Resistance Surveillance Task Force is also working to address the diagnostic test challenges related to antibiotic resistance surveillance. <2.1.2. Challenges in Reporting Complete and Timely Information on Magnitude and Trends> CDC also faces challenges in reporting timely and complete information on the magnitude of and trends in antibiotic resistance in the agency s Threats Reports. One challenge is in providing information in these reports on the uncertainties in reported numbers of deaths from antibiotic- resistant infections. Another challenge is in issuing such reports in regular, timely intervals. As a result of these challenges, among others, the true magnitude of, and trends in, antibiotic resistance over time are unknown, including trends in various places and among people with various characteristics. Surveillance for antibiotic resistance is complex and costly, according to experts at our meeting, CDC officials, and literature we reviewed. 
Experts told us such surveillance encompasses diverse pathogens, diseases, and health care settings and requires a variety of data sources and collection efforts. Furthermore, experts from our meeting told us the fundamental data required such as data on the number of illnesses and deaths attributable to resistance and data on related health care costs are currently insufficient. One expert added that there is a lack of real-time monitoring data, such as data that are available within hours or days of being generated. The data gaps are especially large for infections acquired in the community, as opposed to in a health care setting, because there is very limited tracking of such infections and whether they are resistant. As a result, CDC officials said, it is challenging to provide ranges of uncertainty, a critical component of any effort to measure and report on magnitude and trends. Neither the 2013 Threats Report nor the 2019 Threats Report provided quantitative measures of uncertainty, such as confidence intervals, for CDC s estimates of morbidity and mortality resulting from antibiotic- resistant infections. For example, the report stated that there are at least 23,000 deaths a year as a direct result of antibiotic-resistant infections, but it did not include an upper limit or a single point estimate for this number. Similarly, the 2019 Threats Report stated that there are at least 35,900 deaths a year, without an upper limit or a single point estimate. A recent re-estimate by a group of scientists has put the likely minimum number of deaths annually in the United States at approximately 153,000, or about four times the 2019 CDC minimum estimate. CDC officials told us that because of several limitations, its estimates were the best that could be derived from the data available. For example, for the 2013 Threats Report, CDC only had data from a national hospital survey intended to produce estimates of all health care-associated infections and indirect estimates of the proportion of infections that were resistant. These data did allow CDC to calculate confidence intervals for infections by specific pathogens, but this information was not disclosed in the Threats Reports. Because the data sources were not intended for this purpose, the 2013 intervals were wide, from approximately 26 percent to 380 percent of the point estimates for each pathogen. CDC officials told us they elected not to include these ranges of uncertainties to avoid confusion in the 2013 Threats Report, because the report was intended for a variety of audiences, including the general public. Officials told us they planned to provide confidence intervals in an appendix of the 2019 Threats Report, but they did not. CDC officials explained that they elected not to include confidence intervals in the 2019 Threats Report because several publications are pending that provide more granular data for many of the estimates included in the report. It is thus unclear whether CDC plans to include any measures of uncertainties in future Threats Reports. Federal standards for agency dissemination of information it produces stipulate that when information products are disseminated, error estimates are calculated and disseminated to support assessment of the appropriateness of the uses of the estimates or projections. 
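As a simple illustration of the kind of error estimate these standards contemplate, the sketch below reports a point estimate of annual deaths together with a 95 percent confidence interval computed with a normal approximation. The point estimate and standard error shown are invented for illustration and are not CDC figures.

# Illustrative sketch only: presenting a point estimate with a 95 percent
# confidence interval (normal approximation) rather than a lower bound alone.
point_estimate = 40_000   # hypothetical annual deaths (point estimate)
standard_error = 3_500    # hypothetical standard error of the estimate

z = 1.96  # critical value for a two-sided 95 percent interval
lower = point_estimate - z * standard_error
upper = point_estimate + z * standard_error

print(f"estimated deaths per year: {point_estimate:,} "
      f"(95% confidence interval: {lower:,.0f} to {upper:,.0f})")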
Providing measures of uncertainties in antibiotic resistance estimates, such as standard errors or confidence intervals, as appropriate, in its Threats Reports would help CDC and others compare information within and across reporting efforts, without having to consult multiple documents over time. CDC and others could use this information to draw appropriate conclusions about the characteristics of antibiotic resistance in the United States, including limitations associated with reported findings and conclusions. Additionally, CDC does not have a plan for timely, regular issuance of their Threats Reports. It took CDC over 6 years to update the 2013 Threats report. CDC officials told us this length of time between reports was in part because, following issuance of the 2013 Threats Report, the agency was focused on implementing priority actions to improve antibiotic resistance surveillance data, including those efforts prescribed by the National Action Plan. In some cases, implementing these actions involved new data collection efforts that took time to establish, including that it can take up to 2 years to get new surveillance variables cleared by the Office of Management and Budget (OMB), CDC officials told us. In addition, CDC officials said it is time consuming to coordinate across the decentralized structure of antibiotic-resistance tracking at CDC to compile a consolidated report. However, lack of timely, regular updates may affect the information available to the public as well as policy-makers. For example, the 2013 Threats Report stated that there are at least 23,000 deaths a year as a direct result of antibiotic-resistant infections. The 2019 report stated the number of deaths each year to be at least 35,900 deaths a year. This report also revised the 2013 estimate from 23,000 to 44,000 deaths a year, suggesting a nearly two-fold revision to the initial 2013 estimate. CDC officials told us they would like to publish the report more frequently than every 6 years, and that it is reasonable they would develop such a plan for frequency of publication following the 2019 report. However, they said the agency does not currently have a plan for how often it will release future consolidated reports. CDC s attributes of efficient and effective public health surveillance systems include timely data dissemination for planning, implementing, and evaluating public health policies and programs. By developing a plan for more frequent dissemination of consolidated reporting on priority pathogens at regular intervals, CDC would have more timely trend data and other information necessary for users of the data, including policymakers, to prioritize, plan, implement, and evaluate public health actions to address antibiotic resistance. <2.1.3. Challenges in Tracking and Assessing the Global Threat> In October 2015, the World Health Organization (WHO) launched the Global Antimicrobial Resistance Surveillance System (GLASS). The objectives of GLASS are to foster national surveillance systems and harmonized global standards and estimate the extent and burden of antimicrobial resistance globally by selected indicators, among other things. As of November 2019, 86 countries were enrolled in GLASS, a 25 percent increase over 2018. Participants were in various stages of economic development (13 lower-income countries, 23 lower-middle- income countries, 17 upper-middle-income countries, and 33 high-income countries) and in all WHO regions. 
Seventy-five countries provided descriptive information on their surveillance systems for tracking antimicrobial resistance, and 57 countries provided resistance data for 2018. However, data on antibiotic resistance from the national surveillance systems of some countries are incomplete because of a lack of capability and resources for implementing standardized protocols, according to WHO officials. Moreover, most information on antibiotic-resistant infections is limited to laboratory test data and does not include epidemiological data, such as data on the patient and location, which could provide additional insight about the circumstances around the resistant infection. Also, a lack of a sampling strategy for the detection of cases that are antibiotic-resistant may bias the representativeness of the data and interpretation of results. Specifically, when case identification is done only on the population of patients that seeks medical care and is tested, or when testing of the population varies, such as across health care settings, the incidence and trends determined from this population may not represent the total population of concern.

Aggregated data reporting. Some countries report aggregated, rather than isolate- or infection-level, data to WHO's Global Antimicrobial Resistance Surveillance System (GLASS), a practice that WHO officials stated creates a challenge for data analysis and results interpretation. According to officials, such aggregation limits the statistical analysis that can be performed and limits analysis of factors such as the specific antibiotic-resistant bacteria, or the age or gender of the patient, among other things.

Surveillance is a complex function. Many different health care and public health professionals are involved in the multistep process for generating data, according to a WHO report on GLASS. According to WHO officials, obtaining the staff commitment and training needed to ensure high-quality data can pose a challenge to public health agencies and health care organizations.

As we noted above, CDC has worked with, and continues to work with, international partners to expand surveillance of antibiotic resistance abroad, including through U.S. participation in GLASS. For example, CDC has helped develop technical guidance for surveillance programs in other countries and has organized international forums for surveillance. CDC officials also told us portions of domestic surveillance systems' data collection include collection of patient travel history.

<3. Federal Agencies Have Helped Advance Diagnostic Tests and Promoted Their Use, but These Efforts Have Limitations>

Federal agencies have helped advance the development of new FDA-authorized tests and the use of existing tests for diagnosing antibiotic-resistant infections, but these efforts have limitations. Specifically, HHS and DOD have funded studies and taken other steps to advance testing, but they have not defined leadership, roles, and responsibilities to address a key barrier to the use of tests: a lack of clinical outcome studies. FDA has taken additional steps to advance testing; however, it has not regularly monitored test updates.

<3.1. Agency Efforts toward the Development and Use of Diagnostic Tests>

<3.1.1. HHS and DOD Have Funded the Development of New Tests>

HHS and DOD have awarded grants and contracts for the development of new FDA-authorized tests for diagnosing antibiotic-resistant infections.
Some of these awards address specific needs in the current availability of FDA-authorized tests, while others support more general research and development efforts. In addition, these agencies have taken steps to help reduce the chances of duplicative funding. According to experts, tests for antibiotic resistance not only help clinicians decide what antibiotics to use, they also provide important information for surveillance, including the number of cases of resistant infections in a population and the mechanisms of resistance. Specific needs include tests that can: Detect antibiotic-resistant gonorrhea, which was identified as an urgent threat, according to the 2013 Threats Report. Differentiate between viral and bacterial infections. Such a test would be useful primarily in preventing use of antibiotics for viral infections, which can contribute to the development of resistance in bacteria, among other things. HHS and DOD have awarded funding to address these needs. For example: CARB-X, a program supported by NIH and BARDA within HHS, has awarded funding to a company to develop a rapid test to both diagnose gonorrhea and test for antibiotic resistance. CARB-X is funding other companies to, among other things, develop rapid testing for identification of and resistance in bloodstream infections, including for some priority bacteria. In September 2016, NIH and BARDA announced the Antimicrobial Resistance Rapid, Point-of-Need Diagnostic Test Challenge. As of December 2019, there were five finalists, working on such projects as developing a rapid test to differentiate viral from bacterial infections and developing a test that can identify or detect antibiotic-resistant bacteria, including antibiotic-resistant gonorrhea. Within DOD, Defense Advanced Research Projects Agency officials told us that the agency used fiscal year 2015 funding on contracts for the development of rapid molecular tests for resistant gonorrhea and to distinguish between viral and bacterial infections. Federal agencies have also funded more general research and development efforts related to resistance testing. For example: NIH officials told us their agency has supported extramural projects related to the development of tests for antibiotic resistance by issuing grants and entering into contracts since fiscal year 2015. Separately from the Antimicrobial Resistance Diagnostic Challenge, BARDA entered into contracts with three organizations to develop tests focusing on the advanced stages of test development, including clinical trials, according to BARDA officials. Within DOD, the Defense Threat Reduction Agency is funding three projects, using Other Transaction Authority or direct funding to a DOD Service laboratory, to develop tests. Federal agencies have also taken steps to help reduce the chances of duplicative funding, including working with some international efforts to develop tests, according to agency officials. For example, NIH reviews current and pending support of key project personnel prior to issuing any research award, to help ensure NIH support complements support from other agencies and organizations. Similarly, officials from HHS's Office of Global Affairs worked during the creation and launch of the NIH-BARDA challenge and an analogous United Kingdom innovation foundation competition called the Longitude Prize to help ensure these programs were designed to support different aspects of needed diagnostics. <3.1.2.
HHS Has Funded Some Studies of Clinical Outcomes, but Has Not Clearly Identified Leadership, Roles, and Responsibilities> HHS has funded some studies to assess the extent to which testing patients to identify whether they have antibiotic-resistant infections leads to improved clinical outcomes, such as more effective treatment for patients or more judicious use of antibiotics. However, HHS has not identified relevant leadership, roles, and responsibilities among the HHS agencies that could fund such studies. Clinical outcome studies are important for encouraging the use of diagnostic tests for antibiotic resistance, among other things, because such studies can demonstrate the benefits of those tests. According to PACCARB, there is very limited information on why clinicians sometimes forgo diagnostic testing, but one possible explanation is that there may be limited data demonstrating the value of such testing. In the absence of such data, a clinician may choose to treat the patient immediately rather than using a test for antibiotic resistance that has unknown value. Research into the clinical outcomes associated with such testing could therefore be used to help promote the use of those resistance tests that are found to be beneficial. As a result, patient care could be improved and clinicians could be guided towards appropriate antibiotics to prescribe. Two HHS agencies have awarded grants for studies on the clinical outcomes of resistance testing, according to agency officials. For example, NIH provided grant support for a study that found, among other things, that using a rapid blood test for a range of potential bacteria and antibiotic resistance led to more judicious use of antibiotics. Similarly, officials from the Agency for Healthcare Research and Quality (AHRQ) stated that the agency is funding investigator-initiated grant studies to assess the impact of tests on antibiotic stewardship. However, agency officials only mentioned these and a few other examples of studies they have funded on clinical outcomes. International Needs for Diagnostic Tests for Antibiotic Resistance To better understand international needs for antibiotic resistance tests, we interviewed officials from international organizations and the Office of Global Affairs within the Department of Health and Human Services (HHS). A Public Health England official told us that United Kingdom users are not confident that these tests will have a clinical impact or be cost effective. Similarly, an official from a trade organization of British medical test manufacturers told us that the value of tests for antibiotic resistance needs to be captured and disclosed, especially because people are more willing to pay for treatment than for tests. However, other factors could also be important in determining which tests will be useful internationally. World Health Organization officials told us that they are working to determine what characteristics health care providers worldwide identify as key to making tests useful, so industry can develop such tests. They noted that tests designed for use in the United States may not be suitable for use in other countries. They also noted that laboratories in developing countries may not have the capacity to culture bacteria, so many need to use culture- independent tests. Office of Global Affairs officials told us that a big challenge is developing accessible tests for use internationally. Their ideal test would be inexpensive, rapid, and capable of point- of-care use. 
They noted that cost and usability are the barriers to test use, not technology, and that use of existing tests remains limited, including within the United States. Agency officials and experts agree that more needs to be done to evaluate clinical outcomes associated with use of diagnostic tests for antibiotic resistance. For example, in 2017, PACCARB reported that there is a lack of clinical and economic outcome studies showing that any diagnostic test could prevent the emergence of antibiotic-resistant bacteria and would be cost effective. Officials we interviewed from AHRQ, BARDA, CDC, FDA, and NIH all agreed with that PACCARB statement. Additionally, experts told us that such studies are lacking but important for advancing the use of tests. For example, one health care organization official told us the decision to adopt a test is based at least in part on whether there will be a clinical benefit. An infectious disease expert noted that to provide incentives for test use there needs to be some evidence that tests affect and improve care, but that most tests do not come with any evaluation of how they perform in practice. International organizations expressed similar opinions. One reason for the relatively low number of studies is that those agencies that could conduct or fund diagnostic outcome studies have not clearly identified leadership, roles, and responsibilities for doing so. Although they agree that more such studies are needed, they have not identified which agency or agencies should take the lead, and what the roles of the other agencies should be. Instead, agencies have offered differing views on what each agency could do. For example, BARDA officials told us their agency has not funded such studies because it generally does not play a role in test adoption. BARDA officials, as well as officials from DOD and NIH, said that CDC should play a role in funding or conducting the studies. However, CDC officials told us that a lack of resources has prevented their agency from doing so, and that the responsibility should fall at least partly on BARDA. Our previous work shows that key practices for interagency collaboration include identifying a lead agency (or, if leadership is shared, clearly identifying roles and responsibilities among the lead agencies), as well as clarifying the roles and responsibilities of all participating agencies. By taking these actions, agencies including AHRQ, BARDA, CDC, FDA, and NIH could more effectively address the need for clinical outcome studies. Those studies, in turn, could help demonstrate the value of diagnostic tests for antibiotic resistance, potentially increasing their use, improving patient care, and enhancing stewardship efforts. <3.1.3. CMS and FDA Have Taken Steps to Advance the Use of Tests, but Experts Have Identified Challenges with Payments> CMS and FDA have taken some steps to advance the use of tests, including those to identify antibiotic-resistant bacteria. For example, FDA established a Payor Communication Task Force, which helps facilitate communication between test manufacturers and payors. Such communication is important because payors decide whether tests will be covered by insurance, among other things. According to an FDA web page, by communicating with payors, test manufacturers could, for example, learn what data payors need to approve a test for coverage and then use this information to design clinical trials to provide that information. 
This process could reduce the time between when a test is cleared or approved by FDA and when it is covered. A similar step FDA and CMS took to advance the use of tests was to extend the Parallel Review program indefinitely, a move they announced in 2016. This program established a mechanism for FDA and CMS to simultaneously review clinical data, with the aim of reducing the time between FDA's approval and CMS's decision on whether to pay for the test. Experts told us challenges remain with test payments that may result in lower test use. For example, a PACCARB report states that, currently, payment for many diagnostic tests is not aligned with the value of the test, and noted that supplementing payments for tests could drive test development and use. BARDA officials also told us that a major factor affecting adoption of new tests is the cost of the test relative to reimbursement. Additionally, experts, including those at our meeting, told us that test payments remain insufficient to encourage broad test use. For example, two experts from our meeting said that there is not always a clear link between the medical value of a test and the payment level for that test. One of these experts added that their laboratory decided not to adopt a test because low payment levels relative to costs made doing so a money-losing proposition. Three other experts we interviewed agreed that disparities between cost and payment can discourage test adoption. Regarding federal payments for tests through Medicaid and Medicare, there are limits to CMS's ability to address any disparities. For example, CMS officials told us the payments for some tests are based on a weighted median of private-payor rates pursuant to the Protecting Access to Medicare Act of 2014, so CMS cannot specify the methodology used to set those rates. Further, for inpatient tests, Medicare pays hospitals a single, bundled payment per patient stay, which is based on multiple factors, including the patient's diagnosis and treatment strategy, rather than on a specific service. As such, a separate payment for individual tests is not made under Medicare. <3.2. FDA Efforts to Advance the Development of New Tests> <3.2.1. FDA Has Taken Steps to Speed the Development of Tests for Newly Approved Antibiotics> FDA has taken steps toward the development of FDA-authorized tests for resistance to newly approved antibiotics, a process that currently can take months to years, according to experts and agency officials. The delay stems in part from the need for a critical testing threshold known as a breakpoint, the threshold that is used to help a clinician decide whether or not a pathogen is resistant to the antibiotic (see text box). The breakpoint of a new antibiotic is generally finalized only when FDA has approved the antibiotic. This means that breakpoints may often not be available to test manufacturers until after a new antibiotic is FDA-approved. As a result, test manufacturers generally may not be able to complete development of FDA-authorized culture-based tests for resistance to a specific antibiotic until after the antibiotic is commercially available, meaning that such tests may be delayed even after the new antibiotic is approved by FDA. This delay could affect the ability of clinicians to treat patients.
For example, according to an expert, such a delay could lead to underuse of a newly available antibiotic, among other things, because a clinician may not be willing to prescribe the antibiotic without test results to guide treatment. How Breakpoints Are Used to Interpret Tests According to officials from the Food and Drug Administration (FDA), breakpoints, also referred to as susceptibility test interpretive criteria, are used to define susceptibility and resistance to antibiotics to help guide patient care. Culture-based tests rely on breakpoints to provide a determination of resistance to clinicians. In the United States, breakpoints (based on clinical or microbiological data) are established by standards-development organizations such as the Clinical and Laboratory Standards Institute (CLSI) and FDA. One example of how breakpoints are used involves the Kirby-Bauer disk diffusion test. This test is conducted by spreading bacteria on a laboratory agar plate containing bacterial nutrients, and then placing paper disks containing a known amount of antibiotics on the lawn of bacteria. Plates are observed after overnight incubation to determine the extent of bacterial growth. Closer to the disk, there is a higher concentration of antibiotic, and the concentration declines with distance. Around most disks, there is a zone of inhibition, where the concentration of antibiotic is too high for bacteria to grow. After allowing the bacteria to grow for a defined period of time, the diameter of the zone of inhibition is measured in millimeters. The procedure for assessing antibiotic resistance using breakpoints is then as follows: If the diameter is larger than or equal to the breakpoint, then the strain of bacteria is considered susceptible to the antibiotic, suggesting that the antibiotic can be used to treat infections caused by that strain. If the diameter is smaller than the breakpoint, then the strain is considered resistant, suggesting that the antibiotic should not be used. According to FDA, in most cases, there is a range of intermediate or susceptible dose-dependent diameters for which treatment might be effective. Other types of culture-based diagnostic tests for resistance have analogous breakpoints for interpreting the test. For example, the minimum inhibitory concentration, the lowest concentration of an antibiotic that prevents growth of bacteria, can be compared to a breakpoint to establish whether the bacteria are considered resistant (see the illustrative sketch following this discussion). In addition to antibiotic developers waiting until FDA approves an antibiotic before a breakpoint is finalized, there are technical hurdles in developing a test for some new antibiotics, according to FDA officials. For example, it may be challenging for certain automated test manufacturers to address unique growth properties of certain bacteria in the presence of specific antibiotics or combinations of antibiotics. According to a test manufacturer, these hurdles include the need for additional studies, and such studies may not be straightforward because of the need to determine what clinical data FDA requires. In addition, in the case of automated tests, a representative from a test manufacturer association told us the software used to run and interpret a new test needs to be revised, which can be time consuming. The delay between approval of an antibiotic and the availability of a test for resistance could result in suboptimal treatment and increase burdens on the health care system.
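To make the breakpoint comparison described in the text box concrete, the following is a minimal sketch of how laboratory software might apply that logic. It is illustrative only and is not drawn from FDA, CLSI, or any manufacturer's implementation: the function, parameter names, and breakpoint values are hypothetical assumptions, and real interpretive criteria are specific to each antibiotic and organism combination.

```python
# Minimal illustrative sketch; the breakpoint values below are hypothetical,
# not CLSI- or FDA-recognized interpretive criteria.

def interpret_disk_diffusion(zone_diameter_mm, susceptible_breakpoint_mm, resistant_breakpoint_mm):
    """Classify a Kirby-Bauer result by comparing the measured zone of
    inhibition (in millimeters) against breakpoints: a diameter at or above
    the susceptible breakpoint suggests the antibiotic can be used, a
    diameter at or below the resistant breakpoint suggests it should not be,
    and values in between fall in an intermediate range."""
    if zone_diameter_mm >= susceptible_breakpoint_mm:
        return "susceptible"
    if zone_diameter_mm <= resistant_breakpoint_mm:
        return "resistant"
    return "intermediate"

# Hypothetical example: the same 19 mm measurement is interpreted differently
# when software still relies on an older (out-of-date) breakpoint table.
current_call = interpret_disk_diffusion(19, susceptible_breakpoint_mm=21, resistant_breakpoint_mm=17)
outdated_call = interpret_disk_diffusion(19, susceptible_breakpoint_mm=18, resistant_breakpoint_mm=14)
print(current_call, outdated_call)  # prints: intermediate susceptible
```

The same comparison applies to minimum inhibitory concentration tests, with the inequalities reversed (lower concentrations that inhibit growth indicate susceptibility). The contrast between the two calls in the sketch also illustrates why out-of-date breakpoints, discussed later in this report, can change how the same laboratory measurement is reported.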
For example, one expert stated that during this delay, laboratories need to create or modify tests and then validate those tests instead of using a FDA-authorized test, which increases the time required and places demands on facility personnel and budgets. This expert added that to conduct validation studies, the laboratories need a variety of samples for testing, called isolates, which may not be available. A second expert said that the delay leads to both overuse and underuse of the new antibiotic: in the absence of a test, some clinicians will prescribe the antibiotic when it may be inappropriate, leading to overuse; some other clinicians refrain from prescribing the antibiotic, even if appropriate, leading to underuse. To help address this delay, FDA has created a process known as coordinated development, whereby test manufacturers can submit a coordinated development plan to FDA describing the test manufacturer s intent to coordinate with the antibiotic manufacturer. The plan is submitted prior to, or shortly after, submission of an application to market a new drug. Under the coordinated development program, FDA shares breakpoint information from the antibiotic manufacturer with a prospective test manufacturer. It then reviews the test application at the same time as the antibiotic application and takes other steps to facilitate more timely clearance of the test. FDA officials told us this process has significantly reduced the delay between approval of the antibiotic and clearance of the test. Another FDA step to help test manufacturers speed development of tests is the establishment, in collaboration with CDC, of a centralized repository of bacterial strains with well-characterized antibiotic resistance profiles. These strains are available to test manufacturers and others to help them design, validate, and evaluate tests by checking that they give the correct results for bacteria whose profile of antibiotic resistance is known. Finally, FDA officials also said that they offer pre-submission advice, whereby a test manufacturer can ask for initial guidance on the design of clinical studies for their tests. <3.2.2. FDA Has Taken Steps to Improve Breakpoint Recognition> In the United States, breakpoints are established and updated by organizations such as the Clinical and Laboratory Standards Institute (CLSI). After CLSI establishes a breakpoint, FDA may review and recognize the breakpoint, according to FDA officials. Test manufacturers rely on breakpoints recognized by FDA to support marketing authorization of their tests. An expert who works for CLSI identified more than 50 breakpoints that have not been recognized by FDA, and for which CLSI considers FDA recognition important in order to help make FDA-authorized tests available. Experts, including one from our meeting, cited the following examples of breakpoints needing recognition: CDC recommends a dual therapy of antibiotics azithromycin and ceftriaxone to be taken together to treat gonorrhea. However, FDA does not recognize any azithromycin breakpoints for N. gonorrhoeae, which an expert from our meeting told us could be a barrier to developing FDA-authorized culture-based tests for N. gonorrhoeae resistance to the recommended dual therapy. Colistin is an antibiotic used in hospitals because of its efficacy against carbapenem-resistant bacteria, according to one manufacturer of a test for colistin resistance. 
This manufacturer markets its test in many countries but not in the United States, because FDA does not recognize colistin breakpoints. FDA has taken some steps to address unrecognized breakpoints, which are a potential barrier to developing some tests for antibiotic resistance. For example, FDA officials told us that the agency conducts regular internal reviews of breakpoints. According to FDA officials, the agency reviewed the 2019 CLSI breakpoint standards and updated FDA s website with changes to recognized breakpoints as of June 2019. FDA has been posting such updates since December 13, 2017. FDA also accepts public comments requesting the recognition of new breakpoints, according to agency officials. However, we found there was some confusion between CLSI officials and experts and the FDA involving the number of comments FDA could review each year, which FDA later clarified on its website. One expert at our meeting later told us that CLSI adjusts its process for submitting comments based in part on their understanding of FDA s communication. This expert added that FDA making a public commitment to a specific number of comments they would review would help CLSI improve its planning. FDA officials told us there is no legal requirement for FDA to communicate the number of comments the agency can review, but that in previously published notices of opportunities for public comments, there was nothing that indicated there would be limits. However, after we informed FDA officials of concerns by experts regarding the number of comments FDA could review, FDA updated their webpage to clarify that they will review all submitted comments. <3.2.3. FDA Has Taken Limited Steps to Monitor Use of Updated Breakpoints> FDA has taken limited steps to monitor whether FDA-authorized tests are using new breakpoints after these breakpoints are updated and accepted by FDA. Because bacteria can develop increasing resistance to antibiotics, it is sometimes important to change the breakpoints used for determining whether or not bacteria are resistant to a given antibiotic. Using tests with out-of-date breakpoints could result in misidentifying a resistant infection as non-resistant, which can lead to treating a patient with an ineffective antibiotic and the further spread of the infection. FDA officials told us the agency has taken limited steps to monitor the status of breakpoint updates, and that out-of-date breakpoints being used in tests should be a rare occurrence. In contrast, a CDC official told us that keeping tests updated is a significant concern. This official cited the example of carbapenem- resistant Enterobacteriaceae infection, which triggers specific procedures to limit the spread of these bacteria. If the test breakpoint is out of date, the infection may not be detected in a timely manner, and the pathogen could spread broadly as a result. A recent study looking at hypothetical scenarios in one U.S. county estimated that a 32-month delay in updating tests to match CLSI breakpoints for carbapenem-resistant Enterobacteriaceae would have resulted in an average of almost 2,000 additional carriers of these bacteria county-wide. Additionally, an expert told us that use of out-of-date breakpoints could lead to improper patient care, improper surveillance reporting, and slower detection of emerging resistance. However, the true impact of this issue is challenging to discern (see text box). 
The Extent of Any Negative Effects of Out-of-Date Breakpoints on Public Health Is Unclear Experts and agency officials voiced a range of opinions on the public health effects of tests with out-of-date breakpoints. For example, one Centers for Disease Control and Prevention (CDC) official told us that despite the lack of breakpoint updates, cases of a type of carbapenem-resistant Enterobacteriaceae were likely ultimately caught by hospitals because a second test was used by all but a small number of hospitals. One expert stated that how quickly test breakpoints are updated is less important when deciding what test to adopt than other factors, such as ease of use. However, another expert noted that laboratories addressing emerging threats may feel the need to use non-Food and Drug Administration (FDA) cleared tests, because they are aware that FDA-cleared tests may not be updated as quickly as needed. Test updates may be an issue for smaller laboratories, which do not have dedicated personnel keeping track of breakpoint revisions, Department of Veterans Affairs officials told us. FDA officials told us that because manufacturers are strongly motivated to keep their tests current, only a few tests have out-of-date breakpoints. However, the only confirmation FDA officials offered for this statement was to mention an unofficial internal survey of FDA s database of existing tests, conducted in March 2019, which concluded that all FDA-authorized tests had implemented breakpoint updates made since December 13, 2017. They said this survey is not conducted regularly. They also stated that it is possible that some tests have not been updated to reflect breakpoint updates made prior to December 13, 2017, but that FDA is unaware of any such tests that also pose a public health threat. To assess the extent to which there are FDA-authorized tests using out- of-date breakpoints, we spoke with experts and stakeholders and reviewed studies they identified. We identified several FDA-authorized tests with breakpoints that were changed nearly a decade ago. Some of these tests could be used for diagnosing infection with carbapenem- resistant Enterobacteriaceae, which CDC identified as an urgent threat. One manufacturer told us that one of their tests has not been updated with new breakpoints nearly 10 years after a breakpoint revision. FDA officials acknowledged it is possible some FDA-authorized tests might continue to rely on outdated breakpoints. Further, in 2019, a scientific article listed four different test manufacturers offering tests that have not been fully updated to reflect revised breakpoints, including some affecting antibiotics for some types of carbapenem-resistant Enterobacteriaceae. Finally, CDC officials told us they asked hospital laboratories in a survey for 2017 and 2018 if they had updated their tests to reflect revisions in breakpoints for carbapenem-resistant Enterobacteriaceae that were implemented in 2010. According to CDC, nearly 1,000 of over 5,000 responding hospital laboratories had not implemented the revised breakpoints, and, of these, over 85 percent were using FDA-authorized tests. One CDC official stated that there is significant concern for patient safety associated with out-of-date breakpoints, and another said that there are few justifications for failing to update the tests after 8 years. 
FDA officials told us they have not received reports of suspected device-associated deaths, serious injuries, or malfunctions that are specific to out-of-date carbapenem-resistant Enterobacteriaceae breakpoints in FDA-authorized tests using such breakpoints. The officials added that it is possible to detect carbapenem-resistant Enterobacteriaceae under certain situations, even if the test had an out-of-date breakpoint for a given antibiotic against these bacteria. However, FDA does not know the actual negative effect, if any, of out-of- date breakpoints because it does not know how many FDA-authorized tests rely on such breakpoints. Since December 2017, FDA has conducted one unofficial survey of tests to assess breakpoint updates that was limited in scope and is not a regular event. Other than that, FDA is relying on market incentives to drive manufacturers to make sure their devices are updated. According to FDA and others, the extent of the problem is not clear. However, PACCARB identified updating test breakpoints as an important issue in a 2017 report. Additionally, one of the sub-objectives in the National Action Plan notes that rapid updating of breakpoints is essential to provide accurate information to guide appropriate drug treatment. Finally, the Standards for Internal Control in the Federal Government directs management to establish and operate monitoring activities to monitor its internal control systems and evaluate the results. In this case, monitoring and evaluation of the status of breakpoint updates in FDA-authorized tests could help FDA identify and address the National Action Plan sub-objective, as well as a strategic priority in the mission statement of its Center for Devices and Radiological Health: FDA assures that patients and providers have timely and continued access to safe, effective and high-quality medical devices. FDA officials said they do not believe the issue is a significant problem, but the agency has also not regularly evaluated any effects of using tests for antibiotic resistance with out-of-date breakpoints. FDA officials stated that there may be resource constraints to their ability to conduct regular monitoring and evaluation. By regularly monitoring and evaluating FDA-authorized tests, FDA would be better positioned to determine the extent of tests relying on out-of-date breakpoints and may be better positioned to provide assurance that patients and providers have timely access to safe and effective tests. Furthermore, by regular monitoring, FDA would be able to determine whether test manufacturers are updating breakpoints as needed, and help ensure that patient care and infection control efforts are effective. <4. Federal Efforts Have Not Fully Addressed Challenges to Developing New Treatments for Antibiotic-Resistant Infections> Experts, federal officials, and antibiotic developers have identified economic and other challenges to developing new antibiotics. Federal agencies, including HHS and DOD, have engaged in efforts to address some of the challenges; however, experts said these efforts are not sufficient and that additional federal incentives are needed to encourage the development of new antibiotics. <4.1. Economic and Other Challenges to Developing New Treatments Exist> Experts are concerned about a void in the discovery of new antibiotic classes and the current pipeline of antibiotics in development. 
According to The Pew Charitable Trusts, a nonprofit public policy organization that tracks the pipeline of antibiotics, no new classes of antibiotics approved for human use have been discovered since 1984. In addition, experts are concerned that the number of antibiotics in clinical development is insufficient to meet the threat of antibiotic resistance. For example, according to The Pew Charitable Trusts, only 42 antibiotics were in clinical development globally (meaning clinical trials were being conducted to test their safety and efficacy in humans) as of June 2019, and only 24 of them targeted bacteria on CDC's or WHO's priority lists. The authors of a recently published analysis found that the pipeline of antibiotics that target gram-negative bacteria is dominated by derivatives of existing classes of antibiotics and does not sufficiently address the problem of extensively drug-resistant gram-negative bacteria. Experts and antibiotic developers also cited economic challenges, including the high cost of developing a new drug. For example, one study estimated the average cost per new molecular compound that received FDA approval between 2005 and 2013 to be $1.4 billion. See J. A. DiMasi, H. G. Grabowski, and R. W. Hansen, Innovation in the Pharmaceutical Industry: New Estimates of R&D Costs, Journal of Health Economics, vol. 47 (2016): pp. 20-33. Other studies suggest lower development costs. For example, another study estimated a median cost to develop cancer drugs of $0.6 billion. See V. Prasad and S. Mailankody, Research and Development Spending to Bring Single Cancer Drug to Market and Revenues After Approval, JAMA Internal Medicine, vol. 177, no. 11 (2017): pp. 1,569-1,575. In addition, treatments for antibiotic-resistant infections have a narrow set of patients for whom the treatment would be appropriate. As a result of the perceived poor return on investment, many large pharmaceutical companies have discontinued their antibiotic development in recent years. In 2018, according to The Pew Charitable Trusts and other published sources, four large pharmaceutical companies worldwide had antibiotics in clinical development, compared to 1990, when 18 were involved in antibiotic R&D. Two antibiotic companies declared bankruptcy in 2019; in the case of one, the company filed for bankruptcy only 10 months after its antibiotic, which targets resistant bacteria, received FDA approval. The majority of antibiotics in the development pipeline are being developed by smaller companies that do not have other drugs on the market to help cover their R&D costs. However, representatives from three small antibiotic developers we spoke with noted that their field is struggling because it is difficult to raise funds from private investors due to the low return on investment potential. Enrolling patients with bacterial infections into certain clinical trials prior to initiating treatment can also be difficult, due to a lack of available rapid diagnostic tests to identify the type of infection and the urgent need to begin treatment immediately for acute infections. According to FDA officials, this is problematic for clinical trials because any prior treatment could obscure the true efficacy of the drug under investigation. Recognizing this often unavoidable issue, FDA has issued guidance giving antibiotic developers additional, but limited, flexibility with their clinical trial protocols in certain cases. Superiority trials, which aim to show that the drug being investigated is more effective than an existing drug.
Non-inferiority trials, which aim to demonstrate that the difference between the effectiveness of the drug being investigated and an existing drug is small enough to show that the drug being studied is also effective. Typically, there are three phases of clinical trials, with the sizes of the trials increasing with each phase. FDA generally prefers that when conducting clinical trials, developers demonstrate the effectiveness of a new drug by showing its impact on a clinical endpoint, a direct measure of how a patient feels, functions, or survives. FDA also accepts surrogate endpoints, which are laboratory measures or physical signs used as a substitute for a clinical endpoint that reasonably predict a clinical benefit. Demonstrating superiority. Antibiotic developers told us that, for most antibiotics, it is difficult to conduct superiority clinical trials and more feasible to conduct non-inferiority trials, because the latter allow for smaller enrollment. (See side bar for an explanation of clinical trial types; a schematic formulation of the two designs appears at the end of this discussion.) They told us that the inability to demonstrate their drug's superiority limits their ability to market the drug, because it can be difficult to convince purchasers (e.g., hospitals) to choose the newly approved antibiotic over existing antibiotics, especially when the new antibiotic is significantly more expensive. Gaining approval for multiple indications. FDA generally approves drugs for a specific indication; therefore, antibiotic developers told us they tend to design their clinical trials around common infection types, largely because of the relative ease of enrolling patients. However, some antibiotics can treat infections in multiple parts of the body, which may not have been studied in a clinical trial. While providers are able to prescribe drugs for off-label use, that is, for a condition or patient population for which the drug has not been approved, they may lack information on the safety and efficacy of the drug for such use. In addition, such off-label use may not be reimbursed by the patient's insurance. According to The Pew Charitable Trusts, there were 29 nontraditional antibacterial products in clinical development for the U.S. market in June 2019. Among the 29 products in the pipeline, nine were antibodies, seven were vaccines, seven were live biotherapeutic products, and six were other types of products. No bacteriophages were in clinical development. More than half of these products are for the treatment of Clostridioides difficile or Staphylococcus aureus infections. Experts, antibiotic developers, and federal officials also said it is scientifically challenging to develop new antibiotics that can overcome existing mechanisms of resistance. One expert at our meeting explained that it is necessary to develop an antibiotic that works differently than existing antibiotics so that bacteria are not resistant to it. In particular, experts and federal officials have noted that it is challenging to develop antibiotics that can kill certain types of bacteria, called gram-negative bacteria, largely due to their double membrane, which makes it difficult for antibiotics to enter the bacterial cell, and to pumps that can remove the drug once it enters. Three antibiotic developers we spoke to explained that as bacteria continue to evolve new ways to resist antibiotics, it is difficult for scientists to keep pace by developing new treatments that can overcome those mechanisms. In addition, experts noted that scientists have already discovered most of the antibiotics from known sources, such as soil.
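As a reference point for the superiority and non-inferiority designs defined in the side bar above, the following is a schematic statistical formulation. It is a generic, textbook-style sketch rather than language taken from FDA guidance; the symbols (response rates $p_N$ and $p_E$ and the margin $\delta$) are illustrative assumptions.

\[
\text{Superiority:} \quad H_0\colon p_N \le p_E \quad \text{versus} \quad H_1\colon p_N > p_E
\]
\[
\text{Non-inferiority:} \quad H_0\colon p_N - p_E \le -\delta \quad \text{versus} \quad H_1\colon p_N - p_E > -\delta
\]

where $p_N$ and $p_E$ are the response (for example, cure) rates for the new and existing antibiotic, and $\delta > 0$ is a prespecified non-inferiority margin. In this formulation, a non-inferiority trial only needs to rule out that the new drug is worse than the existing drug by more than the margin $\delta$, rather than demonstrate an improvement, which is consistent with the developers' statement above that such trials allow for smaller enrollment.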
As a result, scientists are now exploring new sources of chemicals with antibiotic properties, such as insects. As the rate of antibiotic discovery has slowed, scientists have also begun to explore alternatives to traditional antibiotics, which we call nontraditional products in this report. Many types of nontraditional products are currently being researched and developed to treat antibiotic-resistant infections, including, among others, live biotherapeutic products, antibodies, and bacteriophages. For example, one type of nontraditional product in use for the treatment of recurrent C. difficile-associated disease (which causes diarrhea, abdominal cramps, and an estimated 15,000 deaths in the United States each year, according to CDC) is fecal microbiota for transplantation, more commonly known as fecal transplants. (See text box.) However, scientists and companies researching and developing certain types of nontraditional products face development challenges. For example, according to a paper written by BARDA officials and others, certain types of nontraditional products target only one or a few types of bacteria, which makes enrollment of patients in clinical trials difficult and potentially cost-prohibitive. The authors also stated that additional research is needed to evaluate side effects and measure the efficacy of some types of nontraditional products. According to another published paper, more than half of the nontraditional products in development are intended to be used concurrently with a traditional antibiotic, and it can be difficult to demonstrate the additional clinical benefit of adjunctive therapies in clinical trials. The authors also noted that additional clinical trial endpoints still need to be developed and validated for such nontraditional products. Fecal Transplants The goal of a fecal transplant, which involves collecting stool from healthy donors and transferring it to patients via enema, oral capsule, or another modality, is to restore a healthy gut microbiome for recipients. According to the National Institutes of Health (NIH), multiple research studies have indicated that these transplants are effective, but their long-term safety has not been established. Questions remain about the Food and Drug Administration's (FDA) policy regarding stool banks that collect, prepare, and distribute fecal transplant products. FDA issued guidance in 2013 indicating its intention to exercise enforcement discretion regarding Investigational New Drug requirements for the use of fecal transplants to treat Clostridioides difficile infections, provided that the treating physician obtained adequate consent from the patient or his or her legally authorized representative. In other words, FDA's guidance indicated it would not require fecal transplant products to satisfy the Investigational New Drug requirements, which refer to the requirements for FDA's approval before beginning clinical trials to test a product on humans. [FDA, Enforcement Policy Regarding Investigational New Drug Requirements for Use of Fecal Microbiota for Transplantation To Treat Clostridium difficile Infection Not Responsive to Standard Therapies; Guidance for Industry; Availability, 78 Fed. Reg. 42965 (Jul. 18, 2013).] However, FDA later issued draft guidance in 2016 stating that FDA did not intend to extend enforcement discretion with respect to the Investigational New Drug requirements applicable to stool banks distributing fecal products.
[FDA, Enforcement Policy Regarding Investigational New Drug Requirements for Use of Fecal Microbiota for Transplantation To Treat Clostridium difficile Infection Not Responsive to Standard Therapies; Draft Guidance for Industry; Availability, 81 Fed. Reg. 10632 (Mar. 1, 2016).] FDA has not finalized the 2016 draft guidance, which leaves the final guidance from 2013 as the current policy. According to FDA, the agency received many comments from patients and industry groups in response to the 2016 draft guidance expressing concern about the effect that the requirement for clinical trials would have on access to these products. In March 2019, FDA officials told us they were still reviewing comments to the 2016 draft guidance and were unable to say whether or not it would be finalized. In November 2019, FDA held a public hearing to obtain further input on the use of fecal transplants to treat C. difficile infection not responsive to standard therapies and to better understand the effect of FDA s enforcement policy on product development. <4.2. Federal Agencies Have Made Some Progress toward Addressing Treatment Development Challenges> Multiple federal agencies have supported the development of new antibiotic treatments, including providing funding for antibiotic R&D, issuing guidance related to antibiotic clinical trials, and implementing Medicare payment mechanisms. Agencies have made available both push incentives, which directly support antibiotic R&D, and pull incentives, which offer financial benefit, either directly or indirectly, to developers of successful antibiotics after they reach the market. Federal funding for antibiotic R&D. Several federal agencies award grants or contracts, create public-private partnerships, or use other approaches to provide researchers the funding for R&D of new treatments for antibiotic-resistant infections (see table 3). This type of pre- market R&D support is considered a push incentive. See appendix III for additional examples of efforts to support antibiotic R&D by NIH and DOD. Among the products in the CARB-X portfolio, 12 would represent a new antibiotic class (if approved) and 14 target a novel molecular bacterial target. Awardees were based in six countries. Issued guidance to support clinical trials. FDA has implemented programs and issued guidance that help address some regulatory challenges and encourage antibiotic development. In 2012, through the Generating Antibiotic Incentives Now provisions of the Food and Drug Administration Safety and Innovation Act, Congress created the Qualified Infectious Disease Product (QIDP) designation. Drugs that FDA designates as QIDPs, which include antibiotics and antifungals, may qualify for 5 years of additional exclusivity and fast-track or priority review designation during the FDA review process. The additional exclusivity conferred to QIDP designees is a type of pull incentive, because it offers the potential for enhanced financial gain after a drug receives FDA approval and reaches the market. According to FDA officials, as of September 2019, FDA had granted 192 QIDP designations, 24 of which it has approved for marketing. Also in response to the Generating Antibiotic Incentives Now Act, FDA released final guidance in August 2017 to streamline clinical development of antibiotics for patients with an unmet medical need that is, those with a serious bacterial disease that has few or no treatment options. 
FDA explains in this guidance that it may consider drugs for these patients that have higher risks than would be acceptable for a broad patient population and provides information on types of antibiotics that could be eligible for approval based on smaller, shorter, or fewer clinical trials (as few as only one). The 21st Century Cures Act required FDA to establish a Limited Population Pathway for Antibacterial and Antifungal Drugs (LPAD). In June 2018, FDA issued draft LPAD guidance, as required by the Act. Under LPAD, eligible products (drugs and biologics intended to treat a serious or life-threatening infection in a limited population of patients with unmet needs) may follow a streamlined development program, similar to the approaches described in its earlier unmet medical need guidance. A biotechnology association noted in its public comments to the draft LPAD guidance the need for FDA to issue additional guidance to clarify its expectations for acceptable types of efficacy data when clinical trials are small and to clarify its interpretation of a limited population of patients for the purpose of the LPAD pathway. An expert who attended our meeting later told us there is a great need to address how to develop narrow-spectrum antibiotics (those designed to treat a single or small number of bacterial pathogens) using LPAD. FDA held a public meeting in July 2019 to solicit stakeholder comments on the draft LPAD pathway guidance, and FDA officials told us they expect to finalize the guidance by February 2020. However, as of March 17, 2020, FDA had not yet issued final guidance. In addition to issuing guidance, and to help inform future guidance, FDA engages with industry stakeholders to discuss and identify possible solutions to challenges related to the clinical development of antibiotics and nontraditional products. For example, FDA has held multiple public workshops, including one in November 2019 with experts from NIH's National Institute of Allergy and Infectious Diseases, the Infectious Disease Society of America, and The Pew Charitable Trusts to better understand the current state of antibiotic clinical trials in the United States, and how to enhance enrollment and research in these trials. FDA officials told us they believe it is too early to issue guidance that would be broadly applicable and useful to nontraditional product developers. They explained that for certain types of nontraditional products, the approaches and specifics of product development are varied and evolving quickly. Instead, FDA's Center for Biologics Evaluation and Research has a program in place that allows developers to meet with FDA prior to beginning clinical trials to obtain advice on a wide range of development-related topics. Implemented Medicare payment mechanisms. CMS uses Medicare payment mechanisms to help increase reimbursement to hospitals for certain antibiotics. For qualifying antibiotics, these payments are a form of indirect pull incentive because they have the potential to increase the demand for the new antibiotics after they reach the market, which could in turn improve their financial performance. Beginning in fiscal year 2020, CMS updated how it will pay hospitals for treating Medicare patients who have an antibiotic-resistant infection. Specifically, CMS changed the eligibility criteria and payment amount for antibiotics that qualify for new technology add-on payments and how it pays hospitals for treating Medicare patients with antibiotic-resistant infections.
These payment changes are: Revised eligibility criteria for and amount of add-on payments. New technology add-on payments provide hospitals with additional compensation for a period of 2 or 3 years when they use qualifying new technologies or drugs that offer substantially improved clinical treatment, and when regular Medicare payments for the hospital stay are inadequate to cover the cost of the new technology or drug. Generally, medical services and technologies must be new and must demonstrate a substantial clinical improvement over existing services or technologies to receive the additional payment. However, CMS has acknowledged the difficulty antibiotic developers face in demonstrating such substantial clinical improvement due to manufacturers seeking FDA approval for most antibiotics on the basis of noninferiority clinical trials, as described above. To make it easier for antibiotics to qualify for the additional payments, under the revisions to the CMS payment policy beginning in fiscal year 2021, CMS will consider all antibiotics with a QIDP designation from FDA to be new for purposes of the add-on payment, and these antibiotics will not have to meet the substantial clinical improvement criteria. In addition, CMS has increased the amount of the temporary add-on payment for qualifying antibiotics. Prior to this change, the add-on payments for qualifying antibiotics were limited to 50 percent of the cost of the drug. Under the new policy, the payment percentage increased to a maximum of 75 percent of the cost of the drug. CMS has specified that two antibiotics are eligible for new technology add- on payments in fiscal year 2020. Increased payment for hospital stays. CMS changed the severity level designation for certain antibiotic resistance-related diagnosis codes, in recognition of the added clinical complexity and cost of treating patients with antibiotic resistance. This change in severity level can result in higher payments to hospitals when treating patients diagnosed with antibiotic resistance, which, according to the Administrator of CMS in an August 2019 blog post, will create financial flexibility for physicians to prescribe the appropriate new antibiotics. The Administrator also noted that CMS made this policy change because it recognized that new technology add-on payments are temporary and further action was needed to realign financial incentives for antibiotics for the long-term. See appendix III for additional examples of efforts to support antibiotic R&D by these and other federal agencies. <4.3. Federal Efforts Have Not Fully Incentivized Antibiotic Development, and HHS Lacks a Strategy to Develop New Incentives> Experts and antibiotic developers told us that the economic challenges have remained despite the available federal push and pull incentives for antibiotic R&D. Currently available premarket push incentives include grants and awards from NIH and BARDA that fund antibiotic R&D; currently available postmarket pull incentives include the additional market exclusivity available through QIDP designation and Medicare add- on payments for antibiotics. (See fig. 3.) Both of the antibiotic companies that declared bankruptcy in 2019 had received push incentives from BARDA and pull incentives through Medicare New Technology Add-on Payments and the QIDP 5-year extension of market exclusivity. 
While experts at our meeting and antibiotic developers told us that push incentives have been helpful, they also said push incentives alone are not sufficient to sustain antibiotic development. For example, two antibiotic developers we spoke with explained that push incentives have provided needed funding for conducting R&D, but said that push incentives will not help cover the costs they will incur after their drug reaches the market, for example, to manufacture and market their product. Experts and antibiotic developers have indicated that the effects of the existing pull incentives (QIDP market exclusivity and Medicare add-on payments) on stimulating development of new antibiotics have been limited for the following reasons: QIDP and market exclusivity. As we previously reported, several pharmaceutical companies told us that the market exclusivity incentive may not stimulate the development of new antibiotics, because the extension is unlikely to extend past the typical patent life of a new drug. In addition, a representative from The Pew Charitable Trusts said that, while the passage of the Generating Antibiotic Incentives Now Act initially bolstered private investments in antibiotics, it did not ultimately stabilize the pipeline of antibiotics in development, noting that since then, several large pharmaceutical companies have discontinued their antibiotics R&D programs. Medicare updates to hospital payments. While CMS recently increased new technology add-on payments for certain antibiotics beginning in fiscal year 2020 to help improve access to antibiotics, these payments are limited to antibiotics used to treat Medicare patients. In addition, although Medicare increased the add-on payment amount to up to 75 percent of the estimated costs of qualifying antibiotics in excess of the regular Medicare payment, hospitals could still face costs for providing these drugs that are not covered by the Medicare payment. Furthermore, representatives from an antibiotic company and a biotechnology trade association told us the add-on payments do not directly incentivize hospital pharmacies to purchase the drug, because the add-on payment may not flow back to the pharmacy department's budget. For these reasons, it remains to be seen whether the Medicare new technology add-on payments to hospitals for inpatient antibiotics will help improve the return on investment for antibiotic developers and further stimulate the antibiotic development pipeline. Similarly, it remains to be seen how CMS's policy change that provides increased payments for hospital stays when Medicare patients have been diagnosed with certain types of antibiotic-resistant infections will affect hospitals' use of new antibiotics. In light of the limitations of existing incentives for antibiotic development, experts, federal officials, and antibiotic developers have called for additional postmarket pull incentives to reinvigorate the pipeline of antibiotics under development. For example, PACCARB issued recommendations to the Secretary of HHS in September 2017 and July 2019 for the adoption of pull incentives, calling for the development of market entry rewards and options for plausible business models. In addition, TATFAR, of which officials from BARDA, CDC, FDA, and NIH are members, reported that it is critical to develop a pull incentive strategy now to ensure that enough antibiotics are available in the future. Former FDA Commissioner Dr.
Scott Gottlieb also stated in 2018 that he was deeply concerned that without stronger pull incentives that encourage more R&D, we ll see a far less robust pipeline of products than we need to address antimicrobial resistance. Eight of the antibiotic developers we interviewed told us they think additional financial incentives are needed. For example, one developer said that sales revenues from antibiotics will never be sufficient to justify R&D investments, and another noted that financial incentives are needed during the first few years after a new antibiotic reaches the market to cover not only these costs, but also to conduct additional clinical trials to help expand the drug s possible market. Finally, several experts at our expert meeting noted that, without pull incentives, most of the small companies currently developing antibiotics are unlikely to survive, and large pharmaceutical companies will likely continue to exit the antibiotic market. Advisory groups and others have identified multiple options for how postmarket pull incentives could be designed, including market entry rewards either in the form of lump sum payments or transferable vouchers that could be sold to confer additional market exclusivity to other pharmaceutical drugs or reimbursement reform, such as licensing arrangements or add-on payments for hospital-administered antibiotics. (See fig. 4.) The four advisory groups whose papers we reviewed each recommended market entry rewards as effective pull incentive options. While Commissioner of the FDA, Dr. Scott Gottlieb proposed an antibiotics licensing arrangement, which he called a subscription model, in a 2018 speech. Views on the utility of reimbursement reform as a pull incentive strategy are mixed. For example, representatives from The Pew Charitable Trusts stated their view that, while CMS s recent changes to Medicare payment for antibiotics will likely be helpful to some degree, no reimbursement policy on its own would be able to increase antibiotic sales revenues sufficiently to transform the business model for antibiotics. An antibiotic developer we spoke to also told us that reimbursement policies would not be sufficient to support their business model because of low sales volumes for new antibiotics. The developer explained that it can take 2 or 3 years of antibiotic sales to recoup their R&D costs and finance their ongoing business operations, and that while larger pharmaceutical companies can rely on other profitable drugs to offset those costs, they could not because they did not have other drugs on the market. However, a representative from a biotechnology trade association told us that increasing reimbursement could help alleviate some of the economic challenges faced by developers of antibiotics that are already on or about to reach the market while policy makers explore longer-term pull incentive strategies. TATFAR cautioned that simply increasing reimbursement for antibiotics could potentially limit patient access, particularly for patients without health insurance including those in low-and middle-income countries and it could incentivize only antibiotics for common types of infections with a large market potential, rather than for rare, yet dangerous, types of pathogens. Advisory groups and others have evaluated potential market entry reward models, taking into consideration factors such as format, value, funding sources, and eligibility criteria. 
Some have proposed that receipt of a market entry reward should be delinked, fully or partially, from sales revenues; that is, the developer would have to forgo some or all sales revenue as a condition of receiving the reward. Proponents of delinkage believe that separating revenues from antibiotic sales volumes would discourage aggressive sales practices that could lead to overuse. An expert who attended our meeting later told us that policies to incentivize use of new antibiotics must be balanced with policies to monitor prescribing of new drugs to prevent inappropriate use. Generally, advisory groups stipulate that, to maximize the public health benefit, only antibiotics that treat what are deemed to be high-priority bacteria should be eligible for a reward. Specific recommendations and conclusions included the following:

TATFAR concluded in 2017 that a partially delinked market entry reward of approximately $500 million would be the least disruptive option but noted that additional assessment would be necessary to select the most appropriate model and determine governance and other design elements.

PACCARB expressed support for a delinked model, in which a company accepting a market entry reward would be required to forgo marketing activities and profits based on sales volume. In addition, it suggested the establishment of an antibiotic incentive fund, supported by an antibiotic usage fee or by the sale or auction of transferable exclusivity vouchers, as a plausible option for financing pull incentives.

The Duke University Margolis Center for Health Policy recommended in 2017 a delinked, public-private market entry reward model. This model comprised publicly funded market entry rewards for qualifying antibiotics for the first 5 or 6 years, followed by privately funded value-based contracts between antibiotic developers and health care payors, in which the payor could agree, for example, to pay a predetermined amount for full access to the antibiotics for a given population. The Duke-Margolis Center proposal did not specify a funding source, but it noted multiple options for consideration, including general government funds, antibiotic use taxes, and the sale of transferable exclusivity vouchers.

The European DRIVE-AB project recommended in 2018 an internationally funded, partially delinked market entry reward valued at approximately $1 billion per antibiotic, paid over the course of 5 or more years. Recipients of a market entry reward would be allowed to sell their drug on the private market, but they would agree to certain marketing restrictions to discourage inappropriate use.

HHS may need to request authority and appropriations to create and implement certain types of market entry rewards. For example, HHS does not currently have authority to offer transferable exclusivity vouchers to antibiotic developers, since doing so would require a change in statute. Advisory groups also noted that the various pull incentive approaches would require additional public or private expenditures, and they offered possible sources of funding. For example, in addition to general fund revenues, PACCARB suggested that pull incentives could be funded through antibiotic usage fees or the auctioning of transferable exclusivity vouchers, or by allowing developers of new antibiotics to earn a transferable exclusivity voucher. The Duke-Margolis Center's suggestions included funding market entry rewards through a yearly per-member fee for all health insurance plans.
Transferable exclusivity vouchers may not require an independent funding source, because the value of the reward is based on the sale of the voucher to another drug developer. However, vouchers would still increase public and private health care expenditures, because expenditures would likely increase for the drugs for which the extra period of exclusivity was purchased, due to the delayed entry of lower-priced generics. Finally, reimbursement reform could increase health care expenditures for health care payors, including Medicare and private health insurance carriers.

Although PACCARB, TATFAR, and other experts have called for additional postmarket pull incentives to increase the antibiotic pipeline, as of January 2020, HHS had not developed a strategy for creating these incentives. HHS officials told us that the department created an interagency workgroup within HHS in spring 2019 to identify possible pull incentive options, among other things. The recently convened HHS interagency workgroup is a step in the right direction toward exploring options for new antibiotic development incentives. Through this workgroup, HHS has an opportunity to determine which types of postmarket incentives it believes would most effectively incentivize the development of new treatments for antibiotic-resistant infections. However, it is unclear whether the HHS interagency workgroup's efforts will include consideration of such incentives because, according to HHS officials in January 2020, the interagency workgroup was still considering possible recommendations for HHS leadership and had not produced any specific documents to share with us.

The Government Performance and Results Act of 1993 (GPRA) and the GPRA Modernization Act of 2010, which significantly enhanced agencies' responsibilities under GPRA, include principles for federal agencies to consider related to developing strategies for achieving results, among other things. We have previously reported that these principles can serve as leading practices for planning at lower levels within agencies, such as individual programs or initiatives. Our past work has shown that strategic frameworks can serve as a basis for guiding policy makers, including congressional decision makers and agency officials, when making decisions about resources, programs, and activities, particularly in relation to issues that are national in scope, such as antibiotic development. Developing a strategic framework that outlines new postmarket pull incentives and their key design elements (such as monetary value, eligibility criteria, and guidelines to prevent overuse) would be a first step toward identifying potential authorities and resources that may be needed to create the incentives, and toward determining agency roles for implementation and oversight of the incentives. Until such incentives are developed, more drug companies may exit the antibiotic development sector, and the pipeline of new treatments for antibiotic-resistant infections may continue to decrease. Furthermore, the current significant federal investment in push incentives to support antibiotic R&D will remain a high-risk enterprise if companies receiving large R&D grants are unable to sustain their businesses once their treatments reach the market.

<5. Federal Agencies Have Undertaken Several Efforts to Promote the Appropriate Use of Antibiotics, but Key Challenges Remain>

Federal agencies have undertaken several efforts to promote the appropriate use of antibiotics through stewardship programs and activities.
However, four key challenges remain that have limited this progress.

<5.1. Federal Agencies Have Undertaken Several Efforts to Promote the Appropriate Use of Antibiotics through Stewardship Programs>

To promote the appropriate use of antibiotics across health care settings through antibiotic stewardship programs and activities, federal agencies have undertaken several efforts that aim to reduce inappropriate antibiotic use, reduce health care costs, improve patient outcomes, and combat antibiotic resistance. Selected examples of these efforts are discussed below. (For more detailed information on agencies' efforts to promote the appropriate use of antibiotics, see app. IV.)

<5.1.1. Published Requirements for Hospitals, Long-Term Care, and DOD and VA Facilities to Implement Antibiotic Stewardship Programs>

Federal agencies require certain types of health care facilities to implement antibiotic stewardship programs, as follows:

CMS. In September 2019, CMS finalized new health and safety requirements for hospitals and critical access hospitals to implement antibiotic stewardship programs by March 30, 2020, as a condition of their participation in the Medicare and Medicaid programs. Under these requirements, hospitals and critical access hospitals are required, among other things, to implement these programs facility-wide (which includes emergency departments) and to adhere to nationally recognized antibiotic prescribing guidelines. Nearly 3 years prior, CMS published similar requirements for nursing homes and skilled nursing facilities (collectively known as long-term care facilities) to establish antibiotic stewardship programs by December 4, 2017. Experts, including those at our meeting and the PACCARB, credit these requirements with being a powerful lever for promoting the appropriate use of antibiotics; Medicare comprises a significant portion of the nation's health care expenditures ($741 billion in 2018, covering 59.9 million beneficiaries).

DOD. DOD published a policy, effective October 2017, requiring the establishment of antibiotic stewardship programs within its military medical treatment facilities and, one year later, issued guidance for implementation. Among other things, the policy specified that these facilities' antibiotic stewardship programs include components such as (1) leadership commitment by each facility; (2) accountability; (3) pharmacy expertise, including antibiotic prescribing and use evaluation; (4) implementation of action for change that would demonstrate commitment to the program; and (5) training for clinicians regarding antibiotic resistance and prescribing practices. DOD officials told us that all of these facilities (both inpatient and outpatient) were in different stages of implementing the antibiotic stewardship policy.

VA. In January 2019, VA updated its 2014 policy directive for the implementation and maintenance of antibiotic stewardship programs in its health care facilities, which provide both inpatient and outpatient services to veterans. This policy directive includes requirements for its facilities to develop a written policy, conduct an annual evaluation of stewardship activities, ensure that adequate staff and resources are in place, and identify medical and pharmacy personnel as stewardship champions. According to department officials, VA has successfully implemented antibiotic stewardship programs in all of its health care facilities.

<5.1.2.
Developed Incentives for Clinicians to Implement Antibiotic Stewardship Activities>

CMS has developed incentives for eligible clinicians in any type of health care facility to improve antibiotic use and stewardship, as part of the agency's broader efforts to improve care for Medicare patients. Through the Merit-based Incentive Payment System (MIPS), launched in 2017, CMS offers hundreds of quality measures and nearly 100 improvement activities on a wide range of topics (including the appropriate use of antibiotics) on which eligible clinicians can choose to report their performance to the agency. CMS then adjusts payments upward for clinicians who report data and achieve a performance-based final score above a certain threshold, and it penalizes clinicians who do not meet that threshold with lower payments. (For more information on MIPS, see GAO, Health Care Quality: HHS Should Set Priorities and Comprehensively Plan Its Efforts to Better Align Health Quality Measures, GAO-17-5 (Washington, D.C.: Oct. 13, 2016), and Medicare: Small and Rural Practices' Experiences in Previous Programs and Expected Performance in the Merit-based Incentive Payment System, GAO-18-428 (Washington, D.C.: May 31, 2018).)

<5.1.3. Published Guidance on Implementing Antibiotic Stewardship Programs>

Federal agencies have published guidance for health care facilities on how to implement antibiotic stewardship, as follows:

AHRQ. Through a 5-year nationwide project, the AHRQ Safety Program for Improving Antibiotic Use has provided technical assistance and CDC's guidance to hospitals, long-term care settings, and physicians' offices to promote implementation of antibiotic stewardship activities and help clinicians select optimal antibiotic treatment regimens. In December 2018, AHRQ completed implementation of this guidance in more than 400 hospitals, which included six DOD facilities and 79 critical access hospitals, according to AHRQ officials.

CDC. Since 2014, CDC has published a series of guidance documents called the Core Elements of Antibiotic Stewardship (Core Elements) to promote the appropriate use of antibiotics in health care. The Core Elements are tailored to hospitals, nursing homes, outpatient settings, small and critical access hospitals, and low- and middle-income countries with limited resources. Common elements in these guidance documents include (1) leadership commitment, (2) implementation of policies and interventions to improve antibiotic use, (3) tracking and reporting antibiotic use, and (4) education for providers on appropriate antibiotic use.

<5.1.4. Expanded the Collection of Antibiotic Use Data>

CDC has expanded its collection of antibiotic use data from hospitals and other sources. In particular, CDC has focused its efforts to expand antibiotic use data collection from hospitals, where an estimated one in two patients receives an antibiotic for at least one day during an average hospital stay. CDC launched its AU Option in 2011 as a voluntary, electronic reporting tool added on to the pre-existing NHSN. The AU Option allows the nation's 6,849 hospitals that are already reporting to the NHSN to submit their antibiotic use data in a standardized format. CDC then aggregates such data to calculate national benchmarks and allows hospitals to compare their actual antibiotic use against those benchmarks. In addition, CDC has periodically conducted prevalence surveys through the EIP to gather data on health care-associated infections and antibiotic use in about 200 hospitals and 161 nursing homes in 10 states.
With regard to outpatient settings, CDC has acquired, through a proprietary source, 8 years of pharmacy data on antibiotic prescriptions since 2011, which the agency is using to better characterize patterns in outpatient prescribing and to develop targeted interventions for high-prescribing areas.

<5.1.5. Developed Antibiotic Stewardship Training for Various Health Care Settings>

Federal agencies have developed training on antibiotic stewardship, as follows:

CDC. In 2018, CDC launched a free, online training course for various types of clinicians (including physicians, dentists, pharmacists, physician assistants, and nurses) to inform them about proper antibiotic prescribing and strategies for communicating with patients. Clinicians can receive credit for partial completion (at least 50 percent) or full completion of this training as improvement activities under MIPS in 2019.

CMS. CMS has provided training, technical assistance, and other learning opportunities to more than 4,000 hospitals, 2,400 nursing homes, and 7,600 outpatient settings on best practices for antibiotic stewardship and guidance on C. difficile prevention. In addition, CMS and CDC have developed and launched free, online training to help nursing homes implement antibiotic stewardship and prevent and manage C. difficile infections.

DOD and VA. These departments have also offered antibiotic stewardship training to their health care facilities through webinars, workshops, or briefings.

<5.1.6. Funded Research>

Federal agencies have funded research on antibiotic stewardship, as follows:

AHRQ. Since 2015, AHRQ has increased its support for research to develop improved methods to combat antibiotic resistance and promote antibiotic stewardship, including through grants for research that will total more than $57 million, according to AHRQ officials. This research includes studies on the role of diagnostic tools in improving antibiotic use and reducing antibiotic resistance. AHRQ has also published numerous research studies on antibiotic or antimicrobial stewardship that the agency funded or authored.

CDC. CDC supports research to identify, develop, and implement practices to stop the spread of resistance and to promote appropriate use of antibiotics in health care. CDC also supports research to fill gaps in knowledge related to aspects of antibiotic use and resistance that have public health impact. According to agency officials, CDC has provided approximately $110 million since 2016 to support this research through cooperative agreements and contracts.

<5.1.7. Continued National Public Awareness Campaign>

In 2017, CDC revised a national campaign to promote public awareness about appropriate antibiotic use. The campaign, called Be Antibiotics Aware: Smart Use, Best Care, is aimed at both health care providers and the general public and refines the message from CDC's earlier campaign (Get Smart: Know When Antibiotics Work).

<5.1.8. Collaborated Internationally>

HHS's Office of Global Affairs has collaborated with other countries, including those participating in the TATFAR program, to promote the appropriate use of antibiotics internationally. In addition, CDC and the Office of Global Affairs launched the Antimicrobial Resistance Challenge at the United Nations General Assembly in September 2018 to catalyze global action against antibiotic resistance.
A year later, CDC announced that the challenge had resulted in nearly 350 formal commitments from government health officials, pharmaceutical and health insurance companies, and others from 33 countries to further progress against antimicrobial resistance, such as by improving appropriate antibiotic use.

<5.2. Four Key Challenges Have Limited Federal Efforts to Promote the Appropriate Use of Antibiotics>

We identified four key challenges that have limited progress in federal efforts to promote the appropriate use of antibiotics, based on our analysis of documents, interviews with agency officials and experts, and other information. First, federal requirements for antibiotic stewardship programs apply only to certain types of health care facilities, and federal incentives for clinicians to adopt antibiotic stewardship activities are optional, limiting implementation of antibiotic stewardship across the health care spectrum. Second, CDC faces challenges in collecting complete antibiotic use data, limiting the agency's ability to monitor and improve antibiotic use. Third, the CARB Task Force has not identified and reported on agencies' plans to address the challenges related to expanding antibiotic stewardship programs and antibiotic use data collection across health care settings, so these plans are not publicly known. Fourth, antibiotic stewardship training for health care providers may have limited success in improving antibiotic prescribing behavior, and federal agencies indicate that it is challenging to evaluate the effectiveness of such training.

<5.2.1. Federal Requirements and Incentives Are Limited>

Federal requirements for antibiotic stewardship programs are limited to certain types of health care facilities, and federal incentives for antibiotic stewardship activities are optional and limited to eligible Medicare clinicians, such as physicians.

Federal requirements for antibiotic stewardship programs are limited to certain types of health care facilities. As previously noted, federal requirements for antibiotic stewardship programs are currently limited to hospitals and critical access hospitals, long-term care facilities such as nursing homes, and DOD and VA health care facilities. However, CMS has not yet developed requirements for ambulatory surgery centers or dialysis centers to implement antibiotic stewardship programs, which the National Action Plan called for implementing by March 2018. CMS officials told us that the agency would develop those requirements once the rule for hospitals and critical access hospitals, which was delayed, was finalized. In addition, CMS's health and safety requirements do not extend to other types of outpatient settings (such as physicians' offices, retail clinics, and urgent care centers) where inappropriate antibiotic use has been found to be high. In the absence of regulatory levers, CDC and AHRQ encourage those types of facilities to establish antibiotic stewardship programs on a voluntary basis. Experts, including those at our meeting, indicate that expansion of antibiotic stewardship across the health care spectrum is likely to remain limited without additional federal requirements or other meaningful incentives, thus hindering the nation from fully achieving the benefits of appropriate antibiotic use. Such benefits include better patient outcomes, lower health care costs, and slower growth of antibiotic resistance.

CMS incentives for clinicians to improve antibiotic use are optional, and implementation has been limited.
The MIPS program's effect on incentivizing appropriate use of antibiotics is limited, in part, because the incentives are available only to clinicians who meet MIPS eligibility criteria and because eligible clinicians can choose not to report data to CMS. In addition, participating clinicians have a wide range of quality measures and improvement activities, beyond those related to antibiotics, from which they can choose to report data to CMS to meet program requirements; thus, the likelihood that clinicians will choose to report on antibiotics-related measures or activities may remain low. For example, in 2017, MIPS-eligible clinicians were generally required to select and submit data to CMS on six out of 271 available quality measures; we identified nine of those measures as being related to antibiotics. MIPS-eligible clinicians were also generally required to select and submit data that year for up to four out of 93 available improvement activities; we identified one such activity as being related to antibiotics.

Our analysis of CMS data on MIPS participation in 2017, the program's first performance year and the most recent year for which data were available, indicates that implementation of the antibiotics-related quality measures and improvement activities was limited. According to a CMS report, a total of 1,057,824 clinicians were eligible for MIPS in 2017, of which 1,006,319 clinicians, or 95 percent, reported data. Based on our analysis of data contained in the CMS report's appendix, the number of 2017 MIPS-participating clinicians who reported to CMS on the nine antibiotics-related quality measures ranged from 844 clinicians to 33,631 clinicians; the measure on appropriate treatment for children with an upper respiratory infection was the most reported antibiotics-related measure. By contrast, the most frequently reported quality measures overall in 2017 were controlling high blood pressure (510,723 clinicians), preventive care and screening for tobacco use (492,357), and breast cancer screening (473,819). CMS's data also show that for the 2017 MIPS improvement activities, 47,645 of the 1,006,319 participating clinicians reported on the one improvement activity related to antibiotics that year: implementation of an antibiotic stewardship program. Specifically, this activity referred to implementation of an antibiotic stewardship program that measured the appropriate use of antibiotics for several different conditions (upper respiratory infections in children, pharyngitis, and bronchitis in adults), according to clinical guidelines for diagnostics and therapeutics.
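To put these counts in perspective, the figures cited above can be expressed as shares of the clinicians who reported MIPS data in 2017. The short sketch below uses only the numbers cited in this report; it is illustrative arithmetic, not a reproduction of CMS's analysis.

```python
# Express the 2017 MIPS reporting counts cited above as shares of the
# 1,006,319 clinicians who reported data that year. Figures are those cited
# in this report; this is illustrative arithmetic only.

participating_clinicians = 1_006_319

reported_counts = {
    "least-reported antibiotics-related quality measure": 844,
    "most-reported antibiotics-related quality measure (pediatric URI)": 33_631,
    "antibiotic stewardship improvement activity": 47_645,
    "most-reported measure overall (controlling high blood pressure)": 510_723,
}

for label, count in reported_counts.items():
    share = 100 * count / participating_clinicians
    print(f"{label}: {count:,} clinicians ({share:.1f} percent)")
```

By this arithmetic, even the most frequently reported antibiotics-related items were selected by fewer than 5 percent of participating clinicians, compared with roughly half of participants for the most commonly reported measure overall.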
<5.2.2. CDC Faces Challenges in Collecting Complete Antibiotic Use Data, Limiting the Agency's Ability to Monitor and Improve Appropriate Use>

CDC's ability to monitor and improve appropriate antibiotic use is limited by challenges it faces in collecting complete antibiotic use data across health care settings. According to CDC, experts we interviewed, and documents we reviewed, more data are needed to identify the extent of antibiotic use, including inappropriate use. In turn, CDC and experts say that more antibiotic use data would enable health care providers, federal agencies, and others to identify and target areas for improvement, track results over time, and adjust antibiotic stewardship activities as needed. We have also previously reported that monitoring antibiotic use over time in both inpatient and outpatient settings is important for understanding patterns in antibiotic resistance and for targeting stewardship activities. In addition, WHO notes that data on global antibiotic use are essential for obtaining a comprehensive picture of antibiotic resistance and for identifying areas where actions are needed.

Despite progress in collecting antibiotic use data (as previously discussed), CDC faces several challenges in its efforts to collect complete antibiotic use data. For example, health care providers across various inpatient and outpatient settings do not record such data in one centralized, electronic database. In addition, CDC officials told us that there are no uniform requirements at the federal level (with the exception of DOD and VA hospitals) for providers to report their antibiotic use data to a centralized database such as the NHSN AU Option, and, according to CDC officials and experts we interviewed, data collection can be costly for CDC and health care providers. Because of these and other challenges, CDC relies on data voluntarily reported by hospitals through the AU Option, and the agency collects its own data or purchases proprietary pharmacy data to estimate antibiotic use and, to some degree, to assess appropriateness of use across health care settings. However, these data are incomplete owing to several limitations, as described by type of setting below.

Hospitals. Our analysis of CDC data shows that although the number of hospitals participating in the AU Option has gradually risen since its launch in 2011, participation remains limited, with 1,561, or 23 percent, of the 6,849 eligible hospitals reporting at least one month of antibiotic use data as of January 1, 2020. (See fig. 5 for a map showing the percentage of U.S. hospitals reporting antibiotic use data to the AU Option, by state, plus the District of Columbia and Puerto Rico, as of August 2019.) The arithmetic behind these participation rates is sketched following the discussion of settings below. While CDC officials told us they considered this level of participation to be an accomplishment, given that participation is voluntary, the National Action Plan set 95 percent participation in the AU Option by 2020 as a significant outcome to support the plan's goal to strengthen national surveillance efforts to combat resistance. Experts, including those at our meeting, cite multiple challenges that CDC faces in collecting hospitals' antibiotic use data through the AU Option. For example, The Pew Charitable Trusts has stated that current, voluntary data are limited and that mandatory reporting would provide the data needed to establish a more accurate baseline of antibiotic use, identify stewardship interventions that would be most effective, and measure progress toward reducing inappropriate prescribing. An expert who attended our meeting later suggested that CMS could implement a pay-for-reporting program to incentivize hospitals to report data to the AU Option, and that the program could transition to a pay-for-performance program over time. In addition, experts we interviewed told us that a participating hospital must be willing to spend as much as tens of thousands of dollars for a vendor to customize software for its electronic health record system to use the AU Option, in addition to investing time training staff on how to use it.
CDC officials also told us that the agency lacks the authority to require hospitals to report their antibiotic use data, and that there is currently no federal funding available to assist hospitals with the investment needed to participate in the AU Option. Furthermore, hospitals' voluntary participation in the AU Option may remain limited until CDC's benchmark measures are adequately risk-adjusted for different locations and patient populations. For example, one expert we interviewed said that because the AU Option currently aggregates data on the volume of antibiotics used without adequate risk adjustment, a hospital with a patient population that might warrant higher use of antibiotics may be reluctant to report its antibiotic use data to avoid looking like an unnecessarily high prescriber. Regarding another data source for antibiotic use in hospitals, CDC's EIP provides more granular data at the patient level that allows CDC to assess the appropriateness of antibiotic use. However, CDC officials told us that the agency has been unable to repeat its hospital prevalence survey since 2015 due to insufficient resources (the next survey is expected in 2020) and that the survey encompasses a limited number of hospitals, patients, and states.

Nursing homes. According to CDC, nursing homes may be the most challenging health care setting from which the agency collects antibiotic use data; CDC officials stated that this is because electronic health record systems, from which data could be easily accessed, are less common in nursing homes. In addition, CDC officials stated that the agency's collection of antibiotic use data through the EIP nursing home prevalence survey has been limited in scope and frequency due to insufficient resources.

Outpatient settings. Collecting data for outpatient settings, such as retail pharmacies, is also challenging. For example, CDC officials stated that one proprietary source from which CDC purchases data reflects the volume of pharmacy antibiotic prescriptions, but the data do not contain diagnostic information, preventing the agency from evaluating the appropriateness of those prescriptions. Other CDC or proprietary data sources from which the agency collects or purchases antibiotic use data are limited by the frequency with which those sources release such data, the age range of patients included in the data (i.e., whether they are over or under 65 years), or other characteristics. As previously noted, approximately 85 to 95 percent of the nation's antibiotic use, by volume, occurred in outpatient settings from 2010 through 2015.
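As a simple illustration of the participation-rate arithmetic noted in the discussion of hospitals above, the sketch below reproduces the national AU Option rate from the counts cited in this report and shows how the same calculation extends to the state-level rates mapped in figure 5. The state-level counts shown are hypothetical placeholders, not CDC data.

```python
# Sketch of the AU Option participation-rate arithmetic. The national counts
# are those cited in this report; the state-level counts are hypothetical
# placeholders used only to show how state rates like those in figure 5
# are derived.

eligible_hospitals = 6_849
reporting_hospitals = 1_561   # at least one month of data as of Jan. 1, 2020

national_rate = 100 * reporting_hospitals / eligible_hospitals
print(f"National AU Option participation: {national_rate:.0f} percent")  # about 23 percent

# Hypothetical (eligible, reporting) counts by state, for illustration only.
state_counts = {
    "State A": (150, 60),
    "State B": (90, 12),
    "State C": (40, 3),
}

for state, (eligible, reporting) in state_counts.items():
    rate = 100 * reporting / eligible
    print(f"{state}: {rate:.0f} percent of eligible hospitals reporting")
```

The same arithmetic makes the gap relative to the National Action Plan's 95 percent participation target straightforward to quantify at either the national or the state level.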
<5.2.3. The CARB Task Force Has Not Identified Plans to Address Challenges Related to Expanding Stewardship Programs and Antibiotic Use Data Collection>

The National Action Plan calls for strengthening antibiotic stewardship and for the timely reporting of antibiotic use data across health care settings. Executive Order No. 13676, as previously noted, directs the CARB Task Force to provide annual updates to the President on federal government actions to combat antibiotic resistance, including progress made in implementing the National Action Plan and plans for addressing any barriers preventing its full implementation. These annual updates are to include specific goals, milestones, and metrics for proposed actions and recommendations, taking into consideration federal resources. However, in its progress reports covering the first four years of the National Action Plan's implementation (which were provided to the President and the public), the CARB Task Force has not identified plans to address barriers that agencies face in expanding antibiotic stewardship programs across health care settings. For example, the task force did not include in the progress reports CMS's plans to address barriers to expanding its requirements for antibiotic stewardship programs in hospitals, which were delayed, or in certain other types of health care facilities. In addition, in its progress reports to date, the CARB Task Force has not identified plans to address the barriers to expanding the collection of antibiotic use data across health care settings. For example, the task force did not include in the progress reports CDC's plans to address barriers to achieving the significant outcome of 95 percent of eligible hospitals participating in the AU Option by 2020, although participation was only 23 percent as of January 1, 2020.

The CARB Task Force coordinators said, in response to our inquiries during this review, that the task force intends to identify agencies' plans for addressing barriers in the Year 5 progress report to be published in fall 2020. However, the coordinators also stated that the progress reports to date have not identified plans to address barriers largely because the task force focused on reporting the agencies' accomplishments in implementing the National Action Plan. Until the CARB Task Force identifies and reports on agencies' plans to address barriers related to the expansion of antibiotic stewardship programs and the collection of antibiotic use data across health care settings, to the extent feasible, the federal government will not have reasonable assurance that it is fully implementing the National Action Plan and addressing antibiotic resistance.

<5.2.4. Antibiotic Stewardship Training May Have Limited Success in Improving Prescribing Behavior>

While training is recognized as one component of an antibiotic stewardship program, such training may have limited success in improving antibiotic prescribing behavior, and federal agencies indicate that it is challenging to evaluate the training's effectiveness. CDC officials and experts say that stewardship training could help reduce inappropriate antibiotic use, but doing so is challenging because antibiotic prescribing behavior is driven by multiple factors and can be difficult to change. For example, a PACCARB report stated that prescribers often feel pressure to prescribe antibiotics even when antibiotics may not be warranted, because of their perception that a patient is demanding such a prescription or because of a patient's actual demand. In addition, CDC notes that antibiotics are frequently prescribed for respiratory conditions most commonly caused by viruses, such as the common cold, against which antibiotics are ineffective. Other factors that drive antibiotic prescribing behavior, as cited by experts, include habit, which may stem from what physicians and other prescribers learn during their residencies or observe in the workplace; the time it takes to explain to a patient why an antibiotic is inappropriate; and decision fatigue caused by tiredness or hunger. (See table 4 for examples of factors that drive or deter antibiotic prescribing behavior.)
Nevertheless, federal agencies plan to evaluate the effectiveness of their antibiotic stewardship training programs to some extent, although the National Action Plan does not require the agencies to do so. For example, CDC officials told us that their online training course for various types of clinicians allows participants to fill out an evaluation that includes questions about whether the participant will be able to apply knowledge gained from the course, which the agency will use to refine and update the course. In addition, for the antibiotic stewardship training for nursing homes that CDC and CMS jointly developed, CDC officials told us that participants will be asked 6 months after the training whether they implemented stewardship practices and whether there have been reductions in antibiotic use as a result of the training. However, CMS, DOD, and VA officials noted that it is difficult to isolate and measure the effectiveness of antibiotic stewardship training specifically on antibiotic prescribing behavior compared to other, concurrent federal efforts, such as requirements and guidance to promote appropriate antibiotic use. For example, DOD officials told us that their department has looked at antibiotic use data from DOD health care facilities as a surrogate to evaluate whether antibiotic stewardship in general has been effective but noted that this is an imperfect measure, since there are many factors that affect antibiotic prescribing behavior, and training is only one of several interventions aimed at reducing inappropriate antibiotic use.

<6. Conclusions>

Antibiotic resistance has been characterized as one of the greatest public health threats the world faces. A concerted effort involving coordination of multiple stakeholders and countries and across health fields is critical to helping ensure that bacterial infections remain treatable. Steps by federal agencies to expand surveillance, facilitate the development and use of new diagnostic tests, fund R&D for the development of new treatments, and issue requirements and guidance for antibiotic stewardship programs are important efforts toward addressing the problem of antibiotic resistance and implementing the National Action Plan.

Significant challenges to conducting surveillance remain. For example, CDC has not determined the participation rates or appropriate distribution of participating hospitals needed by the voluntary antibiotic-resistance reporting option to achieve CDC's goal of conducting regional and national assessments of resistance. By taking steps to determine the participation rates and distribution needed for this option, CDC would have more reasonable assurance that it can achieve its goal. CDC classified gonorrhea as one of the most urgent resistant threats in the nation, but it collects limited specimens (representing an estimated 1 to 2 percent of the reported cases in the United States) for GISP, its primary surveillance system for resistant gonorrhea. However, CDC has not fully evaluated the representativeness of the trends identified by this surveillance system. By evaluating GISP to ensure that it includes measures of its representativeness, such as comparing the trends in the sample population with those in the overall U.S. population, using specially designed studies if needed, CDC would have better assurance that the trends detected in GISP accurately reflect the characteristics of the health-related outcome the system is designed to monitor.
Further, neither the 2013 nor the 2019 Threats Report provided quantitative measures of uncertainty for CDC's estimates of morbidity and mortality resulting from antibiotic-resistant infections. Providing such measures, such as standard errors or confidence intervals, as appropriate, in its Threats Reports would help CDC and others compare information within and across reporting efforts and draw appropriate conclusions about the characteristics of antibiotic resistance in the United States, including limitations associated with reported findings and conclusions. Finally, there has been a 6-year interval between CDC's reports on antibiotic resistance threats. By developing a plan for more frequent dissemination of consolidated reporting on priority pathogens at regular intervals, CDC would have more timely trend data and other information necessary for users of the data, including policymakers, to prioritize, plan, implement, and evaluate public health actions to address antibiotic resistance.

HHS has funded some studies to assess whether certain tests for antibiotic resistance lead to improved clinical outcomes, including more effective treatment for patients or more judicious use of antibiotics. However, HHS agencies that are in a position to conduct or fund such studies have not identified leadership, roles, and responsibilities to help further such efforts. By taking steps to identify leadership, roles, and responsibilities, agencies could more effectively address the need for clinical outcomes studies, potentially increasing test use, improving patient care, and enhancing stewardship efforts. In addition, for its part, FDA has not regularly monitored tests for antibiotic resistance to assess breakpoint updates or evaluated any effects of using tests for antibiotic resistance with out-of-date breakpoints. By regularly monitoring and evaluating FDA-authorized tests that rely on breakpoints, FDA would be able to determine whether test manufacturers are updating breakpoints as needed and help ensure that patient care and infection control efforts are effective.

While government push incentives to support antibiotic R&D have been helpful, experts and antibiotic developers have indicated that push incentives alone are not sufficient to sustain antibiotic development. PACCARB, TATFAR, and other experts have called for additional postmarket pull incentives to increase the antibiotic pipeline, but HHS does not have a strategy for creating such incentives. Developing a strategic framework that outlines key design elements of new incentives would be a first step toward identifying potential authorities and resources that may be needed and determining agency roles for implementation and oversight of the incentives. Until such incentives are developed, more drug companies may exit the antibiotic development sector, and the pipeline of new treatments may continue to decrease.

Finally, in its progress reports covering the first four years of the National Action Plan's implementation, the CARB Task Force did not identify plans, as required by the Executive Order, to address barriers that agencies face in fully implementing the National Action Plan, such as expanding (1) a CDC program designed to strengthen the U.S. response to resistant gonorrhea; (2) antibiotic stewardship programs across health care settings; and (3) antibiotic use data collection, to the extent feasible.
Without identifying plans to address these and other challenges, the federal government cannot ensure that the country is prepared to overcome the urgent health consequences of antibiotic resistance. Until the CARB Task Force, which is coordinated by HHS officials, identifies and reports on agencies' plans to address barriers preventing full implementation of the National Action Plan, the federal government will not have reasonable assurance that it is fully implementing the National Action Plan and addressing antibiotic resistance.

<7. Recommendations for Executive Action>

We are making a total of eight recommendations: four to CDC, three to HHS, and one to FDA. Specifically:

The Director of CDC should take steps to determine participation rates and distribution needed in the AR Option of the National Healthcare Safety Network for conducting regional and national assessments of antibiotic resistance of public health importance. (Recommendation 1)

The Director of CDC should ensure that CDC's evaluation of its surveillance system for antibiotic-resistant gonorrhea includes measures of its representativeness, such as comparison of the trends in the sample population with those in the overall U.S. population, using specially designed studies if needed. (Recommendation 2)

The Director of CDC should provide information on uncertainties for antibiotic resistance estimates in its consolidated Threats Reports, including standard errors or confidence intervals, as appropriate. (Recommendation 3)

The Director of CDC should develop a plan for timely, consolidated reports of antibiotic resistance in priority pathogens at regular intervals. (Recommendation 4)

The Secretary of HHS should identify leadership and clarify roles and responsibilities among HHS agencies to assess the clinical outcomes of diagnostic testing for identifying antibiotic-resistant bacteria. (Recommendation 5)

The Commissioner of FDA should direct the Center for Devices and Radiological Health to conduct additional monitoring and evaluation of the status of FDA-authorized tests that rely on breakpoints, on a regular basis, to determine whether test manufacturers are updating breakpoints, seeking additional resources as needed. (Recommendation 6)

The Secretary of HHS should develop a strategic framework to further incentivize the development of new treatments for antibiotic-resistant infections, including through the use of postmarket financial incentives, and, if appropriate, make recommendations to Congress for necessary authority. (Recommendation 7)

The Secretary of HHS should direct the CARB Task Force to include in its annual updates to the President plans for addressing any barriers preventing full implementation of the National Action Plan and, as appropriate, make recommendations for new or modified actions. Specifically, the CARB Task Force should identify plans to address barriers, such as those related to expanding (1) a CDC program designed to strengthen the U.S. response to resistant gonorrhea; (2) antibiotic stewardship programs across health care settings; and (3) antibiotic use data collection across health care settings, to the extent feasible. (Recommendation 8)

<8. Agency Comments and Our Evaluation>

We provided a draft of this report to DOD, VA, and HHS for review and comment. DOD and VA did not provide formal comments but generally agreed with our report.
In its comments, reproduced in appendix V, HHS generally concurred with our findings and seven of our recommendations, and it did not concur with one of our recommendations, as discussed below. HHS identified several actions it intends to take to address our recommendations. DOD and HHS also provided technical comments, which we incorporated as appropriate.

In response to our first recommendation, HHS concurred, and CDC stated it is working with public health partners to promote the voluntary use of the AR Option, providing technical support to states that may be considering a state or local mandate to require AR and AU reporting, and developing pilot programs to assess AR Option data and other data sources for certain types of antibiotic resistance. While these actions are helpful, we believe taking additional steps, such as determining goals for participation rates and distribution for AR Option reporting, would give CDC more reasonable assurance that it can conduct regional and national assessments of resistance.

In response to our second recommendation, HHS concurred, and CDC stated it is taking additional steps to examine the representativeness of data collected through its primary surveillance system for resistant gonorrhea, including working to develop laboratory methods to reduce dependence on cultured isolates. CDC stated that steps to refine and improve collection of resistant gonorrhea data require additional resources. We believe that CDC requesting such resources would help ensure that such data are representative of the overall U.S. population.

HHS generally concurred with our third recommendation. CDC stated that it believes it is critical to publish the data after peer review and that it then plans to link the publications back to the online resources of the 2019 Threats Report. We believe that peer-reviewed publication is important, but it is also important for CDC to take additional steps to establish and report uncertainties for the national estimates or summary data, which would help CDC and others draw appropriate conclusions about the characteristics of antibiotic resistance in the United States.

In response to our fourth recommendation, HHS concurred, and CDC stated it has plans to update its enterprise-wide AR Threats Report every three years and that it also issues regular reports on specific groups of pathogens.

In response to our fifth recommendation, HHS concurred and stated that the CARB Task Force leadership will work with relevant HHS agencies to clarify roles and responsibilities and identify leadership, if appropriate, for supporting research on clinical outcomes related to diagnostic tests.

HHS concurred with our sixth recommendation, and FDA concurred with conducting additional monitoring and evaluation of tests relying on breakpoints when FDA identifies or recognizes new breakpoints. FDA stated that it has taken major steps to help address challenges associated with updating such tests to reflect the most current breakpoints. We believe that, in addition to these steps, monitoring and evaluation of current FDA-authorized tests that may still be using out-of-date breakpoints will enhance FDA's ability to provide assurance that patient care and infection control efforts are effective.

HHS did not concur with our seventh recommendation that HHS should develop a strategic framework to further incentivize the development of new treatments for antibiotic-resistant infections, including through the use of postmarket financial incentives.
HHS noted that, while it agrees that additional incentives are needed to address the limited pipeline for novel and innovative treatments to combat antibiotic resistance, it is still conducting analyses to understand whether postmarket incentives should be included as a component of its forthcoming strategic framework to further incentivize the development of new treatments. However, HHS did not specify when its framework would be released. We support HHS's efforts to develop such a framework, as this is a complex issue with multiple factors to consider. However, we believe our recommendation is still warranted. Antibiotic resistance is one of the greatest global public health threats, and experts, including the WHO, have warned that the pipeline of new antibiotics in development is insufficient to combat the threat. Without an adequate arsenal of treatments, we are likely to see increasing mortality caused by these deadly infections. As we reported, experts, advisory groups, federal officials, and antibiotic developers have all called for additional postmarket incentives to reinvigorate the pipeline of antibiotics under development. The current significant federal investment in push incentives to support antibiotic R&D is helpful but will ultimately be ineffective if companies receiving this investment are unable to sustain their businesses once their treatments reach the market. Therefore, we maintain that it is important that HHS not delay the development of a strategic framework that includes postmarket incentives, which is just an initial step toward the creation of these incentives. Until additional postmarket incentives are developed, more drug companies may exit the antibiotic development sector, and the pipeline of new treatments for antibiotic-resistant infections may continue to decrease.

In response to our eighth recommendation, HHS concurred and stated that beginning in 2020 and continuing annually thereafter, the CARB Task Force's progress reports will include discussion of any barriers preventing full implementation of the National Action Plan, including, as appropriate, barriers that GAO has identified. We emphasize that the CARB Task Force should also identify plans to address such barriers and, as appropriate, make recommendations for new or modified actions in future progress reports, in accordance with Executive Order No. 13676.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to appropriate congressional committees; the Secretaries of DOD, HHS, and VA; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact us at (202) 512-6888 or personst@gao.gov, or (202) 512-7114 or deniganmacauleym@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.
Appendix I: Objectives, Scope, and Methodology

This report examines: (1) the Centers for Disease Control and Prevention's (CDC) efforts to address surveillance of antibiotic resistance and any challenges to these efforts; (2) federal efforts to advance the development and use of diagnostic tests for identification and characterization of resistant bacteria and to address barriers to the development of diagnostic tests; (3) challenges to developing new treatments for antibiotic-resistant infections and federal efforts to address the challenges; and (4) federal efforts to promote the appropriate use of antibiotics and any challenges that remain. We focused our review primarily on agency actions since 2015, when the National Action Plan for Combating Antibiotic-Resistant Bacteria (National Action Plan) was published. We also focused our review on human health, as we have reported on federal efforts to address the use of antibiotics in food animals and recommended actions to improve these efforts for more than 20 years. Additionally, we focused our review on antibiotic-resistant bacteria. We generally excluded federal efforts related to infection prevention and control in human health care, on which we have previously reported.

To address all four objectives, we reviewed relevant agency reports and documents, such as CDC's report, Antibiotic Resistance Threats in the United States, 2013 (2013 Threats Report); conducted interviews with officials from federal agencies, experts, and stakeholder organizations; and reviewed relevant literature, policy papers, and GAO reports. We interviewed officials from federal agencies responsible for implementing the aspects of the National Action Plan related to our research objectives: the Department of Health and Human Services' (HHS) Office of the Assistant Secretary for Planning and Evaluation, the Biomedical Advanced Research and Development Authority (BARDA), CDC, the Centers for Medicare & Medicaid Services (CMS), the Food and Drug Administration (FDA), the National Institutes of Health (NIH), and the Office of Global Affairs, as well as the Department of Defense (DOD) and the Department of Veterans Affairs. We also interviewed experts and representatives from organizations involved in public health and epidemiology, infectious diseases and microbiology, antibiotic research and development (R&D), antibiotic stewardship, and other issues relating to antibiotic resistance. Because antibiotic resistance is a global problem, we also interviewed officials from the World Health Organization (WHO), the European Centre for Disease Prevention and Control, the European Medicines Agency, the Wellcome Trust, Public Health England, and the Surveillance and Epidemiology of Drug-Resistant Infections Consortium about various aspects of our review, and we reviewed relevant documents from these entities. We identified experts and organizations through literature and other documents we reviewed and through referrals from agency officials and other experts we interviewed. In addition, we attended several meetings and reviewed summaries of meetings held by the Presidential Advisory Council on Combating Antibiotic-Resistant Bacteria (PACCARB). Furthermore, we attended two conferences related to antibiotic resistance: the World Anti-Microbial Resistance Congress and the Gordon Research Conference on chemical and biological threat defense, the latter of which had a session devoted to antibiotics and antibiotic resistance.
For each of our objectives, we identified and reported on actions taken by federal agencies and key challenges that the agencies face in addressing antibiotic resistance. We evaluated the actions taken by federal agencies against relevant criteria, as applicable. In addition, in September 2018, we convened a meeting of experts in antibiotic resistance epidemiology, diagnostic testing, antibiotic development, and antibiotic stewardship. This meeting of experts was planned and convened with the assistance of the National Academy of Sciences to better ensure that a breadth of expertise was brought to bear in its preparation; however, all final decisions regarding meeting substance and expert participation are the responsibility of GAO. Any conclusions and recommendations in GAO reports are solely those of GAO. The Board on Population Health and Public Health Practice within the National Academy of Sciences solicited expert nominations from academia, public health laboratories, industry, and other organizations working on topics relating to antibiotic resistance. From their list of 51 nominees, and additional nominees we independently identified, we convened a meeting of 18 experts selected for their knowledge and expertise related to antibiotic resistance epidemiology, diagnostic testing, antibiotic development, and antibiotic stewardship. Eleven of the 18 experts who participated in our meeting also reviewed and provided comments on a draft of our report. We refer to such experts in this report as experts at our meeting; appendix II contains a list of the expert participants.

To examine CDC's efforts to address surveillance for antibiotic resistance and any challenges to these efforts, we reviewed documentation and conducted interviews with agency officials and other key stakeholders on each of the surveillance systems across CDC that track antibiotic resistance, and we reviewed CDC's 2013 Threats Report and CDC's Antibiotic Resistance Threats in the United States, 2019 data. We further focused our review on the 17 priority disease-causing bacteria listed in CDC's 2013 Threats Report. The CDC surveillance systems included the Antibiotic Resistance Laboratory Network; the Emerging Infections Program (EIP); the Gonococcal Isolate Surveillance Program (GISP); the National Antimicrobial Resistance Monitoring System (NARMS); the National Healthcare Safety Network (NHSN); the National Notifiable Diseases Surveillance System; and the National Tuberculosis Surveillance System. For NHSN, we also assessed health care facility participation data by state and territory. We assessed the reliability of these data by reviewing them for any outliers or anomalies and by inquiring with agency officials about their source and any known reliability issues. We determined that these data were sufficiently reliable for assessing facility participation rates by U.S. state and territory. Stakeholder organizations we interviewed represented state and territorial epidemiologists and other public health officials (the Council of State and Territorial Epidemiologists and the Association of State and Territorial Health Officials) and an international consortium to address challenges in surveillance of antibiotic resistance (the Surveillance and Epidemiology of Drug-resistant Infections Consortium). We also reviewed reports on antibiotic resistance surveillance challenges from the Public Health Informatics Task Force and the Antibiotic Resistance Surveillance Task Force.
We also reviewed documents from WHO s global surveillance system and interviewed WHO and CDC officials to identify challenges that limit CDC s ability to assess threats from abroad. We evaluated challenges and steps CDC has taken against CDC s Updated Guidelines for Evaluating Public Health Surveillance Systems; Standards for Internal Control in the Federal Government; prior GAO work; the Government Performance and Results Act of 1993 (GPRA) and the GPRA Modernization Act of 2010; the Office of Management and Budget Circular No. A-11 and Standards and Guidelines for Statistical Surveys; relevant National Action Plan objectives, aims, and milestones; and Executive Order No. 13676, September 2014. To examine federal efforts to advance the development and use of diagnostic tests, we also interviewed representatives from a nongeneralizable selection of six diagnostic test manufacturers to identify challenges they face in developing tests for antibiotic resistance and challenges in increasing user adoption of their tests. We further focused our review on the 17 priority disease-causing bacteria listed in CDC s 2013 Threats Report. The six manufacturers we interviewed were Accelerate Diagnostics, Beckman Coulter, BioFire and its parent company, BioMerieux, Bruker, Cepheid, and Roche Diagnostics. We identified these manufacturers by compiling a list based on previous work we conducted, interviews with select experts, and internet search. We selected six manufacturers that were identified by more than one source while encompassing different types of tests (culture and genotypic). We limited our scope to FDA-authorized tests that is, tests that have been reviewed and cleared by FDA for marketing in the United States that can identify resistance in at least one type of bacteria categorized as priority bacteria in CDC s 2013 Threats Report. Some of these tests are called antibiotic susceptibility tests, but we refer to the entire class of such tests as tests. We included in our scope tests that can differentiate between viral and bacterial infection because these types of tests are included in the National Action Plan. We evaluated the actions taken by federal agencies against the Standards for Internal Control in the Federal Government, relevant National Action Plan objectives, aims, and milestones under Goal 3, and relevant sections in the PACCARB Recommendations for Incentivizing the Development of Vaccines, Diagnostics, and Therapeutics to Combat Antibiotic Resistance. We also evaluated federal agency actions against the leadership and clarity of roles and responsibilities leading practices from GAO s Managing for Results: Key Considerations for Implementing Interagency Collaborative Mechanisms. We focused on these key practices when there was a lack of specifically assigned roles in either the National Action Plan or the PACCARB report for key activities. To identify challenges to developing new treatments for antibiotic- resistant infections and examine federal efforts to address these challenges, we also interviewed 11 randomly selected companies that conduct research and development on new treatments for bacterial infections. 
We included companies that are researching or developing both traditional antibiotics and alternatives to antibiotics which we call nontraditional products in this report and we included companies that had and had not received funding from the Combating Antibiotic- Resistant Bacteria Biopharmaceutical Accelerator (CARB-X) and companies that do and do not have existing FDA-approved drugs on the market. We asked company representatives about challenges in developing new antibiotics they have identified, support they may have received from federal agencies, how effective the support has been to them, and their views on additional incentives that would promote the development of new antibiotics. We also interviewed experts on the topic of antibiotic development and industry stakeholders, specifically The Pew Charitable Trusts and the Biotechnology Innovation Organization. We interviewed federal officials from BARDA, CMS, DOD, FDA, and NIH to learn about their programs and actions to support the development of treatments for antibiotic-resistant infections and requested information about funding for antibiotic R&D from BARDA, DOD, and NIH. We included relevant agency actions that began before the National Action Plan was issued in 2015 if they continued after 2015. Finally, we reviewed literature related to antibiotic development and reports about antibiotic pull incentives written by health policy advisory groups, including the PACCARB, the Transatlantic Taskforce on Antimicrobial Resistance (TATFAR), the DRIVE-AB project, and the Duke Margolis Center for Health Policy. We evaluated the actions taken by federal agencies to help address the challenges to developing new treatments against experts and advisory groups views on additional actions needed and against the principles related to developing strategies outlined in GPRA and the GPRA Modernization Act of 2010. We did not assess challenges to developing products designed to prevent infections, such as vaccines, nor federal actions related to these types of products. To examine federal agency efforts to promote the appropriate use of antibiotics and any challenges that remain, we also analyzed CMS data and related documentation on the quality measures and improvement activities related to antibiotics as part of CMS s Merit-based Incentive Payment System (MIPS) in 2017. Specifically, we identified CMS s antibiotics-related quality measures and improvement activities by conducting a search for the words antibiotic, antimicrobial, bacteria, resistance, and resistant on CMS s MIPS website. We then reviewed CMS s data on the number of MIPS-eligible clinicians who selected and reported on these measures and activities in 2017, the most recently available data. 
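The keyword screen just described is, in effect, a simple text filter over measure titles. As an illustration only, and not part of GAO's methodology, the short Python sketch below shows how such a case-insensitive screen could be expressed; the first two sample titles are drawn from the measure list that follows, and the third is a hypothetical non-antibiotics title included only to show a non-match.

```python
# Illustrative sketch of the keyword screen described above; not part of GAO's methodology.
KEYWORDS = ("antibiotic", "antimicrobial", "bacteria", "resistance", "resistant")

def is_antibiotics_related(measure_title: str) -> bool:
    """Return True if a measure title contains any of the search terms (case-insensitive)."""
    title = measure_title.lower()
    return any(keyword in title for keyword in KEYWORDS)

# Sample titles: the first two appear in the measure list that follows; the third is a
# hypothetical non-antibiotics measure included only to show a non-match.
measures = [
    "Adult Sinusitis: Antibiotic Prescribed for Acute Sinusitis (Overuse)",
    "Avoidance of Antibiotic Treatment in Adults with Acute Bronchitis",
    "Diabetes: Hemoglobin A1c Poor Control",
]

matched = [m for m in measures if is_antibiotics_related(m)]
print(f"{len(matched)} of {len(measures)} sample titles matched the search terms")  # 2 of 3
```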
In 2017, there were nine MIPS quality measures related to antibiotics, as follows: (1) acute otitis externa: systemic antimicrobial therapy - avoidance of inappropriate use; (2) adult sinusitis: antibiotic prescribed for acute sinusitis (overuse); (3) adult sinusitis: appropriate choice of antibiotic: amoxicillin with or without Clavulanate prescribed for patients with acute bacterial sinusitis (appropriate use); (4) appropriate testing for children with pharyngitis; (5) appropriate treatment for children with upper respiratory infection; (6) appropriate treatment of Methicillin-sensitive Staphylococcus aureus bacteremia; (7) avoidance of antibiotic treatment in adults with acute bronchitis; (8) perioperative care: selection of prophylactic antibiotic - first- or second-generation Cephalosporin; and (9) total knee replacement: preoperative antibiotic infusion with proximal tourniquet. In addition, there was one MIPS improvement activity related to antibiotics in 2017: implementation of an antibiotic stewardship program. We reviewed the MIPS data for any obvious outliers or anomalies, and we determined that these data were sufficiently reliable for reporting the number of clinicians who reported implementing these quality measures and improvement activities. In addition, we reviewed aggregated data from CDC on the total number of eligible U.S. hospitals voluntarily reporting their antibiotic use data to a CDC system (the NHSN's Antimicrobial Use Option); we then calculated the percentage of eligible hospitals reporting such data as of January 1, 2020. We assessed the reliability of the aggregated data by reviewing them for any obvious errors or missing data totals and inquiring with CDC officials about their source and any known reliability issues. We determined that these data were sufficiently reliable for reporting hospital participation rates in the system. We also reviewed selected articles on antibiotic use and stewardship published in the literature, compiled from a variety of sources, including CDC documents and experts we interviewed. In addition, we interviewed experts on antibiotic use and stewardship, including representatives from PACCARB, Emory University's School of Medicine, the University of Minnesota's Center for Infectious Disease Research and Policy, The Joint Commission, the Society of Infectious Diseases Pharmacists, The Pew Charitable Trusts, and the Association for Professionals in Infection Control and Epidemiology. We evaluated federal efforts and challenges against relevant National Action Plan objectives and milestones and Executive Order No. 13676. We focused on antibiotic use in the United States, rather than global antibiotic use. We conducted this performance audit from February 2018 to March 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Expert Meeting Participant List We collaborated with the National Academy of Sciences to convene a two-day meeting of experts to inform our work on federal efforts to address antibiotic resistance; the meeting was held on September 17 and 18, 2018. The experts who participated in this meeting are listed below.
Many of these experts gave us additional assistance throughout our work, including by providing additional technical expertise and answering questions, and 10 of these experts reviewed and provided comments on our draft report for technical accuracy. Appendix III: Additional Examples of Federal Efforts to Support Antibiotic Research and Development This appendix contains additional examples of efforts by agencies within the Departments of Health and Human Services, Defense, and Energy to provide support for antibiotic research and development beyond those mentioned in the report. These examples do not comprise the full extent of agencies' efforts. Appendix IV: Additional Information on Federal Efforts to Promote Appropriate Antibiotic Use This appendix contains more detailed information on federal efforts to promote the appropriate use of antibiotics in health care through antibiotic stewardship programs and activities, organized by agency. These examples do not comprise the full extent of agencies' efforts. Appendix V: Comments from the Department of Health and Human Services Appendix VI: GAO Contacts and Staff Acknowledgments <9. Staff acknowledgments> In addition to the contacts named above, John Neumann (Managing Director); Will Hadley, Anne K. Johnson, and Sushil K. Sharma, PhD, DrPH (Assistant Directors); Josey Ballenger, Hayden Huang, PhD, and Laura Tabellion (Analysts-in-Charge); and Amber Sinclair, PhD, made key contributions to this report. Nora Adkins, George Bogart, Jehan Chase, Anika McMillon, Laurie Pachter, Eric Peterson, Sarah Sheehan, Ben Shouse, Sara Sullivan, Walter Vance, Harris Weisz, and Emma Williams also made important contributions.
Why GAO Did This Study
Bacterial infections have become more difficult, and sometimes impossible, to treat due to antibiotic resistance, which occurs when bacteria develop the ability to defeat the available drugs designed to kill them. Concerns about rising rates of resistance to available treatment options prompted the federal government to create the 5-year National Action Plan in 2015. The plan called for federal agencies to strengthen surveillance, advance the development of diagnostic tests and new antibiotics, and slow the emergence of resistant bacteria, among other things.
GAO was asked to review federal efforts to address antibiotic resistance. This report examines federal efforts and challenges related to (1) surveillance of antibiotic resistance, (2) the development and use of diagnostic testing to identify antibiotic resistance, (3) the development of treatments for resistant infections, and (4) appropriate antibiotic use. GAO reviewed literature and agency documents; interviewed agency officials and health care industry, drug industry, and other stakeholders; and held a meeting of international and U.S. experts to obtain their views.
What GAO Found
The precise magnitude of the problem of antibiotic resistance is unknown. The Centers for Disease Control and Prevention (CDC) has made progress in expanding surveillance of infections from certain antibiotic-resistant bacteria in the United States and abroad but faces several challenges.
[Figure omitted. Note: The figure tracks a type of carbapenem-resistant Enterobacteriaceae (CRE), which, according to CDC, is a “nightmare bacteria” resistant to nearly all available antibiotics. Shading indicates CDC confirmed the presence of these bacteria within that state in that year or a previous one.]
CDC faces challenges in conducting surveillance for antibiotic resistance due to the limited data it is able to collect through various surveillance systems. For example, CDC's primary surveillance system for gonorrhea—which CDC classified as an urgent antibiotic resistance threat affecting over half a million patients annually—currently tracks only an estimated 1 to 2 percent of all U.S. cases and only in males. CDC has not fully evaluated the representativeness of the gonorrhea surveillance system's results. However, it could do so, for example, by comparing the trends in their limited sample population with trends it can establish in the overall U.S. population via additional studies. Such an evaluation could give CDC more confidence that the system's data accurately reflect national trends.
Federal agencies have taken steps to advance the development and use of diagnostic tests to identify antibiotic-resistant bacterial infections, but these efforts have limitations. For example, agencies have conducted some studies to establish whether testing can lead to positive health care outcomes, such as reduced rates of antibiotic-resistant infections. However, more such studies are needed, according to experts and agency officials. Without information to guide test usage, clinicians may not be able to select appropriate treatments for their patients. One reason for the insufficient number of studies is that Department of Health and Human Services (HHS) agencies that are in a position to conduct or fund such studies—such as CDC and the Biomedical Advanced Research and Development Authority—disagree about what each agency should do. By clarifying roles and responsibilities, HHS agencies could more effectively address the need for more studies. The resulting studies could help demonstrate the value of diagnostic tests for antibiotic resistance, potentially increasing their use and improving patient care.
Experts warn that the current pipeline of antibiotics in development is insufficient to meet the threat of resistance. Several challenges impede the development of new treatments for resistant infections, notably inadequate return on investment for drug companies largely due to low prices and a limited patient population for whom these treatments would be appropriate. While HHS and Department of Defense agencies have provided financial premarket incentives to support antibiotic research and development, experts, federal officials and antibiotic developers agree that more postmarket incentives are needed to overcome the economic challenges. Advisory groups, including a presidential advisory council, and others have called for new postmarket incentives and identified multiple options for their design, including market entry rewards and reimbursement reform (see figure). However, HHS has not developed a strategy to further incentivize development of new treatments for antibiotic-resistant infections, and it may need to request authority and appropriations to create and implement certain types of incentives. Until such incentives are developed, more drug companies may exit the antibiotic development sector, and the pipeline of new treatments may continue to decrease.
Federal agencies have made several efforts to promote the appropriate use of antibiotics across health care settings through antibiotic stewardship—giving patients the right antibiotic at the right time, in the right dose, and for the right duration. However, key challenges remain. For example, federal agencies require only certain types of health care facilities to implement stewardship programs. In addition, CDC is limited in its ability to monitor and improve appropriate antibiotic use, in part because providers are not generally required to report antibiotic use data to a centralized database. The 5-year National Action Plan for Combating Antibiotic-Resistant Bacteria (National Action Plan) calls for strengthening antibiotic stewardship and for the timely reporting of antibiotic use data across health care settings. An executive order directs an interagency task force—the Combating Antibiotic-Resistant Bacteria (CARB) Task Force, coordinated by HHS—to provide annual updates to the President on, among other things, plans for addressing any barriers to full implementation of the National Action Plan. However, in its progress reports covering the first 4 years of the National Action Plan's implementation, the task force did not identify plans to address barriers to expanding antibiotic stewardship programs or the collection of antibiotic use data. Until it does so, the government will not have reasonable assurance that it is fully implementing the National Action Plan and addressing antibiotic resistance.
What GAO Recommends
GAO is making eight recommendations to strengthen the federal response to combating antibiotic resistance. HHS concurred with seven recommendations and did not concur with one. More details are provided below.
In response to the findings presented in this Highlights, GAO recommends that:
CDC ensure that its evaluation of its surveillance system for antibiotic-resistant gonorrhea includes measures of the system's representativeness of the U.S. population;
HHS identify leadership and clarify roles and responsibilities to assess the clinical outcomes of diagnostic testing;
HHS develop a strategy to further incentivize the development of new treatments for antibiotic-resistant infections, including through the use of postmarket financial incentives;
HHS direct the CARB Task Force to include in its annual updates to the President plans for addressing any barriers preventing full implementation of the National Action Plan.
In addition, GAO is making four recommendations to address other CDC efforts in surveillance and reporting and to address FDA efforts in monitoring diagnostic tests.
HHS did not concur with the recommendation that it develop a strategy that includes the use of postmarket financial incentives to encourage the development of new treatments for antibiotic-resistant infections, citing its ongoing analysis to understand whether postmarket incentives should be included in such a strategy. GAO recognizes the complexity of this issue and maintains that this recommendation is warranted given that experts and others have called for additional postmarket incentives and the insufficiency of the current pipeline of new treatments for antibiotic-resistant infections.
For more information, contact Mary Denigan-Macauley at (202) 512-7114 or deniganmacauleym@gao.gov.
<1. Background> <1.1. Private Student Loan Market> Private student loans are not guaranteed by the federal government. Generally, private lenders underwrite the loans based on the borrower's credit history and ability to repay, and they often require a cosigner. Private student loans generally carry a market interest rate, which can be a variable rate that is higher than that of federal student loans. As of September 30, 2018, five banks held almost half of all private student loan balances. Other private student loan lenders include credit unions and nonbanks: Credit unions originate private student loans either directly or indirectly through a third party. Nonbanks include both for-profit nonbank lenders and nonbank state lenders. For-profit nonbank lenders can originate, service, refinance, and purchase loans. Nonbank state lenders promote affordable access to education by generally offering low, fixed-rate interest rates and low or no origination fees on student loans. As of September 2018, outstanding private student loan balances made up about 8 percent of the $1.56 trillion in total outstanding student loans (see fig. 1). The volume of new private student loans originated has fluctuated, representing about 25 percent of all student loans originated in academic year 2007-2008, 7 percent in 2010-2011 (after the financial crisis), and 11 percent in 2017-2018. <1.2. Consumer Reporting for Private Student Loans> FCRA, the primary federal statute that governs consumer reporting, is designed to promote the accuracy, fairness, and privacy of information in the files of CRAs. FCRA, and its implementing regulation, Regulation V, govern the compilation, maintenance, furnishing, use, and disclosure of consumer report information for credit, insurance, employment, and other eligibility decisions made about consumers. The consumer reporting market includes the following entities: CRAs assemble or evaluate consumer credit information or other consumer information for the purpose of producing consumer reports (commonly known as credit reports). Equifax, Experian, and TransUnion are the three nationwide CRAs. Data furnishers report information about consumers' financial behavior, such as repayment histories, to CRAs. Data furnishers include credit providers (such as private student loan lenders), utilities, and debt collection agencies. Credit report users include banks, employers, and others that use credit reports to make decisions on an individual's eligibility for products and services such as credit, employment, housing, and insurance. FCRA imposes duties on data furnishers with respect to the accuracy of the data they furnish. Data furnishers are required to, among other things, refrain from providing CRAs with information they know or have reasonable cause to believe is inaccurate and develop reasonable written policies and procedures regarding the accuracy of the information they furnish. The Act entitles financial institutions that choose to offer a private student loan rehabilitation program that meets the Act's requirements to a safe harbor from potential inaccurate information claims under FCRA related to the removal of the private student loan default from a credit report. To assist data furnishers in complying with their responsibilities under FCRA, the credit reporting industry has adopted a standard electronic data-reporting format called the Metro 2 Format.
This format includes standards on how and what information furnishers should report to CRAs on private student loans. The information that private student loan lenders furnish to CRAs on their borrowers includes consumer identification; account number; date of last payment; account status, such as in deferment, current, or delinquent (including how many days past due); and, if appropriate, information indicating defaults. An account becomes delinquent on the day after the due date of a payment when the borrower fails to make a full payment. Private student loan lenders policies and terms of loan contracts generally determine when a private student loan is in default. While private student loan lenders may differ in their definitions of what constitutes a default, federal banking regulator policy states that closed- end retail loans (which include private student loans) that become past due 120 cumulative days from the contractual due date should be classified as a loss and charged off. Private student loan lenders can indicate that a loan is in default and they do not anticipate being able to recover losses on it by reporting to CRAs one of a number of Metro 2 Format status codes. Participation in a private student loan rehabilitation program entitles borrowers who successfully complete the program to request that the indicator of a student loan default be removed from their credit report, but the delinquencies leading up to the default would remain on the credit report. Figure 2 shows an example of credit reporting for a borrower who defaults on a private student loan and completes a rehabilitation program. <1.3. Credit Scoring> A credit score is a measure that credit providers use to predict financial behaviors and is typically computed using information from consumer credit reports. Credit scores can help predict the likelihood that a borrower may default on a loan, file an insurance claim, overdraw a bank account, or not pay a utility bill. FICO and VantageScore are the two firms that develop credit score models with nationwide coverage. FICO develops credit score models for distribution by each of the three nationwide CRAs, whereas VantageScore s models are developed across the three CRAs resulting in a single consistent algorithm to assess risk. FICO and VantageScore each have their own proprietary statistical credit score models that choose which consumer information to include in calculations and how to weigh that information. The three nationwide CRAs also develop credit score models derived from their own data. There are different types of credit scores, including generic, industry- specific, and custom. Generic scores are based on a representative sample of all individuals in a CRA s records, and the information used to predict repayment is limited to the information in consumer credit records. Generic scores are designed to predict the likelihood of a borrower not paying as agreed in the future on any type of credit obligation. Both FICO and VantageScore develop generic credit scores. FICO and VantageScore generic scores generally use a range from 300 to 850, with higher numbers representing lower credit risk. For example, VantageScore classifies borrowers in the following categories: subprime (those with a VantageScore of 300 600), near prime (601 660), prime (661 780), and super prime (781 850). 
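The furnishing and rehabilitation mechanics described in this section, together with the VantageScore tiers just listed, can be summarized in a short sketch. The Python below is a simplified stand-in for the example in figure 2, not a depiction of any lender's or CRA's actual system: the field names, the boolean default flag, and the tier function are assumptions made for illustration, and real furnishing uses Metro 2 Format status codes rather than a simple flag.

```python
# Purely illustrative sketch; field names, the boolean default flag, and the helper
# functions are assumptions for this example, not actual Metro 2 Format codes.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Tradeline:
    """Simplified stand-in for a furnished private student loan account record."""
    account_number: str
    date_of_last_payment: str
    payment_history: List[str] = field(default_factory=list)  # e.g., "30", "60", "90" days past due
    days_past_due: int = 0
    default_reported: bool = False  # stands in for a default/charge-off status code

    def update_status(self) -> None:
        # Federal banking regulator policy treats closed-end retail loans that are 120
        # cumulative days past due as a loss to be charged off; lenders' own policies
        # and loan contracts determine when a loan is in default.
        if self.days_past_due >= 120:
            self.default_reported = True

    def apply_rehabilitation(self) -> None:
        # After successful completion of a rehabilitation program, the borrower may request
        # removal of the default indicator; the delinquencies leading up to the default
        # remain on the credit report.
        self.default_reported = False

def vantagescore_tier(score: int) -> str:
    """Map a generic VantageScore (300-850) to the tiers described above."""
    if not 300 <= score <= 850:
        raise ValueError("VantageScore generic scores range from 300 to 850")
    if score <= 600:
        return "subprime"
    if score <= 660:
        return "near prime"
    if score <= 780:
        return "prime"
    return "super prime"

loan = Tradeline(account_number="placeholder-0001", date_of_last_payment="2018-01-15",
                 payment_history=["30", "60", "90", "120"], days_past_due=130)
loan.update_status()
print(loan.default_reported)    # True: reported as defaulted/charged off
loan.apply_rehabilitation()
print(loan.default_reported)    # False: default indicator removed
print(loan.payment_history)     # ['30', '60', '90', '120']: delinquency history remains
print(vantagescore_tier(595))   # subprime
```

The 120-day threshold in the sketch mirrors the charge-off policy described above; in practice, the point at which a loan is treated as in default depends on each lender's policies and the terms of the loan contract.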
A prime borrower is someone who is considered a low-risk borrower and likely to make loan payments on time and repay the loan in full, whereas a subprime borrower has a tarnished or limited credit history. FICO and VantageScore generic scores generally use similar elements in determining a borrower s credit score, including a borrower s payment history, the amounts owed on credit accounts, the length of credit history and types of credit, and the number of recently opened credit accounts and credit inquiries. FICO has developed industry-specific scores for the mortgage, automobile finance, and credit card industries. These scores are designed to predict the likelihood of not paying as agreed in the future on these specific types of credit. In addition, credit providers sometimes use custom credit scores instead of, or in addition to, generic credit scores. Credit providers derive custom scores from credit reports and other information, such as account history, from the lender s own portfolio. The scores can be developed internally by credit providers or with the assistance of external parties such as FICO or the three nationwide CRAs. <1.4. Federal Oversight of Private Student Loans> CFPB has supervisory authority over certain private student loan lenders, including banks and credit unions with over $10 billion in assets and all nonbanks, for compliance with Federal consumer financial laws. CFPB also has supervisory authority over the largest CRAs and many of the entities that furnish information about consumers financial behavior to CRAs. To assess compliance with Federal consumer financial laws, CFPB conducts compliance examinations. According to CFPB, because of its mission and statutory requirement regarding nonbank supervision, it prioritizes its examinations by focusing on risks to consumers rather than risks to institutions. Given the large number, size, and complexity of the entities under its authority, CFPB prioritizes its examinations by focusing on individual product lines rather than all of an institution s products and services. CFPB also has enforcement authority under FCRA regarding certain banks, credit unions, and nonbanks and broad authority to promulgate rules to carry out the purposes of FCRA. The prudential regulators FDIC, Federal Reserve, NCUA, and OCC oversee all banks and most credit unions that offer private student loans. Their oversight includes routine safety and soundness examinations for all regulated entities. These examinations may include a review of operations, including policies, procedures, and practices, to ensure that private student loans are not posing a risk to the entities safety and soundness. Prudential regulators also have supervisory authority for FCRA compliance for banks and certain credit unions with $10 billion or less in assets. <2. No Banks Are Offering Rehabilitation Programs, and Authority Is Unclear for Other Lenders> <2.1. Banks and Credit Unions Are Not Offering Rehabilitation Programs, but Federal Banking Regulators Have Established Approval Processes> As of January 2019, none of the five banks with the largest private student loan portfolios that we contacted offered rehabilitation programs for defaulted private student loans. In addition, officials from the federal banking regulators told us that as of March 2019, no banks had submitted applications to have rehabilitation programs approved. 
Representatives from three of the five banks we contacted told us they had decided not to offer a rehabilitation program, and the other two had not yet made a final determination. Representatives from these five banks provided several reasons they were not offering rehabilitation programs for private student loans. Low delinquency and default rates. All five banks representatives stated that they had low default rates for private student loans, so the demand for these programs would be low for each bank. Availability of predefault payment programs. Representatives of all five banks said they already offer alternative payment programs, such as forbearance, to help prevent defaults, and two of them explicitly noted this as a reason that a rehabilitation program was unnecessary. Operational uncertainties. Most of the banks representatives were not sure how they would operationalize rehabilitation programs. One bank s representatives said that they sell defaulted loans to debt purchasers and that it would be difficult to offer rehabilitation programs for loans that had been sold. Representatives of two other banks said that the banks systems are not able to change the status of a loan once it has defaulted, so they were not certain how their systems would track rehabilitated loans. Another bank s representatives said that they did not know how rehabilitated loans would be included for accounting purposes in developing their financial statements. Reduced borrower incentives to avoid default. Representatives from two banks said they believed the option to rehabilitate a defaulted loan might reduce borrowers incentives to avoid default or to enter a repayment program before default. Risk of compliance violations. One bank representative said a rehabilitation program could put the bank at risk for violations of unfair and deceptive acts and practices if borrowers misunderstood or misinterpreted how much the program would improve their credit scores. Representatives from this bank and another explained that they did not know how much the program would improve credit scores, limiting their ability to describe the program s benefit to borrowers. Representatives from three of these banks and other organizations, however, noted that there could be advantages for banks to offer private student loan rehabilitation programs. Representatives from the banks said these programs could help banks recover some nonperforming debt, and one of these representatives stated the program could be marketed to borrowers as a benefit offered by the bank. A representative of a consumer advocacy group said a rehabilitation program could improve a bank s reputation by distinguishing the bank from peer institutions that do not offer rehabilitation for private student loans. Because NCUA is not one of the federal banking regulators by statutory definition, officials said the Act does not require credit unions to seek approval from the agency before offering a rehabilitation program. NCUA officials told us examiners would likely review private student loan rehabilitation programs for the credit unions that choose to offer them as part of normal safety and soundness examinations. The two credit unions we spoke with which are among the largest credit union providers of private student loans told us they do not plan to offer rehabilitation programs. 
One of these credit unions cited reasons similar to those offered by banks, including a low private student loan default rate that suggested there would be a lack of demand for a rehabilitation program. The other credit union explained that it was worried about the effect of removing defaults from credit reports on its ability to make sound lending decisions. NCUA officials also noted that as of January 2019, they had not received any inquiries from credit unions about these programs. OCC, FDIC, and the Federal Reserve have issued information regarding the availability of private student loan rehabilitation programs to their regulated entities, including how they would review applications. In doing so, the agencies informally coordinated to ensure that the statements issued would contain similar information on rehabilitation programs. The three agencies statements explained that their regulated entities must receive written approval to begin a program and that the relevant agency would provide feedback or notify them of its decision within 120 days of receiving a written application. The agencies will review the proposed program to ensure that it requires borrowers to make a minimum number of consecutive, on-time, monthly payments that demonstrate renewed ability and willingness to repay the loan. <2.2. Uncertainty Exists about Nonbank Lenders Authority and What Information Should Be Removed from a Credit Report> Uncertainty exists regarding two issues with private student loan rehabilitation programs. First, some nonbank private student loan lenders are not certain that they have the authority to implement these programs. Second, the Act does not explain what constitutes a default for the purposes of removing information from credit reports. <2.2.1. Uncertainty about Nonbank State Lenders Authorities> With regard to nonbank state lenders, uncertainty exists about their authority under FCRA to offer private student loan rehabilitation programs that include removing information from credit reports. As discussed previously, for financial institutions such as banks and credit unions, the Act provides an explicit safe harbor to request removal of a private student loan default from a borrower s credit report and remain in compliance with FCRA. However, the Act does not specify that for-profit nonbank lenders and nonbank state lenders have this same authority. Representatives of the five nonbank state lenders we spoke with had different interpretations of their authority to offer rehabilitation programs. At least two nonbank state lenders currently offer rehabilitation programs, and their representatives told us they believed they have the authority to do so. Another nonbank state lender told us its state has legislation pending to implement such a program. In contrast, representatives of two other nonbank state lenders told us they were interested in offering a rehabilitation program but did not think that they had the authority to do so. In addition, representatives from a trade association that represents nonbank state lenders noted that confusion exists among some of their members and they are seeking a way to obtain explicit authority for nonbank lenders to offer rehabilitation programs for their private student loans. Two trade associations that represent nonbank state lenders also told us that some of their members would be interested in offering these programs if it was made explicit that they were allowed to do so. 
CFPB officials told us the agency has not made any determination on whether it plans to clarify for nonbanks including for-profit nonbank lenders and nonbank state lenders if they have the authority under FCRA to have private student loan defaults removed from credit reports for borrowers who have completed a rehabilitation program. CFPB officials said that the agency does not approve or prevent its regulated entities from offering any type of program or product. Unlike for the federal banking regulators, the Act did not require CFPB to approve rehabilitation programs offered by the entities it regulates. However, CFPB does have general FCRA rulemaking authority. It generally also has FCRA enforcement and supervisory responsibilities over its regulated entities, which includes certain entities that originate private student loans. This authority allows the agency to provide written clarification of provisions or define terms as needed. As a result, CFPB could play a role in clarifying for nonbanks whether they are authorized under FCRA to offer private student loan rehabilitation programs. Federal internal control standards state that management should externally communicate the necessary quality information to achieve the entity s objectives. Without clarification from CFPB on nonbanks authority to offer private student loan rehabilitation programs that allow them to delete information from the borrower s credit report, there will continue to be a lack of clarity on this issue among these entities. Providing such clarity could depending on CFPB s interpretation result in additional lenders offering rehabilitation programs that would allow more borrowers the opportunity to participate, or it could help ensure that only those entities CFPB has interpreted as being eligible to offer programs are doing so. <2.2.2. No Standard for What Constitutes a Default > Statutory changes made to FCRA by the Act do not explain what information on a consumer s credit report constitutes a private student loan default that may be removed when a borrower successfully completes a rehabilitation program. According to the three nationwide CRAs and a credit reporting trade association, the term default is not used in credit reporting for private student loans. As discussed previously, private student loan lenders use one of a number of Metro 2 Format status codes to indicate that a loan is in default (i.e., they do not anticipate being able to recover losses on the loan). Representatives of the CRAs and a credit reporting trade association said that private student loan lenders will need to make their own interpretation of what information constitutes a default for the purposes of removing information from a credit report following successful completion of a private student loan rehabilitation program. The statements issued by FDIC, the Federal Reserve, and OCC on rehabilitation programs do not explain what information constitutes a private student loan default that may be removed from borrowers credit reports upon successful completion of a rehabilitation program. Officials from FDIC, the Federal Reserve, and OCC explained that they do not have the authority to interpret what constitutes a private student loan default on credit reports because the responsibilities for interpreting FCRA fall under CFPB. CFPB officials told us they are monitoring the issue but have not yet determined if there is a need to address it. 
Given CFPB s rulemaking authority for FCRA, it could clarify the term default for private student loan lenders. In doing so, CFPB could obtain insight from the prudential regulators and relevant industry groups on how private student loan lenders currently report private student loan defaults on credit reports and on how to develop a consistent standard for what information may be removed. According to federal internal control standards, management should externally communicate the necessary quality information to achieve objectives. This can include obtaining quality information from external parties, such as other regulators and relevant industry groups. Without clarification from CFPB, there may be differences among private student loan lenders in what information they determine constitutes a default and may be removed from a credit report. Variations in lenders interpretations could have different effects on borrowers credit scores and credit records, resulting in different treatment of borrowers by credit providers. This could affect borrowers access to credit or the terms of credit offered, such as interest rates or the size of down payments required on a variety of consumer loans. In addition, as mentioned previously, the credit reporting industry follows a standard reporting format to help ensure the most accurate credit reporting information possible. Without clarification on what information may be removed from credit reports following successful completion of rehabilitation programs, differences in lenders interpretation could introduce inconsistencies in credit reporting data that may affect their accuracy. <3. Private Student Loan Rehabilitation Programs Would Likely Pose Minimal Risks to Financial Institutions> <3.1. Programs Are Expected to Pose Little Safety and Soundness Risk for Banks and Credit Unions> Rehabilitation programs for private student loans are expected to pose minimal additional risk to banks and credit unions safety and soundness. Prudential regulators require that banks and credit unions underwrite student loans to mitigate risks and ensure sound lending practices, and OCC guidance specifies that underwriting practices should minimize the occurrence of defaults and the need for repayment assistance. Lenders generally use underwriting criteria based on borrowers credit information to recognize and account for risks associated with private student loans. According to officials from OCC, FDIC, and the Federal Reserve and representatives from the major bank and credit union private student loan lenders we spoke with, lenders participating in private student loan rehabilitation programs would face minimal additional risks for several reasons: Loans are already classified as a loss. Loans entering a rehabilitation program are likely to be 120 days past due and to have been charged off, and thus they would have already been classified as a loss by banks and credit unions. OCC officials told us a program to rehabilitate these loans would, therefore, pose no additional risks to the safety and soundness of institutions that offer them. Default rates are low, and loans typically use cosigners. Representatives from the five major banks and two credit unions told us that private student loans generally perform well and have low rates of delinquencies and defaults. Aggregate data on the majority of outstanding loan balances show that the default rate for private student loans was below 3 percent from the second quarter of 2014 through the third quarter of 2018. 
Lenders also generally require borrowers of private student loans to have cosigners someone who is liable to make payments on the loan should the student borrower default which helps reduce the risk of the loan not being repaid. Since the academic year 2010 2011, the rate of undergraduate private student loan borrowers with cosigners has exceeded 90 percent. Private student loan portfolios are generally small. Private student loans make up a small portion of the overall loan portfolios for most of the banks and credit unions we spoke with. For four of the five major banks with the largest portfolios of private student loans, these constituted between about 2 percent to 11 percent of their total loan portfolio in 2017. The fifth bank s entire portfolio was education financing, with private student loans accounting for about 93 percent of its 2017 portfolio. For the two credit unions we contacted, private student loans constituted about 2 percent and 6 percent of their total assets in 2018. Private student loan rehabilitation programs may create certain operational costs for banks or credit unions that offer them. However, no representatives of the five banks and two credit unions with whom we spoke were able to provide a cost estimate since none had yet designed or implemented such a program. Representatives from four banks and one credit union we spoke with said that potential costs to implement a rehabilitation program would be associated with information technology systems, designing and developing new systems to manage the program, increased human resource needs, additional communications with borrowers, credit reporting, compliance, monitoring, risk management, and any related legal fees. In addition, like any other type of consumer loan, banks and credit unions could face potential risks with private student loan rehabilitation programs, including operational, compliance, or reputational risks. For example, a representative of one bank cited operational risks such as those that could stem from errors in credit reporting or inadequate collection practices for rehabilitated private student loans. <3.2. Rehabilitation Programs Are Expected to Have Little Effect on Financial Institutions Ability to Make Prudent Lending Decisions> One concern about removing information from credit reports as authorized in connection with the Act s loan rehabilitation programs is that it could degrade the quality of the credit information that credit providers use to assess the creditworthiness of potential borrowers. However, the removal of defaults from credit reports resulting from loan rehabilitation programs is unlikely to affect financial institutions ability to make sound lending decisions, according to prudential regulator officials and representatives from three private student lenders and three other credit providers with whom we spoke. OCC and FDIC officials and representatives from two of these private student lenders noted that because rehabilitation programs leave the delinquencies leading up to the default on borrowers credit reports, lenders would still be able to adequately assess borrower risk. In addition, representatives from one automobile lender and one mortgage lender said that over time, the methods they use to assess creditworthiness would be able to detect whether rehabilitated private student loans were affecting their ability to identify risk patterns in credit information and they could adjust the methods accordingly. 
Representatives from the Federal Reserve provided three additional reasons why they expected that rehabilitation programs would have little effect on banks and credit unions lending decisions. First, under the statutory requirement for private student loan rehabilitation, removal of a default from a borrower s credit report can only occur once per loan. A single default removal would be unlikely to distort the accuracy of credit reporting in general. Second, they said that borrowers who have successfully completed a rehabilitation program by making consecutive on-time payments have demonstrated a proven repayment record, and therefore they likely represent a better credit risk. Finally, because participation in the private student loan rehabilitation program is expected to be low, its effect on the soundness of financial institutions lending decisions is expected to be minimal. <4. Private Student Loan Rehabilitation Programs Would Likely Result In Minimal Improvements in Borrowers Access to Credit> <4.1. Effect of Rehabilitation Programs on Most Borrowers Access to Credit Would Likely Be Small> The effects of private student loan rehabilitation programs on most borrowers access to credit would likely be minimal. A simulation conducted by VantageScore found that removing a student loan default increased a borrower s credit score by 8 points, on average. An 8 point rise in a borrower s credit score within VantageScore s range of 300 to 850 represents only a very small improvement to that borrower s creditworthiness. Therefore, most borrowers who successfully completed a private student loan rehabilitation program would likely see minimal improvement in their access to credit, particularly for credit where the decision-making is based solely on generic credit scores. Factors Credit Providers Consider Prior to Lending Credit providers assess a borrower s creditworthiness based on several factors, including the following: Generic credit scores: Credit providers can rely solely on generic credit scores, such as those developed by Fair Isaac Corporation and VantageScore Solutions, LLC, to make lending decisions. Credit providers generally do not provide credit to borrowers whose scores do not meet a minimum threshold. Industry-specific credit scores: Certain types of credit providers, such as mortgage lenders, automobile loan lenders, and credit card issuers, may use industry-specific credit scores rather than generic credit scores to make lending decisions. This is because these scores may help them better predict lending risks specific to their industry. Internal credit reviews: Credit providers can customize methods unique to their institution that review different aspects of borrowers credit information, such as debt-to-income ratios, employment history, and borrowers existing relationships with the institution. Credit providers may also develop custom credit scores that are tailored to their specific needs and include factors they have deemed important in predicting risks of nonpayment. Credit providers incorporate their own internal data in these scores as well as information contained in borrowers credit reports. The effect of a rehabilitation program on credit scores will likely be somewhat greater for borrowers with lower credit scores, and smaller for borrowers with higher credit scores. 
For example, the VantageScore simulation suggests that borrowers in the subprime range (with scores of 300 600) could see score increases of 11 points, on average, while borrowers in the prime (661 780) and super prime (781 850) ranges could see increases of less than 1 point, on average (see fig. 3). The effect of removing a default from a credit report varies among borrowers because a credit score is influenced by other information in a borrower s credit report, such as other outstanding derogatory credit markers, the length of time since the default, and other types of outstanding loans. Reasons that removing a student loan default may improve a borrower s credit score and access to credit only minimally include the following: Delinquencies remain in the credit report. A key reason that removing a student loan default has a small effect on a credit score, according to VantageScore officials, is that the delinquencies leading to that default remain in the credit report for borrowers who successfully complete rehabilitation programs. Adding a delinquency in the simulation decreased a credit score by 61 points, on average. Thus, the simulation suggests that the increase in a credit score from removing a student loan default is not as substantial as the decrease from adding the initial delinquency. Credit scoring treats student loans differently. Some credit score models place less emphasis on student loans than on other types of consumer loans in predicting the risk of nonpayment. One credit scoring firm and two CRAs we spoke with said that student loans have a lower weight than other types of consumer loans in their generic credit scoring algorithms. They explained that there are fewer student loans than other types of consumer loans in the sample they use to develop the score, and student debt has proved to be less important statistically at predicting credit risk in their models. Student loans also may have less weight in predicting defaults in industry- specific or custom models of scores. A representative of one credit scoring firm said the algorithm for an industry-specific credit score that predicts the risk of nonpayment on a credit card may place less emphasis on a student loan than the algorithm for a generic credit score that is meant to predict risk more broadly. Further, CRA officials we spoke with said that because their custom credit scoring models are specific to clients needs, the models may not include student loans as a predictor of default at all, or they may place greater emphasis on student loans, depending on the clients needs. Borrowers in default typically already have poor credit. Borrowers who complete a rehabilitation program have a high likelihood of having other derogatory credit items in their credit report, in addition to the student loan delinquencies that led to the default, according to a study conducted by a research organization, several CRAs, and one credit provider with whom we spoke. The VantageScore simulation also showed that borrowers who had at least one student loan delinquency or default in their credit profile had an average of five derogatory credit items in their profile. Because student loan defaults and student loan delinquencies are both negative credit events that affect credit providers credit assessment methods, the removal of one student loan default from a borrower s credit report likely will not make a large difference in how credit providers evaluate the borrower. <4.2. 
Programs May Hold Additional Benefits as Well as Disadvantages for Borrowers> Consumer advocates and academic studies cited potential benefits of rehabilitation programs apart from their effect on credit scores and access to credit: Borrowers defaulting on private student loans issued by nonbank state lenders could have wage garnishments stopped after successfully completing a rehabilitation program. Rehabilitation would stop debt collection efforts against a private student loan borrower. Participating in a loan modification program for one loan may help borrowers better meet their other loan obligations, according to studies we reviewed. For example, one study found that participation in mortgage modification programs was associated with lower delinquency rates on nonmortgage loans. However, programs may also have some disadvantages or pose challenges to borrowers, according to representatives from consumer advocacy groups and academic sources: A rehabilitation program may restart the statute of limitations on loan collections, according to representatives of consumer advocacy groups. Borrowers who redefault following entry into a rehabilitation program near the end of the statute of limitations on their debt could have collection efforts extended on these loans. Programs may extend adverse credit reporting. Generally, negative credit information stays on consumer reports for 7 or 10 years; therefore, depending on when a borrower enters into a rehabilitation program, a payment on the loan might prolong the adverse credit reporting for that account. The lack of income-driven repayment programs offered to borrowers in the private student loan market means that borrowers who complete rehabilitation programs may have a high likelihood of redefaulting on their loans. Because removing adverse information from credit reports does not change a borrower s underlying creditworthiness, improved credit scores and access to credit may cause borrowers to borrow too much relative to their ability and willingness to pay. For example, one study found that for consumers who had filed for bankruptcy, their FICO scores and credit lines increased within the first year after the bankruptcy was removed from their credit report. However, the study found the initial credit score increase had disappeared by about 18 months after the bankruptcy was removed and that debt and delinquency were higher than expected, increasing the probability of a future default. <5. Conclusions> Private student loan rehabilitation programs can provide an opportunity for private student loan borrowers to help repair their credit reports. However, some nonbank state lenders have different interpretations of whether FCRA authorizes them to offer such programs. During our review, CFPB had not determined if it would clarify these uncertainties for nonbank state lenders and other nonbank private student loan lenders. Providing such clarity could depending on CFPB s interpretation result in additional lenders offering rehabilitation (allowing more borrowers the opportunity to participate), or help to ensure that only entities deemed eligible by CFPB to offer programs are doing so. In addition, the Act does not explain what information on a consumer s credit report constitutes a private student loan default that may be removed following the successful completion of a private student loan rehabilitation program. 
Without clarification from CFPB (after consulting with the prudential regulators and relevant industry groups) on what information in a credit report constitutes a private student loan default that may be removed, lenders may be inconsistent in the credit report information they remove. As a result, variations in lenders' interpretations could have different effects on borrowers' credit scores and credit records, which could affect how they are treated by credit providers and could also result in inconsistencies that affect the accuracy of credit reporting data.

<6. Recommendations for Executive Action>

We are making the following two recommendations to CFPB:

The Director of CFPB should provide written clarification to nonbank private student loan lenders on their authorities under FCRA to offer private student loan rehabilitation programs that include removing information from credit reports. (Recommendation 1)

The Director of CFPB, after consulting with the prudential regulators and relevant industry groups, should provide written clarification on what information in a consumer's credit report constitutes a private student loan reported default that may be removed after successful completion of a private student loan rehabilitation program. (Recommendation 2)

<7. Agency Comments and Our Evaluation>

We provided a draft copy of this report to CFPB, the Department of Education, FDIC, the Federal Reserve, the Federal Trade Commission, NCUA, OCC, and the Department of the Treasury for review and comment. We also provided FICO and VantageScore excerpts of the draft report for review and comment. CFPB and NCUA provided written comments, which have been reproduced in appendixes II and III, respectively. FDIC, the Federal Trade Commission, OCC, and the Department of the Treasury provided technical comments on the draft report, which we have incorporated, as appropriate. The Department of Education and the Federal Reserve did not provide any comments on the draft of this report. FICO and VantageScore provided technical comments, which we have incorporated, as appropriate. In its written response, CFPB stated that it does not plan to act on our first recommendation to provide written clarification to nonbank private student loan lenders on their authorities under FCRA to offer private student loan rehabilitation programs. CFPB stated, and we agree, that the Act does not regulate the authority of private student loan lenders that are not included in FCRA's definition of a financial institution, nor direct financial institutions that are not supervised by a federal banking agency to seek CFPB's approval concerning the terms and conditions of rehabilitation programs. However, CFPB's written response does not discuss the authority of private student loan lenders that potentially fall outside FCRA's definition of a financial institution to offer rehabilitation programs that include removing information from credit reports. As we discuss in the report, uncertainty exists among nonbank private student loan lenders regarding their authority to implement such programs. We maintain that although the Act does not require CFPB to act on this issue, CFPB could play a role in clarifying whether FCRA authorizes nonbanks to offer rehabilitation programs that enable the lender to obtain legal protection for removal of default information from a credit report.
CFPB intervention is warranted given the lack of clarity in the private student lending industry and is consistent with CFPB's supervisory authority over nonbank financial institutions and its FCRA enforcement and rulemaking authorities. We do not suggest that CFPB play a role in approving rehabilitation programs. As we note in the report, clarification of nonbanks' authorities could result in additional lenders offering rehabilitation programs and providing more consistent opportunities for private student loan borrowers, or it could help ensure that only those entities authorized to offer programs are doing so. With respect to our second recommendation on providing written clarification on what information in a consumer's credit report constitutes a private student loan reported default that may be removed after successful completion of a private student loan rehabilitation program, CFPB's letter states that such clarification is premature because of ongoing work by the Consumer Data Industry Association. The letter states that after that work is completed, CFPB will consult with the relevant regulators and other interested parties to determine if additional guidance or clarification is needed. As we stated in the report, we are aware of the work of the Consumer Data Industry Association to update the credit reporting guidelines for private student loans. We maintain that this work presents a good opportunity for CFPB to participate in these discussions and to work in conjunction with the industry and other relevant regulators to help avoid any contradiction between what CFPB would determine in isolation and any determination made by industry. Further, such participation would allow CFPB to weigh in on legal and policy issues from the start, potentially avoiding any need for future rulemaking. In addition, CFPB's involvement in this determination and issuance of clarification would help ensure more consistent treatment among borrowers participating in private student loan rehabilitation programs, as well as consistency in credit reporting information. NCUA's written response stated that federal credit unions were authorized to offer rehabilitation programs for private student loan borrowers prior to the Act and that federal credit unions are not required to obtain review and approval from NCUA to offer such programs. The letter notes, however, that the Act requires federal credit unions that offer such programs to remove private student loan defaults from consumer credit reports if borrowers successfully complete a rehabilitation program. NCUA noted that even though removal of the default may result in a relatively small credit score increase, this can benefit credit union members. NCUA stated that it stands ready to assist CFPB in implementing the report's two recommendations. We are sending copies of this report to CFPB, the Department of Education, FDIC, the Federal Reserve, the Federal Trade Commission, NCUA, OCC, the Department of the Treasury, the appropriate congressional committees and members, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Appendix I: Objectives, Scope, and Methodology

Our objectives were to examine (1) the factors affecting financial institutions' participation in private student loan rehabilitation programs, (2) the risks that these programs may pose to financial institutions, and (3) the effects that these programs may have on student loan borrowers' access to future credit. To examine the factors that affect financial institutions' participation in private student loan rehabilitation programs and how the federal banking regulators are implementing the Economic Growth, Regulatory Relief, and Consumer Protection Act's (the Act) provisions on private student loan rehabilitation programs, we reviewed the statements issued by the three regulators tasked with approving the loan rehabilitation programs of their regulated entities, the Board of Governors of the Federal Reserve System (Federal Reserve), Federal Deposit Insurance Corporation (FDIC), and Office of the Comptroller of the Currency (OCC), as well as OCC's examiner guidance. We also interviewed officials from these regulators about their time frames for issuing statements, what topics the statements cover, and how they coordinated in issuing the statements. We reviewed the legal authorities of the Consumer Financial Protection Bureau (CFPB) and National Credit Union Administration (NCUA), which oversee nonbank private student loan lenders and most credit unions that issue private student loans, respectively, concerning private student loan rehabilitation programs and the legislative history of the Act's provisions on the programs. Finally, we interviewed officials from NCUA and CFPB about their authorities related to implementing the Act's provisions on private student loan rehabilitation programs and whether they planned to take any actions related to the provisions. In addition, we interviewed representatives from a nongeneralizable sample of 15 private student loan lenders: the five largest bank lenders, two of the largest credit union lenders, and eight nonbank financial institutions (nonbank). The eight nonbank lenders included three for-profit nonbank lenders and five nonprofit state-affiliated lenders (nonbank state lenders). We asked these lenders about their decisions to offer private student loan rehabilitation programs, risks and costs associated with the programs, and the effects that such programs could have on their lending decisions. We identified the five largest bank lenders by reviewing data from MeasureOne, a private data analytics company that studies the private student loan market, and discussions with officials from the Federal Reserve, FDIC, OCC, and CFPB. We assessed the reliability of data from MeasureOne through discussions with representatives from the company on the methodology used to develop its estimates and its internal controls. We determined that this data source was sufficiently reliable for selecting a sample of private student lenders to interview about participation in rehabilitation programs. We reviewed these five banks' 2017 10-K reports (annual financial filings with the Securities and Exchange Commission) to verify the size of their student loan portfolios. We selected the two credit unions to interview by reviewing 2018 NCUA data on credit unions' portfolios to identify two credit unions that were among the largest credit union private student loan lenders.
To select the for-profit nonbank lenders, we used suggestions from officials at CFPB, OCC, and the Department of Education, as well as reports from private sources that contained information on nonbank private student loan lenders. We selected nonbank state lenders based on information that indicated they were operating or interested in offering rehabilitation programs. Sources of this information included the Education Finance Council's 2018-2019 NonProfit & State-Based Education Loan Handbook, an interview with the Education Finance Council, and information received from a 2013 CFPB Request for Information Regarding an Initiative to Promote Student Loan Affordability. Because this sample is nongeneralizable, our results cannot be generalized to all private student loan lenders. To examine the risks, if any, that private student loan rehabilitation programs pose to financial institutions, we reviewed bank and credit union regulator policies and guidance on private student lending. We also analyzed data on delinquency and default rates of private student loans. To do this, we reviewed industry data from MeasureOne and the 2017 10-K filings for the five banks whose representatives we interviewed. We assessed the reliability of MeasureOne's performance data through discussions with representatives from the company on the methodology it uses to develop these metrics and its internal controls. We determined that this data source was sufficiently reliable for assessing the performance of banks' portfolios of private student loans. For these five banks, we also used the 10-K filings to estimate the volume of the portion of their portfolios that was composed of private student loans. We also compared private student loan default rates to default rates of other types of consumer loans, including mortgages, credit cards, and automobile loans. To do this, we used data from FDIC's Statistics on Depository Institutions database to analyze indicators of asset quality for mortgages, credit cards, and automobile loans from 2013 through 2017. We assessed the reliability of FDIC's Statistics on Depository Institutions database by reviewing related documentation and conducting testing for missing data, outliers, or any obvious errors. We determined that this data source was sufficiently reliable for assessing the performance and risk of banks' portfolios of private student loans and other types of consumer loans. We also interviewed officials from the Federal Reserve, FDIC, NCUA, and OCC about the types of costs and risks that could be associated with private student loan rehabilitation programs. In addition, we interviewed representatives of our nongeneralizable sample of 15 private student loan lenders about the potential risks and costs of offering rehabilitation programs. To assess potential risks of private student loan rehabilitation programs for other types of financial institutions, we interviewed a nongeneralizable sample of seven credit providers about how these programs could affect their ability to make sound lending decisions. We focused on financial institutions that offer mortgage loans, automobile loans, and credit cards. According to data from the 2016 Survey of Consumer Finances, these are the most common types of debt consumers hold. We selected a nongeneralizable sample of banks and nonbank financial institutions that provide these types of credit.
We selected the bank credit providers using data from FDIC's Statistics on Depository Institutions by identifying the mortgage and automobile loan lenders and credit card issuers that were among the largest holders of assets in these lending categories as of the fourth quarter 2017. To identify nonbank financial institution lenders, we reviewed an industry report to identify some of the larger nonbank mortgage lenders, and we reviewed a list prepared by CFPB of larger industry participants in the automobile finance market industry. We judgmentally selected the final sample of these credit providers based on their size and, to the extent applicable, their federal regulator to obtain a diversity of opinions. We determined that industry reports, CFPB's list of larger industry participants, and 10-K filings were sufficiently reliable for selecting a sample of nonbank financial institutions to interview about risks posed by rehabilitation programs. Because this sample is nongeneralizable, our results cannot be generalized to all credit providers. We also interviewed representatives of four industry groups and two trade associations that work with these credit providers and student loan borrowers on the types of risks and costs that rehabilitation programs could create for lenders. To examine the effects that private student loan rehabilitation programs may have on student loan borrowers' access to future credit, we conducted a literature search for studies that empirically analyzed the effects on credit scores and access to credit of adverse credit events, such as foreclosures or bankruptcies; loan modifications, broadly defined; and removal of accurate but adverse information from credit reports, such as a bankruptcy. We identified these studies through our initial background search, targeted searches of the EconLit database, and a search of the Federal Reserve Bank of New York Center for Microeconomic Data publications, and through bibliographies of studies we reviewed. We also asked VantageScore Solutions, LLC (VantageScore), a credit scoring firm, to conduct a quantitative analysis simulating the effect of adding a student loan delinquency to and removing a student loan default from a borrower's credit profile on its VantageScore 3.0 credit score. The analysis was conducted using a sample of VantageScore's data that it obtained from the three nationwide CRAs and that represents actual credit profiles of borrowers. VantageScore analyzed data for borrowers with at least one outstanding student loan with a balance greater than $0. Table 1 contains the results of the simulation and information on the number and characteristics of borrowers whose credit profiles were analyzed. The results of the simulation are specific to changes in the VantageScore 3.0 credit score. The simulated results represent averages for borrowers whose credit profiles were analyzed and are meant to be illustrative. Additionally, because this was a simulation, it is unlikely that any one borrower's credit profile exactly matches the average profiles used in the simulations. The results of the VantageScore analysis only apply to VantageScore 3.0 credit scores in the 2014-2016, 2015-2017, and 2016-2018 cohorts of borrowers and may not be generalized to other VantageScore credit scores, to Fair Isaac Corporation (FICO) credit scores, or for different cohorts in different years.
While we present only the results of the most recent cohort (2016-2018) in our report, VantageScore simulated the analysis across three cohorts to determine whether the results varied substantially over time. The results for all three cohorts were similar. Through reviewing documentation and conducting interviews, we determined that the data used by VantageScore to conduct this analysis were sufficiently reliable for simulating the effects of derogatory credit marks on borrowers' credit scores. FICO declined our request to develop a similar analysis. To examine how a rehabilitation program may affect borrowers' future access to credit, we interviewed officials from CFPB, the Department of Education, FDIC, the Federal Reserve, Federal Trade Commission, NCUA, OCC, and the Department of the Treasury. We also interviewed representatives of the four consumer reporting agencies that collect and report information on student loans (Equifax, Experian, Innovis, and TransUnion) and the two credit scoring firms that develop credit score models with nationwide coverage (FICO and VantageScore). We also interviewed representatives from the 15 private student loan lenders and seven credit providers described above, as well as banking, credit reporting, and student loan lending and servicing industry groups and consumer advocacy organizations. We conducted this performance audit from July 2018 to May 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Consumer Financial Protection Bureau

Appendix III: Comments from the National Credit Union Administration

Appendix IV: GAO Contact and Staff Acknowledgments

<8. GAO Contact>

<9. Staff Acknowledgments>

In addition to the contact named above, Jill Naamane (Assistant Director), Christine McGinty (Analyst-in-Charge), Jill Lacey, Courtney LaFountain, Jon D. Menaster, Tovah Rom, Jessica Sandler, Eric Schwab, and Aisha Shafi made key contributions to this report. Also contributing to this report were Melissa Emrey-Arras, Debra Prescott, and Jena Sinkfield.

Why GAO Did This Study
The Economic Growth, Regulatory Relief, and Consumer Protection Act enabled lenders to offer a rehabilitation program to private student loan borrowers who have a reported default on their credit report. The lender may remove the reported default from credit reports if the borrower meets certain conditions. Congress included a provision in statute for GAO to review the implementation and effects of these programs.
This report examines (1) the factors affecting financial institutions' participation in private student loan rehabilitation programs, (2) the risks the programs may pose to financial institutions, and (3) the effects the programs may have on student loan borrowers' access to credit. GAO reviewed applicable statutes and agency guidance. GAO also asked a credit scoring firm to simulate the effect on borrowers' credit scores of removing student loan defaults. GAO also interviewed representatives of regulators, some of the largest private student loan lenders, other credit providers, credit bureaus, credit scoring firms, and industry and consumer advocacy organizations.
What GAO Found
The five largest banks that provide private student loans—student loans that are not guaranteed by the federal government—told GAO that they do not offer private student loan rehabilitation programs because few private student loan borrowers are in default, and because they already offer existing repayment programs to assist distressed borrowers. (Loan rehabilitation programs described in the Economic Growth, Regulatory Relief, and Consumer Protection Act (the Act) enable financial institutions to remove reported defaults from credit reports after borrowers make a number of consecutive, on-time payments.) Some nonbank private student loan lenders offer rehabilitation programs, but others do not, because they believe the Act does not authorize them to do so. Clarification of this matter by the Consumer Financial Protection Bureau (CFPB)—which oversees credit reporting and nonbank lenders—could enable more borrowers to participate in these programs or ensure that only eligible entities offer them.
Private student loan rehabilitation programs are expected to pose minimal additional risks to financial institutions. Private student loans compose a small portion of most banks' portfolios and have consistently low default rates. Banks mitigate credit risks by requiring cosigners for almost all private student loans. Rehabilitation programs are also unlikely to affect financial institutions' ability to make sound lending decisions, in part because the programs leave some derogatory credit information—such as delinquencies leading to the default—in the credit reports.
Borrowers completing private student loan rehabilitation programs would likely experience minimal improvement in their access to credit. Removing a student loan default from a credit profile would increase the borrower's credit score by only about 8 points, on average, according to a simulation that a credit scoring firm conducted for GAO. The effect of removing the default was greater for borrowers with lower credit scores and smaller for borrowers with higher credit scores (see figure). Reasons that removing a student loan default could have little effect on a credit score include that the delinquencies leading to that default—which also negatively affect credit scores—remain in the credit report and borrowers in default may already have poor credit.
What GAO Recommends
GAO is making two recommendations, including that CFPB provide written clarification to nonbank private student loan lenders on their authority to offer private student loan rehabilitation programs. CFPB does not plan to take action on this recommendation and stated that it was premature to take action on the second recommendation. GAO maintains that both recommendations are valid, as discussed in this report.
GAO-19-693T

<1. Background>

The vast majority of the 42 railroads subject to the statutory mandate to implement PTC (including 30 commuter railroads, Amtrak, seven Class I and four Class II and III freight railroads) are implementing one of three types of PTC systems. These systems include the Interoperable Electronic Train Management System (I-ETMS), the Advanced Civil Speed Enforcement System II (ACSES), and Enhanced Automated Train Control (E-ATC). While these PTC systems are functionally similar, the technologies they use differ. For example, to determine a train's location, ACSES and E-ATC rely on equipment embedded on the track while I-ETMS uses Global Positioning System information. ACSES and E-ATC both supplement existing train control systems to provide all required PTC functionality, while I-ETMS was designed as a new system to provide this functionality. As noted above, testing is one of the many steps to achieving full implementation. Through multiple stages of testing, which are summarized below, railroads must demonstrate that the PTC system meets functional requirements.

Laboratory testing: locomotive and wayside equipment testing in a lab environment to verify that individual components function as designed.

Field testing: includes several different tests of individual components and the overall system, such as testing each locomotive type to verify that it meets functional requirements and field integration testing, a key implementation milestone, to verify that each PTC component is integrated and functioning safely as designed.

Revenue service demonstration (RSD): an advanced form of field testing in which the railroad operates PTC-equipped trains in regular service under specific conditions. RSD is intended to validate the performance of the PTC system as a whole and to test the system under normal, real-world operations.

Using results from field and RSD testing, combined with other information, host railroads must then submit a safety plan to FRA for system certification and approval. We previously reported that these safety plans have been up to 5,000 pages in length. Once FRA approves a safety plan, the railroad receives system certification, which is required for full implementation, and is then authorized to operate the PTC system in revenue service. According to FRA officials, the FRA may impose conditions to the PTC safety plan approval as necessary to ensure safety, resulting in a conditional certification. Interoperability is achieved when the locomotives of any host railroad and tenant railroad operating over the same track segment can successfully communicate with and respond to the other railroad's PTC system, allowing uninterrupted movements over property boundaries. For example, when a locomotive enters another railroad's territory as a tenant, it immediately needs information about the upcoming track, such as any temporary speed restrictions in place due to track work (see fig. 1). To achieve interoperability, railroads have to complete a series of steps including (1) additional installation work (such as installing equipment on a tenant railroad's locomotives) and scheduling (such as coordinating with the relevant railroad to reach any needed agreements and identify dates for testing), (2) laboratory testing, (3) field testing, and (4) RSD or revenue service operations.
Many railroads will complete much of the implementation for their own PTC systems, such as starting RSD on some or most of their track, before they begin to take steps to achieve interoperability with other railroads. However, a railroad can take steps to achieve interoperability with other railroads while simultaneously completing field testing or other stages of testing on its own PTC system. FRA is responsible for overseeing railroads' implementation of PTC, and the agency monitors progress and provides direct assistance to railroads implementing PTC. For example, each railroad had to develop an FRA-approved PTC implementation plan that includes project schedules and milestones for certain activities, and a railroad is required to report quarterly and annually to FRA on its PTC implementation status relative to its implementation plan. FRA also provides technical assistance to railroads, addresses questions, and reviews and approves railroads' documentation, including test and safety plans. FRA has a national PTC project manager, designated PTC specialists in the eight FRA regions, and approximately a dozen engineers, test monitors, and other staff responsible for overseeing technical aspects of implementation. FRA also has oversight tools, which include authority to impose civil penalties when a railroad fails to meet certain statutory PTC requirements. Since 2017, FRA reports that it has assessed nearly $400,000 in civil penalties against railroads that failed to comply with their implementation plan milestones or reporting requirements.

<2. Railroads Continue to Make Progress Implementing PTC, but Significant Work Remains to Achieve Interoperability>

<2.1. Railroads' Implementation of Their Own Systems Has Advanced, but Some Commuter and Smaller Freight Railroads Remain in the Early Stages of Testing>

Since the end of 2018, some railroads have reported making progress on testing and implementation of their own PTC systems. Figure 2 shows the 42 railroads' reported progress by PTC implementation stage. Six railroads (two Class Is and four commuters) reported to FRA that they had implemented PTC on all of their own territories but had not completed interoperability as of March 31, 2019, and almost all these railroads reported being in this stage at the end of 2018. In addition, as of March 31, 2019, no additional railroads beyond the four that were complete at the end of 2018 reported reaching full implementation. Nearly all railroads still implementing PTC plan to reach full implementation in the last quarter of 2020, based on our analysis of railroads' extension requests. Few railroads reported moving into RSD during the first quarter of 2019, and the extent of RSD testing being conducted by railroads in this stage varied considerably. Of the 19 railroads that reported RSD testing on some portion of their own track as of March 31, about half (9 of 19) reported RSD testing on more than 75 percent of their total route miles, while about a quarter (5 of 19) reported RSD testing on less than 25 percent of their total route miles. RSD testing also varied between Class I railroads and commuter railroads. On average, the 5 Class I railroads in this stage reported RSD on 86 percent of route miles, while commuter railroads reported an average of 39 percent of route miles in RSD. Moreover, based on our analysis, 11 railroads (7 commuters and 4 Class II and III railroads) reported that they remained in field testing as of March 31, 2019.
Similar to railroads in RSD testing, the extent of field testing reported by railroads varied. Of the 11 railroads in field testing, most (7) reported field testing on the majority or all of their route miles, whereas 4 railroads (all commuters) reported conducting field testing on less than half of their route miles. Based on railroads' responses to our questionnaire, railroads' PTC implementation status did not change significantly as of May 31, 2019; two additional railroads (both commuters) began RSD testing on some portion of their track, and one commuter railroad began field testing.

<2.2. Host Railroads Have Achieved Interoperability with Less Than 20 Percent of Tenants, but Nearly All Railroads Have Started Interoperability Planning>

As of March 31, 2019, 11 of the 31 host railroads that must have interoperable PTC systems reported to FRA that they had achieved interoperability with at least 1 of their tenant railroads. Collectively, of the 227 unique host-tenant relationships that require interoperability, FRA reported that railroads had achieved interoperability for 38 (17 percent) of these relationships. The number of tenants each railroad must work to achieve interoperability with ranges from 1 to 31 railroads, based on railroad reports to FRA. For example, Class I railroads, as host railroads, average about 18 tenants, while commuter railroads average about 3 tenants. A railroad does not generally start work to achieve interoperability with all the railroads it interoperates with at once, according to FRA; instead a railroad will prioritize its interoperability work. For example, representatives from one Class I railroad we interviewed said it prioritized achieving interoperability in the following sequence: first with commuter-railroad tenants, given the need to ensure passenger safety; second with other Class I railroads, given the high total miles of track they share; and finally with smaller Class III railroads. In addition, a railroad may be in multiple interoperability steps (e.g., installing, testing) with different tenants at the same time. FRA counts a relationship as having achieved interoperability if the tenant is operating PTC on all of the host's track miles. This binary measure for interoperability (that is, achieved or not) does not describe the extent to which railroads have started work on interoperability or, according to representatives from two railroads we interviewed, reflect when interoperability has been achieved along most but not all of its host's track. Railroads reported to FRA that they had begun work on interoperability for more than 90 percent of the remaining host-tenant relationships that need to achieve interoperability. In particular, based on their quarterly reports, railroads were installing for 82 host-tenant relationships and testing for 89 host-tenant relationships as of March 31, 2019. Overall, the status of interoperability work did not vary much among Class I, commuter, and Class II and III railroads. FRA officials and others we spoke with could not provide an estimate of how long it takes on average for two railroads to complete the individual steps to achieve interoperability. Representatives from industry associations we interviewed said that it can vary. An FRA specialist we interviewed agreed, explaining that interoperability field testing, for example, varies based on track availability. One railroad might complete testing in 4 days while another railroad might need weeks because it can only test at specific times.
In its quarterly reports, FRA asks host railroads to provide the scheduled date for completing interoperability testing with each tenant railroad. As of March 31, 2019, seven railroads reported that they did not anticipate completing interoperability testing with at least one tenant until the last quarter of 2020.

<2.3. Railroads Continue to Report Challenges with Vendors and Software, and Face New Interoperability Challenges>

In responding to our May 2019 questionnaire, most railroads reported that vendor and software issues remain major or moderate challenges for PTC implementation. As part of our ongoing work related to PTC, we have reported that railroads have faced challenges associated with the limited number of vendors that design PTC systems, provide the software and hardware, and conduct testing. However, as representatives of half of the railroads we interviewed emphasized, vendor and software issues are more acute now because as the 2020 deadline nears, less time remains to address these issues and associated delays. Software and vendor issues can be interrelated as a small pool of vendors develop and update the software that supports railroads' PTC systems. Representatives from several railroads and FRA specialists we interviewed said that software issues routinely arise in lab testing, field testing, and RSD that require vendor revisions before a railroad's PTC implementation can continue. For example, representatives from one railroad said that existing software defects affecting its PTC system must be addressed and a new version of the software is needed before they can start RSD. They added that they had no control over this process, as they must rely on the vendor to provide reliable software. Representatives from this railroad also noted that resolving software issues is often not entirely within a railroad's control due to the need for vendor support, in contrast to some earlier challenges leading up to the 2018 deadline, where, for example, the railroad itself had more control as it was installing equipment and could more clearly track progress. Moreover, the limited supply of vendors and high demand for services, as railroads work simultaneously to implement PTC by the 2020 deadline, continue to pose problems. For example, representatives from one railroad said their vendor has consistently had issues meeting milestones and delivering on its commitments. Representatives from a small railroad said they had limited internal resources to implement PTC, making the railroad's progress heavily reliant on its vendor. Representatives from two other railroads and FRA officials also highlighted implementation delays caused by recalls for some locomotive equipment, which has caused additional work for railroads as well as the vendor. Specifically, the equipment had to be removed, sent in for repair, and then re-installed. More than half of the railroads implementing PTC also responded to our questionnaire that interoperability was a major or moderate challenge. When asked to describe the biggest challenges to achieving interoperability, railroads said that interoperability can be complicated by software issues and by coordinating host and tenant railroad schedules. Fifteen railroads specifically mentioned software issues, and representatives from several railroads noted that interoperability will require reliable software.
For example, one railroad reported that certain software functionality remains to be developed, tested, and implemented to facilitate interoperability and to address software reliability issues that have caused system disruptions. Also, 14 railroads noted that scheduling time with other railroads to begin interoperability testing can be cumbersome and time-consuming. For example, several railroads that we interviewed and that responded to our questionnaire said that scheduling can be complicated by whether other railroads have made enough progress on their own PTC implementation to begin work on interoperability. According to FRA officials, interoperability challenges also differ across PTC systems and geographic areas. Below, we use the Northeast Corridor and the Chicago metropolitan area, where most railroads are implementing ACSES and I-ETMS, respectively, to illustrate the challenges faced in working to achieve interoperability. However, railroads in other areas or implementing other PTC systems may face some of these same challenges or face additional different challenges.

<2.3.1. Northeast Corridor and Surrounding Area>

Over a dozen railroads operating on the Northeast Corridor and in the surrounding area are required to implement PTC. The Northeast Corridor runs from Washington, D.C., to Boston, Massachusetts, and Amtrak predominantly owns track on the corridor. Eight commuter railroads, Amtrak, and most freight railroads are implementing a form of the ACSES system on at least a portion of their equipment and track. In some cases, railroads in the Northeast will be operating two different PTC systems concurrently on the same track, which will add to the complexity of interoperability, according to FRA. Examples of interoperability challenges faced in the Northeast include:

Software issues. PTC software presents particular challenges in the Northeast because software is being supplied by multiple vendors and has been developed to accommodate railroads' existing systems that have different configurations. Therefore, according to FRA officials, ACSES does not have a common set of requirements or specifications. Also, even if two railroads use the same vendor for their locomotive equipment or software, each railroad may use a different version of software. In addition, representatives from two railroads that operate in the Northeast told us they built different software functionality into their PTC systems to accommodate their own operations, so additional work is needed to resolve such differences to achieve interoperability. In light of these software issues, representatives from one industry association and one railroad we interviewed said that Northeast Corridor railroads are discussing creating a software management process to aid interoperability.

Boundary issues. A train needs to seamlessly operate PTC when it crosses the boundary between two railroads' territories, as previously described. According to a rail industry association, as of June 2019, there are about 20 boundaries on the Northeast Corridor where more work is needed to ensure seamless operation. FRA officials and one industry association said boundary issues are complex and time-consuming to resolve but not insurmountable. For example, FRA officials said a railroad could install its own equipment, such as transponders and wayside devices, across the boundary to create an overlap between their system and that of the other railroad.

Securing PTC wireless communication.
FRA requires that PTC wireless railroad communications be encrypted. However, a solution that aims to encrypt all PTC wireless communication and data transmittal among railroads operating ACSES in the Northeast is currently in lab development. In August 2016, Amtrak received a grant from FRA to create this solution for ACSES. Amtrak originally planned to implement this solution in December 2018, but Amtrak has experienced delays and currently estimates that it will implement the solution by January 2020. However, Amtrak has reported several risks that it will need to overcome to meet this implementation deadline. Further delays could affect railroads' ability to fully implement PTC in the Northeast by the December 2020 deadline. FRA noted it will continue to monitor and support the railroads as they implement security measures in the Northeast.

<2.3.2. Chicago Area>

Ten I-ETMS railroads that operate in the greater Chicago metropolitan area received extensions to implement PTC. Throughout PTC implementation, FRA, industry associations, and railroads have identified Chicago as a place where interoperability would be challenging due to the dense freight, passenger, and commuter operations in the area. Examples of such challenges include:

Software issues. According to FRA and railroads we interviewed, software issues have slowed interoperability work by railroads implementing I-ETMS. The underlying problem is the memory available on the locomotive equipment, which is needed to store its railroad's track data, according to FRA and railroads we interviewed. To be interoperable, the locomotive equipment also needs to store and exchange multiple railroads' track data, causing the memory to fill up very quickly. According to railroad representatives, memory limitations for I-ETMS locomotive equipment prohibited railroads with large track data files (mainly the Class I freight railroads) from being able to interoperate. The vendor for this equipment has been working on a software solution for this problem, and according to a few railroads we interviewed, the vendor delivered an interim software solution in March 2019 that allowed the four largest Class I railroads to achieve interoperability. However, this software was delivered 7 months later than initially planned, and an additional software solution is still needed to allow the locomotive equipment's memory to store the data of all railroads operating I-ETMS, according to representatives from two railroads and an industry association we interviewed.

Other technical issues. Railroads in the Chicago area conducted modeling to help ensure that sufficient communications capacity (e.g., spectrum and radio capacity) would be available to support PTC interoperability in the region. According to one industry association, while actual PTC operations in the area are minimal right now relative to full expected operations, railroads must continue to monitor the communications capacity as more railroads progress with their own PTC implementation and start to interoperate. For example, railroads may have to re-engineer their radio networks, such as re-routing certain communications through different radio towers and other network connections, if issues are subsequently identified.

Scheduling interoperability work with other railroads. Within the Chicago area, the total number of railroads and the number of railroads that have to be interoperable on a single line complicate interoperability.
Chicago is the busiest rail hub in North America and handles one-fourth of the nation's freight rail traffic. Nearly 500 freight trains and over 700 passenger trains travel through the area on tracks owned by several different railroads every day. For example, one commuter railroad, for one of its lines, operates over track owned by four host railroads that alternates with its own track. Achieving interoperability for this line will involve sequencing and scheduling with multiple railroads to activate PTC along the entire line, including across the numerous boundaries between different railroads' territories, according to representatives from that railroad. According to one FRA specialist, work to achieve interoperability in the Chicago area will ramp up in late 2019 or early 2020. As a result, many railroads will have to coordinate schedules to sequence interoperability work across the dozens of host-tenant relationships in the area.

<3. FRA Is Assisting Railroads with Testing and Interoperability while PTC Workload Challenges Persist>

FRA officials told us that the agency continues to provide assistance to railroads on interoperability and to support railroads through the testing process. In summer 2019, FRA began an effort to meet with all freight, non-Class I tenant railroads that have to be interoperable with host railroads required to implement PTC. FRA officials said they will use meetings with these 72 individual tenant railroads to discuss PTC requirements and review the railroads' plans for implementing PTC with their host railroads. FRA officials said they have also continued to meet regularly with railroads still in field testing or starting RSD on their own PTC systems. For example, FRA officials said the agency meets weekly or monthly with each railroad that has not yet initiated RSD to provide targeted technical assistance to resolve any issues. FRA and representatives from one railroad also told us that FRA has met with vendors to discuss specific equipment or software issues and to stress the importance of resolving these issues. FRA also participates in meetings held by the railroad industry's PTC working groups, including those focused on the Northeast Corridor and Chicago area, as needed. In addition, FRA officials told us that they are working with industry to improve the safety plan review process. Specifically, according to a June FRA presentation, FRA is working with two railroads and an industry association to create templates for streamlined, more consistent safety plans for two types of PTC systems (I-ETMS and E-ATC). The goal of the template is to reduce the burden on both railroads and FRA by using a shorter format and, where possible, relying on standardized system documents. FRA officials anticipate that the templates will be ready for other railroads to use in fall 2019. In addition, FRA has contracted for help in reviewing safety plans. However, representatives from four railroads and two industry associations we interviewed noted that they remained concerned about the amount of time it has taken FRA to review safety plans. FRA reported in February 2019 that it took on average 331 days to review a safety plan. While it is too early to determine the effect of FRA's efforts to improve the safety plan review process, much work remains for FRA in the next 18 months. According to FRA, 23 railroads will be submitting safety plans in the next 12 months.
While FRA has conditionally certified 13 PTC systems as of March 31, 2019, these railroads, too, are required to continue to work with FRA to provide additional documents to respond to FRA's conditions. Some of these railroads also plan to resubmit safety plans for FRA to review, hoping to receive an unconditional certification before the December 2020 deadline. In March 2018, we reported that railroads had expressed a need for additional clarification about applying for an extension and that FRA could provide more consistent information to railroads. We recommended that FRA identify and adopt a method for systematically communicating extension-related information to railroads. In 2018, FRA held three symposiums for railroads to consistently communicate information to help railroads prepare to qualify for an extension and to understand what was required to have a fully implemented PTC system. FRA held two similar sessions in February and June 2019. Representatives from most of the railroads we interviewed (six of eight) said they have been happy with the communication with FRA, via these sessions as well as regular meetings with FRA's PTC field specialists and other staff. For example, representatives of two railroads said it was helpful to have the FRA Administrator attend the sessions with railroads and talk directly to railroad representatives. In addition, clarity of information from FRA was the lowest-rated challenge in response to our questionnaire, with 29 railroads reporting this as a minor challenge or not at all a challenge. While FRA has made improvements, the extended 2020 deadline for full PTC implementation is less than 18 months away, and FRA and railroads have substantial work to complete and challenges to address before that deadline. Moreover, unlike the 2018 deadline, no additional extensions are available beyond December 2020. In March 2018, we recommended that FRA develop an approach to use the information it gathers on railroads' PTC implementation progress to prioritize the allocation of resources to address the greatest risk. FRA agreed with this recommendation, and while FRA officials have described testing and interoperability as areas of focus in 2018 and 2019, they have not articulated or demonstrated how, within these broad areas, they are monitoring risk and prioritizing resources. For instance, FRA plans to meet with all 72 tenant railroads in over 30 meetings rather than use the data it collects from host railroads to target this outreach. In addition, while FRA will have to review dozens of new and resubmitted safety plans in the coming months, FRA officials have not identified how they will prioritize these reviews relative to other reviews (e.g., other documentation that railroads submit as they continue testing on their own systems and for interoperability). According to FRA, it has communicated to railroads in industry-wide meetings that conditional certification for a PTC system is generally sufficient to meet the statutory requirement for full implementation; FRA noted this would not be sufficient only if a railroad's PTC system did not otherwise meet the technical requirements in regulations and one or more of the conditions related to such non-compliance.
However, representatives from two railroads we interviewed also said it was unclear whether conditional certification would be enough for a railroad to comply with the 2020 deadline, and uncertainty remains about which conditions must be addressed to meet the statutory requirement for full implementation. Related to system certification, representatives from three railroads and one industry association we interviewed also said FRA still needed to clarify how it would handle situations where a host or tenant railroad is not fully implemented by the 2020 deadline. Although the FRA Administrator has publicly said he will enforce the implementation deadline (which is December 31, 2020, for most railroads) and recommend assessing the maximum civil penalty against a railroad that did not meet its deadline, FRA has not clarified if this would apply in situations where a host or tenant relationship affects another railroad's implementation. We continue to see value in FRA developing a risk-based approach to allocating its limited resources and will continue to monitor FRA's actions on this recommendation. Going forward, FRA will also need to transition to overseeing PTC as a routine part of railroad operations after the 2020 deadline. Similarly, railroads will need to transition from implementation largely done by contractors to operating and maintaining their own PTC systems. Several railroads, in response to our questionnaire, said that they anticipate difficulties funding ongoing operations and maintenance as well as managing software and other updates. Therefore, December 31, 2020, represents not only the deadline for full PTC implementation but also a point after which railroads and FRA will face a new operational and oversight environment. Chairman Wicker, Ranking Member Cantwell, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time.

<4. GAO Contact and Staff Acknowledgments>

If you or your staff have any questions about this testimony, please contact Susan Fleming, Director, Physical Infrastructure at (202) 512-2834 or FlemingS@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Susan Zimmerman (Assistant Director); Katherine Blair Raymond; Delwen Jones; Emily Larson; Joanie Lofgren; Shannin G. O'Neill; Josh Ormond; Madhav Panwar; Marcus Robinson; Maria Wallace; Crystal Wesco; and Elizabeth Wood. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Why GAO Did This Study
Forty-two railroads are currently subject to the statutory mandate to implement PTC, a communications-based system designed to automatically slow or stop a train that is not being operated safely. Railroads were required to implement PTC by December 31, 2018, but would receive extensions up to December 31, 2020, if specific statutory requirements were met.
GAO was asked to review railroads' PTC implementation progress. This statement discusses (1) railroads' implementation progress and any related implementation challenges and (2) FRA's plans for overseeing railroads' implementation. GAO analyzed railroads' most recent quarterly reports covering activities through March 31, 2019; received responses from all 42 railroads on a brief questionnaire; and interviewed officials from FRA and 8 railroads, selected to include variation in implementation status and type of railroad, among other criteria.
What GAO Found
Amtrak, commuter railroads, and freight railroads continue to make progress implementing positive train control (PTC), but significant work remains to achieve interoperability among the railroads' individual PTC systems. Since the end of 2018, many railroads reported making progress on testing and implementation of their own PTC systems. Four railroads reported reaching full implementation as of March 31, 2019, the same number in this stage at the end of 2018. However, many railroads remained in earlier stages of implementation, such as the 11 railroads that reported being in field testing. Nearly all railroads plan to complete full PTC implementation in the last quarter of 2020.
Full implementation with interoperability is achieved when the PTC system on the locomotive of a “tenant” railroad and the PTC system of a “host” railroad whose track is being used can successfully communicate, allowing uninterrupted movements over property boundaries. As of March 31, 2019, 11 of the 31 host railroads that must have interoperable PTC systems reported that they had achieved interoperability with at least 1 of their tenant railroads. Collectively, 38 of the 227 unique host-tenant relationships that require interoperability have been completed (17 percent), according to the Federal Railroad Administration (FRA). Most railroads reported to GAO that vendor and software issues were currently major or moderate challenges for PTC implementation. Over half of railroads also reported that interoperability was a major or moderate challenge, and can be complicated by software issues and coordinating host and tenant schedules, among other issues. For example, one railroad said that certain software functionality still had to be developed, tested, and implemented to address reliability issues and facilitate interoperability.
FRA continues to provide assistance and support to railroads on PTC interoperability and the testing process, but workload challenges for the agency persist. FRA will continue to face a substantial workload through 2020 as it oversees railroads' PTC implementation and reviews documents, including lengthy safety plans required for railroads to obtain PTC system certification. While FRA officials have described supporting interoperability and testing as areas of focus, they have not demonstrated how, within these broad areas, they are monitoring risk and prioritizing resources, as GAO recommended in March 2018. GAO continues to see value in FRA developing a risk-based approach to allocate its resources to oversee PTC.
What GAO Recommends
In March 2018, GAO recommended FRA take steps to systematically communicate information to railroads and to use a risk-based approach to prioritize agency resources and workload. FRA concurred with these recommendations. FRA has taken actions to systematically communicate information to railroads. GAO will continue to monitor FRA actions with regard to allocating agency resources to oversee PTC. |
gao_GAO-19-564 | gao_GAO-19-564_0 | <1. Background> School-age children can access the internet in a number of ways. Their households may subscribe to in-home fixed internet, which is generally provided by cable television or telephone companies. School-age children, and other users, can connect a variety of devices to in-home fixed service through a wired connection or a Wi-Fi connection. They may also access the internet through mobile wireless service, which is provided through cell towers, with data transmitted over radio frequency spectrum. Mobile service providers usually sell internet access as an option in mobile telephone-service plans. A number of devices may connect to mobile wireless, such as smart phones, tablets, and mobile devices that enable laptops to connect to mobile wireless service. Finally, school-age children and others may access the internet outside the home through other ways, including publicly available Wi-Fi access at places such as libraries and coffee shops. FCC has found that Americans in lower-income areas are less likely to have access to both in-home fixed and mobile wireless internet than those in higher-income areas. Similarly, according to our analysis of data from the November 2017 CPS: Computer and Internet Use Supplement, among all school-age children, those in lower-income households are less likely to use the internet at home than those in higher-income households (see fig. 1). A number of factors explain the digital divide, or the varying levels of access among different populations. For example, as we have reported in the past, rural areas tend to have conditions such as low population density or difficult terrain that can increase the costs for internet providers to deploy and maintain internet networks. Furthermore, lower-income households with access to the necessary infrastructure for internet service may not be able to afford it. (See fig. 2.) While some in-home fixed internet providers offer low-cost service for lower-income households with school-age children, according to a 2016 survey, an estimated 5 percent of households with school-age children ages 6 to 13 and incomes at or below the federal poverty guidelines had ever signed up for such programs. Lower rates of internet access by lower-income households may make it more difficult for school-age children in those households to do homework. According to a 2018 Pew Research Center survey, a higher percentage of surveyed teens in lower-income households said that the lack of a dependable computer or internet connection sometimes prevents them from finishing their homework compared to teens in higher- income households. In addition, according to the Consortium for School Networking, the lack of in-home access makes it more difficult for parents to support their children academically. Specifically, as much communication between schools and parents has moved online, the lack of access may make it difficult for parents to stay connected to teachers and be informed about school notices, homework assignments, and other important information. FCC, which regulates commercial and other nonfederal spectrum, conducts activities that affect the ability of schools to address the homework gap. Specifically, it plays a role in expanding internet access by assigning licenses for Educational Broadband Service (EBS) spectrum, which permits schools and other eligible entities to transmit educational materials electronically. 
Currently, EBS license holders are allowed to lease excess capacity to others, including commercial wireless providers, for up to 30 years as long as the license holder has 20 hours of educational use per week per licensed channel and reserves the right to access 5 percent of the capacity for educational use. Schools that have such leases may need to wait years to regain full use of their EBS license. Furthermore, the last opportunity for school districts to apply for new EBS licenses was in 1995, and according to FCC, EBS licenses cover about half the geographic area of the United States, with rural areas west of the Mississippi River generally lacking licenses. However, FCC recently adopted a Report and Order with rules that, once effective, will change the eligibility requirements for EBS licenses, among other things. In addition, FCC supports internet investments at schools through the E-rate program, which provides discounts on telecommunications and internet access services, internal connections, and basic maintenance of internal connections. This program provides schools with higher percentages of lower-income students greater discounts on these services; for example, the most disadvantaged schools, where at least 75 percent of students are eligible for free or reduced price school lunch, receive a 90 percent discount. All services supported by the E-rate program must be used primarily for educational purposes, which FCC has defined as meaning activities that are integral, immediate, and proximate to the education of students. Education's Office of Educational Technology also plays a role related to internet access for students by developing national educational-technology policies and providing guidance to schools and school districts on technology use in schools. For example, in January 2017 the office issued a letter to schools and school districts about Education grant funds that could be used to support the use of technology to improve instruction and student outcomes. It also issued a report in 2017 on the use of technology in schools; the report provided guidance on how to modernize the technology needed for digital learning, such as schools' internet networks and internet-enabled devices. Education also collects, analyzes, and reports on a range of data from schools and school districts. For example, every year from 1994 to 2005 (except 2004 due to a lack of funding according to Education officials), the department collected data on internet access in schools and classrooms. In 2008, Education conducted three similar surveys at the district, school, and teacher levels on the availability and use of a range of educational technology resources, such as networks, computers, devices that enhance the capabilities of computers for instruction, and computer software. Due to a lack of funding, Education did not conduct additional similar surveys. However, the department recently finished administering a different survey effort, funded from different sources, that we discuss later in this report. <2. School-Age Children in Lower-Income Households Face Challenges in Doing Homework Involving Internet Access and May Be More Likely to Rely on Mobile Wireless> According to our analysis of November 2017 CPS: Computer and Internet Use Supplement data, lower-income households with school-age children may be more likely than those in higher-income households to be reliant on mobile wireless service, such as through smart phones, for internet access.
As seen in figure 3, among all households with school-age children, an estimated 22 percent with incomes of less than $25,000 per year use mobile wireless to access the internet but not in-home fixed high-speed internet service, in contrast to 8 percent with incomes of $75,000 or more per year. School-age children whose households only have mobile wireless internet access may face challenges in using it for homework, including: Device limitations. Students in mobile wireless-only households may have to rely on devices like smartphones that may not be well suited for academic tasks. A recent Pew survey found that an estimated 45 percent of teenagers in lower-income households say they sometimes have to do homework on a smartphone. However, most of the stakeholders we interviewed told us that smartphones are not adequate for doing homework for various reasons, including that they are too small for typing papers and that not all educational websites are compatible with smartphones. According to these stakeholders, other devices such as desktops or laptops are better suited for homework; however, among all school-age children, those in lower-income households are less likely than those in higher-income households to use these devices (see fig. 4). Data limitations. A majority of the stakeholders we interviewed said that wireless plans' data caps (a limitation on the amount of data the subscriber can download and upload per month) could make it difficult for school-age children to do homework, because, for example, once the data cap is reached, the provider may decrease connection speeds or impose additional costs for further data use, which could hinder completion of homework. A 2016 survey found that an estimated 39 percent of lower-income households with school-age children (in this case, those with incomes less than the federal poverty guidelines) had reached a data cap, compared to 25 percent of higher-income households. Varying service quality. Mobile wireless may be less reliable and slower than in-home fixed service, which can make doing homework more challenging. In 2018, FCC concluded that mobile wireless services are not full substitutes for in-home fixed service, because mobile wireless quality can be affected by user location, indoor obstructions, outdoor foliage, and weather, among other factors. In addition, we reported in 2015 that the availability and quality of mobile wireless service connections vary based on location and terrain. For example, according to officials with Albemarle County Public Schools in Virginia, while most students who participated in a recent survey indicate that they have mobile wireless internet access at home, that access may only offer poor-quality connections and slow speeds due to mountainous terrain. As a result, mobile wireless access may have limited usefulness for homework purposes. A 2018 survey by the Pew Research Center found that about 20 percent of teens from lower-income households say that they sometimes have to use public Wi-Fi for homework given a lack of access at home. As shown in figure 5, stakeholders we interviewed and literature we reviewed identified a number of potential challenges students may encounter in using methods to access the internet outside the home to do their homework. <3. Efforts by Selected School Districts to Increase Wireless Internet Access for Underconnected Students Varied, with Limited Federal Involvement> <3.1.
School Districts, with Limited Federal Involvement, Have Taken Various Steps to Increase Wireless Internet Access for Underconnected Students> The six selected school district projects we reviewed have taken various approaches to address the homework gap by providing wireless internet service to students who may lack access at home. Most of these projects provide wireless internet access to students who lack in-home fixed internet and do not necessarily limit it to students in lower-income households. In addition, all but one of these projects provide filtered access, meaning that students using these services are subject to the same usage restrictions as if they were on-site in school. Approaches included: Provide wireless hot-spot devices. The Green Bay Area Public School District in Wisconsin loans out mobile wireless hot-spot devices to students throughout the district who do not have access at home, providing them filtered internet access in their homes or elsewhere in the community. The hot-spot devices are available on loan from school libraries to any student who claims a need for one regardless of household income. Students may use district-issued Chromebooks or other internet-enabled devices, which then connect to the district's internet resources via the hot-spot device using service provided by a commercial mobile-wireless provider. Build or use a private network. Some districts have built new or expanded existing networks to provide internet access to students using a variety of approaches. Albemarle County Public Schools in Virginia uses EBS spectrum to provide access to students in community centers in mobile home parks in this mountainous district where, according to school district officials, many students lack service at home. The district also plans to install wireless receiver devices in selected students' homes through which those students will be able to connect internet-enabled devices via Wi-Fi. Desert Sands Unified School District in California also built out an EBS network to provide internet access to students who lack service at home. According to officials with that district, the benefit of this approach is that it involved only a one-time cost to build the network, rather than recurring annual payments to a commercial mobile-wireless provider for service. Two rural, low-income school districts in Virginia (Charlotte County Public Schools and Halifax County Public Schools) partnered with Microsoft to provide service through unlicensed white space devices (which operate on frequencies not being used by television broadcasters or 600 MHz wireless providers) to students who lack access at home, regardless of income. According to Microsoft, the use of unlicensed white space devices is a good solution to providing wireless access in rural areas where other technologies may be uneconomical and such frequencies tend to be available. Students who use this service receive a device that is installed in their home that wirelessly connects to the district's network and transmits to other devices in the home via Wi-Fi. The Boulder Valley School District in Colorado allowed a local wireless provider to build antennas on some school buildings in order to serve its customers in exchange for providing free service to lower-income students, determined based on student eligibility for free or reduced price lunch.
According to a school district official, the provider has installed antennas at three schools, providing access to students living within a 3-mile radius, and plans to install antennas at most remaining schools in the district. That official told us that this model may not work in many other school districts, as there may not be sufficient population density to make it economically beneficial for a commercial provider to agree to provide such service. Equip school buses with Wi-Fi. The Coachella Valley Unified School District, which covers a large geographic area in California where many students lack in-home fixed access, equipped its fleet of about 100 school buses with Wi-Fi in 2014, enabling students to do homework during long bus rides. A commercial mobile-wireless provider connected the Wi-Fi router on the bus to the district's network. In order to access Wi-Fi on the buses, students had to use district-issued devices that they were allowed to bring home after school. The district also parked Wi-Fi-equipped school buses and other district vehicles overnight in neighborhoods with a high proportion of students who brought district-issued tablets home in order to provide access to students who likely lacked internet at home. However, the district stopped this initiative in 2017 due to limited funding and is now seeking out alternative funding sources to reactivate the program. While none of the projects described above used any funding from Education, the department has identified six existing grants that schools and districts could use under certain conditions to support internet investments, although not necessarily wireless investments specifically. While the purpose of each of these grant programs isn't specific to internet investments, Education identified specific types of internet investments that these grant funds can be used for. We did not make a determination as to whether any of the grant funds could have supported the efforts we reviewed. Representatives of two of the school districts we met with stated that they would like to see additional information on Education grants that could be used to support internet investments. Education officials said the department has taken the first step to developing a strategy to share information about these grants by developing a coordinated communications strategy through its Office of Rural Engagement. They added that the department will then continue to build a broader strategy. Education is also finalizing data collection on a survey that will collect some data regarding the homework gap. As mentioned earlier, until 2008 Education collected survey data over a number of years about information technology and internet access in schools and classrooms. According to Education officials, the department stopped collecting such data due to a lack of funding. However, the department is now finalizing a survey that is collecting nationally representative data about public school teachers' use of computers and the internet, and their knowledge of students' access to computers and the internet outside the classroom. The survey is collecting data that pertain to the homework gap, including the extent to which schools provide wireless hot-spot devices to students to take home; the extent to which teachers think students access the internet outside of school, such as at home, libraries, or businesses; and the extent to which teachers think smartphones are useful for doing homework.
According to Education, the department finished administering the survey in June 2019 and plans to release the results in April 2020. The survey data may provide Education and others, including FCC and Congress, with useful information that can inform policy and other decisions related to the homework gap, such as how best to support schools' efforts to expand wireless access for underconnected students. FCC had a minor role in some of the school district projects by having previously granted EBS licenses to some districts that use EBS spectrum to provide wireless access. However, according to FCC documentation, many schools and school districts do not have EBS licenses, such as those in rural areas in the western United States, and some that have obtained a license now lease their capacity out on a long-term basis to commercial providers. As a result, school districts may be limited in using EBS to provide wireless access to students or have to take additional steps to use EBS. Desert Sands Unified School District officials said that the district did not have an EBS license and that the local license holder had leased it out to a commercial provider, so the district worked with that provider to build out its EBS network. Albemarle County Public Schools had leased out its EBS license to a commercial provider years ago, but because that provider was not utilizing that spectrum, the school district was able to reclaim it. FCC has taken recent steps that may affect the extent to which school districts are able to use EBS to provide wireless access. In May 2018, FCC issued a Notice of Proposed Rulemaking seeking comment on proposed changes to how it manages EBS to encourage and facilitate its efficient use. In July 2019, FCC adopted a Report and Order that makes a number of changes to the EBS spectrum and its use. Specifically, once effective, these rules will eliminate eligibility restrictions for EBS licenses and eliminate the educational use requirement of the spectrum. <3.2. FCC Has Not Fully Evaluated the Possibility of Expanding the E-Rate Program to Include Off-Premises Wireless Access> While FCC's E-rate program supports schools' connectivity by providing discounts for eligible services, program rules may limit the ability of schools and school districts to address the homework gap. Specifically, program rules specify that off-premises use of such services is not eligible for E-rate support and require that any off-premises traffic must be cost allocated out of school districts' E-rate discounts. For example, any off-premises traffic supported by existing E-rate-supported products or services requires a reduction in the E-rate discount for those existing E-rate-supported products and services. This reduction may increase costs for school districts as they would no longer receive all their potential E-rate discounts. Officials representing all six of the school district projects we reviewed suggested that program rules limiting eligibility for off-premises use and requiring cost-allocation may inhibit the ability of school districts to expand off-premises wireless access, and thus address the homework gap. For districts that do provide wireless access off-premises, E-rate program restrictions may still pose challenges.
For example, according to an official with Desert Sands Unified School District, the district had to buy a separate line of internet access to avoid having that off-premises traffic travel through the district's existing E-rate-supported network, which would have required cost-allocation and a reduction of the E-rate discount for that existing E-rate-supported network. According to officials with Microsoft, Charlotte County Public Schools and Halifax County Public Schools had to separate their off-premises unlicensed white space device traffic from internet traffic that passed through E-rate-discounted access in the schools. An official with Boulder Valley School District said that the district had to terminate an earlier effort to extend access to students in a housing development after being told that it could not provide off-premises access with program-discounted equipment without cost-allocation. In September 2016, FCC issued a Public Notice requesting public comment on two petitions filed with the agency seeking to allow the petitioning school districts to use existing E-rate-program-supported services and equipment for off-premises access without having to cost-allocate that traffic out of their existing E-rate discounts. Cost allocating out that traffic would result in reduced E-rate discounts for school districts, and therefore higher costs, for existing services and equipment supported by E-rate. FCC rules allow parties to petition for waivers of rules if they can demonstrate that special circumstances warrant deviation from the existing rules and doing so serves the public interest. According to FCC officials, the petitions are pending and the agency has not yet taken further formal action on this Public Notice. The petitions are described in more detail below. In May 2016, the Boulder Valley School District filed a petition requesting a waiver of the cost allocation rules in order to use its E-rate-program-supported network to provide internet access to students at public housing facilities after school hours. In the petition, the district argued that because traffic on its E-rate-program-supported network dramatically decreased after school hours, using that network to provide access during that time would not impose any additional costs on the E-rate program. Microsoft and others, including the school districts in Charlotte and Halifax counties, filed a petition in 2016 to obtain clarification that those school districts could provide wireless access to students' homes for educational purposes by extending the districts' existing E-rate-supported services using the districts' unlicensed white space device network without cost allocating that traffic from the existing E-rate discounts. The petition stated that the infrastructure to provide service to unlicensed white space devices would not be funded with E-rate program funds, and that these districts were not well served by commercial internet providers. In comments filed with FCC, Microsoft argued that projects covered by both petitions would provide in-home access for students without imposing any additional costs to the E-rate program and that the projects would increase the productivity of E-rate by using existing resources more efficiently. Previously, FCC explored the possibility of making wireless off-premises access an allowable E-rate program expense (which would eliminate the requirement to cost-allocate such traffic) in a 2011 to 2012 pilot program.
When establishing this pilot program, FCC noted commenter concerns regarding the potential administrative, legal, technological, and procedural challenges of expanding E-rate funding to off-campus premises. The pilot program provided funding from July 2011 to June 2012 and sought to investigate the merits and challenges of wireless off-premises connectivity services and to gain a better understanding of operation and administrative issues associated with off-premises use and connectivity, as well as the financial impact on the E-rate program overall. Furthermore, the pilot program sought to help FCC determine whether off-premises connectivity services should ultimately be eligible for E-rate support. FCC provided a total of $9 million in grants to 20 pilot-program participants (19 schools or school districts and one community library system) to implement projects enabling innovation in learning outside the boundaries of school buildings and the traditional school day, including those that provided off-premises wireless access and wireless devices to students. Recipients were not required to cost allocate the off-premises traffic as part of the pilot. FCC required all pilot participants to file interim and final reports that included information about project benefits, such as the extent to which students provided with wireless devices used them and the effect of increased internet access on academic outcomes; project costs; the effectiveness of measures to prevent project waste, fraud, and abuse, to filter content, and to ensure that students only used the devices for educational purposes; and lessons learned. According to FCC, those reports would allow it to assess the impact of selected pilot projects on the schools and to gather lessons learned that would help others implement similar projects in the future. In addition, FCC said it would evaluate the effectiveness of the pilot program to determine whether off-premises wireless access should be eligible for E-rate program support. While FCC received interim and final reports from most pilot participants, it did not determine a methodology for evaluating the data provided in those reports. Furthermore, FCC did not publish a report evaluating the effectiveness of the pilot program, including the potential costs, benefits, and challenges of off-premises wireless access, to make a determination regarding whether off-premises access should be eligible for E-rate program support. Although the order establishing the pilot did not require FCC to determine an evaluation methodology and publish a formal analysis, according to FCC officials, staff reviewed the interim and final reports prior to the Commission adopting a 2013 Notice of Proposed Rulemaking that sought input on ways to modernize the E-rate program, including input on using E-rate-supported wireless hot-spots for community use. In two subsequent E-rate program modernization orders in 2014, the Commission did not expand the E-rate program's support for off-premises access. FCC officials explained that given the changes in technology, costs, and student learning in recent years, the data collected from the pilot may have some limitations. FCC has not announced any plans to conduct another pilot program, and aside from its consideration of the petitions previously mentioned, FCC has not announced an intention to revisit whether off-premises wireless access should be eligible for E-rate support.
Federal internal control standards state that agencies should use quality information to make decisions and communicate information to external parties. Specifically, agencies should collect data from reliable sources in a timely manner, process these data into quality information, and use that information to make informed decisions. Agencies should also communicate such information to external parties that can help the agencies achieve their objectives. Furthermore, in previous work we identified the following as pilot-program design best practices: determining a methodology for gathering and evaluating data, evaluating pilot results to make conclusions on whether to integrate pilot activities into broader efforts, and communicating with stakeholders, such as by publishing results. As discussed earlier, school districts we met with said that existing E-rate program rules that require cost-allocation of off-premises access to E-rate discounts limit their ability to address the homework gap, and providing off-premises access remains a challenge for schools and school districts. Determining and executing a methodology for collecting and analyzing data on the potential costs, benefits, and challenges of making schools' efforts to expand off-premises wireless access eligible for program funding could help inform FCC decisions regarding the two pending petitions and any future petitions. As petitions may only cover petitioning entities, determining and executing such a methodology could also help inform more widespread changes to E-rate rules regarding off-premises access that would affect all E-rate program recipients. FCC could collect such data through another pilot program or from school districts now providing off-premises wireless access. Publishing the results of this analysis could help FCC ensure that such information will be accessible to inform future related efforts and provide transparency to external stakeholders, including school districts. <4. Conclusions> The differences in internet access (and therefore in the ease of doing homework) between school-age children in lower-income households and those in higher-income households that are more likely to be well connected have resulted in a homework gap that could inhibit the academic success of underconnected students. While school districts have made efforts to address the homework gap, such efforts may be inhibited by existing restrictions in FCC's E-rate program. Although FCC explored the possibility of making wireless off-premises access an allowable E-rate program expense in a 2011 to 2012 pilot program, FCC's lack of an analysis of the data it collected at the time or since then means that it may not have sufficient and relevant information to make a decision on pending petitions from local school districts regarding off-premises access. Determining the best way to collect and analyze data on the potential benefits, costs, and challenges of making off-premises wireless access eligible for E-rate program support; conducting such analysis; and publishing the results could provide relevant information and transparency to external stakeholders. Doing so could also enable FCC to make a determination on whether it would be appropriate to ease restrictions on off-premises access, a step that may give school districts more flexibility in addressing the homework gap. <5.
Recommendation> We are making the following recommendation to FCC: The Chairman of the Federal Communications Commission should determine and execute a methodology for collecting and analyzing data (such as conducting a new pilot program regarding off-premises wireless access or analyzing other data) to assess the potential benefits, costs, and challenges of making off-premises wireless access eligible for E-rate program support, and publish the results of this analysis. (Recommendation 1) <6. Agency Comments> We provided a draft of this report to FCC, Education, and the Department of Commerce for review and comment. FCC provided written comments, which are reproduced in appendix II. In these written comments, FCC stated that it agreed with our recommendation and noted steps it plans to take to assess the potential benefits, costs, and challenges of making off-premises broadband access eligible for E-Rate program support. FCC also provided technical comments, which we incorporated as appropriate. Education provided written comments, which are reproduced in appendix III, and also provided technical comments that we incorporated as appropriate. The Department of Commerce reviewed our report and told us it did not have any comments. We are sending copies of this report to interested congressional committees, the Chairman of the FCC, the Secretary of Commerce, and the Secretary of Education. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or vonaha@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Scope and Methodology Our objectives for this report were to examine: (1) challenges lower-income school-age children who lack in-home fixed internet face in doing homework that involves internet access and (2) what selected school districts are doing to expand wireless internet access for their students, and the federal role in such efforts. To examine challenges lower-income school-age children who lack in-home fixed internet face in doing homework that involves internet access, we analyzed data from the Census Bureau's November 2017 Current Population Survey: Computer and Internet Use Supplement, which is sponsored by the National Telecommunications and Information Administration (NTIA). The Computer and Internet Use Supplement collected household information from all eligible Current Population Survey households, as well as personal information from household members age 3 and older. The supplement provided data about households' computer and internet use, and about each household member's use of the internet from any location during the previous six months. One member of a household was generally interviewed and answered questions on behalf of every other member. Interviews were conducted from November 12-18, 2017. The probability sample selected to represent the universe consisted of approximately 56,000 households. We included variables on ages of household members to determine if the household had one or more school-age children. We considered a household to have school-age children if it had any children between the ages of 6 and 17, an age range used in other analyses of internet use by school-age children, such as analyses by NTIA and Pew Research Center.
We analyzed data on the use of in-home fixed and mobile- wireless internet, as well as of various computing devices. In our analysis we also included variables on household income, to allow us to report results based on different income ranges. When analyzing responses by household income, we grouped household income into similar ranges that NTIA publishes on its Data Explorer website, but we consolidated the top two ranges used by NTIA into one range. To determine the reliability of these data, we reviewed NTIA technical documentation on the survey, interviewed NTIA officials, and compared our estimates of selected variables with estimates presented by NTIA on its website. We found these data were sufficiently reliable for reporting on data on internet and computing device use by household income levels. In addition, we conducted a literature search to review challenges lower- income school-age children who lack in home internet face in doing homework that involves internet access. We searched multidisciplinary databases using relevant terms such as low-income, wireless, internet, and school-age children. We searched for scholarly articles, including working and conference papers, government reports, think tank publications, and trade publications published between 2013 and 2018. We reviewed the abstracts of results from the search for publications most relevant to our work and fully reviewed publications that, based on their abstract, were most suited to this engagement. We used relevant publications to support findings we collected from other sources, including interviews. We also conducted semi-structured interviews with a range of stakeholders, including education industry associations, researchers, and advocacy organizations we selected based on literature, internet searches, and recommendations from those we interviewed. Specifically, we interviewed eight education or technology industry associations or advocacy organizations, one education researcher, one technology industry researcher, and representatives of one technology company that provides internet services and products to schools. In addition, we interviewed officials with the Federal Communications Commission (FCC) and Department of Education (Education). We also reviewed a non-generalizable sample of six projects involving seven local school districts taking steps to provide wireless internet access outside of school for students who may lack internet at home. We identified these projects based on keyword searches and recommendations from other interviewed associations and researchers, as well as officials with FCC, NTIA, and Education. From this list, we then selected those projects that were frequently cited in the press or by others we interviewed; that covered a variety of geographic locations, including those in both urban and rural areas; and that included a variety of approaches to addressing the homework gap. During these interviews, we asked interviewees about a range of topics, including the extent to which school-age children have access to in-home and wireless internet and challenges faced by students who may only have mobile wireless access. In total we interviewed 17 stakeholders, including the industry associations, researchers, and school districts detailed above. We analyzed the content of the interviews to identify key challenges identified by stakeholders. 
These interviews did not provide a complete list of all challenges, and the results of these interviews are not generalizable but do provide insight into a range of issues. To determine what selected school districts are doing to expand wireless internet access for their students and the federal role in such efforts, we conducted semi-structured interviews with officials at the school districts listed above and officials at Microsoft regarding its efforts to expand wireless access for students who may lack internet at home. During these interviews, we asked the districts about what steps they are taking to expand wireless access, the goals and challenges of the relevant project, and the federal role in the effort. We analyzed the content of the interviews to identify key themes. We also interviewed officials with FCC and Education to determine and review federal efforts related to school initiatives to expand wireless access for students. We reviewed documentation from FCC and Education regarding relevant federal efforts, including rulemaking documents such as FCC's 2018 Notice of Proposed Rulemaking and 2019 Report and Order regarding Educational Broadband Service spectrum. We reviewed other relevant FCC documents related to the Schools and Libraries Universal Service Support Mechanism (also known as the E-rate program), which provides schools with discounts on telecommunications and internet services. E-rate documents we reviewed included reports related to the 2011 E-rate pilot program exploring off-premises wireless access. We compared FCC efforts to federal internal control standards related to using quality information and communicating externally and pilot program design best practices. We reviewed information, provided to us by department officials, on existing Education grant programs that can be used by schools and school districts to support internet investments. We also reviewed information on Education's relevant survey efforts. We conducted this performance audit from May 2018 to July 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Federal Communications Commission Appendix III: Comments from the Department of Education Appendix IV: GAO Contact and Staff Acknowledgments <7. GAO Contact> Andrew Von Ah at (202) 512-2834 or vonaha@gao.gov. <8. Staff Acknowledgments> In addition to the contact above, Mark Goldstein (Director); Derrick Collins (Assistant Director); Matthew Rosenberg (Analyst in Charge); Dwayne Curry; Sherri Doughty; Rachel Frisk; Hayden Huang; Gina Hoover; Dan Luo; Josh Ormond; Cheryl Peterson; Matt Ray; Hai Tran; and Laurel Voloder made key contributions to this report. | Why GAO Did This Study
School-age children without internet access may have difficulty in completing homework. Those without in-home fixed access may go online wirelessly outside the home to do homework. A provision was included in statute for GAO to review wireless internet access for school-age children in lower-income households.
This report examines (1) challenges lower-income school-age children who lack in-home fixed internet face in doing homework involving internet access, and (2) selected school district efforts to expand wireless access for students and the federal role in those efforts. GAO analyzed 2017 CPS data; reviewed six local projects that were selected based in part on education industry stakeholders' recommendations, that included a range of geographic locations, and that took steps to address the homework gap; compared FCC efforts to federal standards for internal controls and pilot-program design best practices; reviewed FCC and Department of Education documents; and interviewed 17 stakeholders, including school districts.
What GAO Found
According to GAO's analysis of 2017 Census Bureau Current Population Survey (CPS) data, children ages 6 to 17 in lower-income households are more likely than peers in higher-income households to lack high-speed in-home internet and rely on mobile wireless service. GAO found that students who use mobile wireless for homework may face challenges, including slower speeds and limitations smartphones present in completing tasks like typing papers. These “underconnected” students may seek out ways to access wireless internet outside of the home to do homework; however, these methods also pose challenges (see figure). The inequity in internet access—and therefore in the ease of doing homework involving access—between students of varying income levels is known as the “homework gap.”
Efforts by six selected projects, involving seven school districts, to expand wireless access for students who may lack it at home varied. According to officials with most school district projects GAO reviewed, rules for the Federal Communications Commission's (FCC) E-rate program, which allows schools to purchase discounted internet equipment, may limit schools' ability to provide wireless access off-premises. Specifically, off-premises access is not eligible for E-rate support, and schools that provide such access using existing services supported by E-rate must reduce their E-rate discounts. FCC conducted a pilot project in 2011 and 2012 to help decide whether to make wireless off-premises access eligible for E-rate support, but FCC did not determine and execute a methodology to assess the potential costs, benefits, and challenges of doing so. In 2016, FCC received two requests from school districts seeking waivers of rules to allow them to use E-rate program support to provide off-premises access, but FCC has not made a decision on the waivers. Determining and executing a methodology to analyze data about the potential benefits, costs, and challenges of easing E-rate rules on off-premises use and publishing the results could provide transparency to stakeholders such as school districts. This step could also help FCC act on pending and future waiver-of-rule requests and broader changes to rules that may help schools address the homework gap.
What GAO Recommends
GAO recommends that FCC take steps to assess and publish the potential benefits, costs, and challenges of making off-premises wireless access eligible for E-rate support.
FCC agreed with GAO's recommendation. |
gao_GAO-19-335 | gao_GAO-19-335_0 | <1. Background> <1.1. The Legal Framework for Historic Preservation> The NHPA requires federal agencies to establish historic preservation programs to ensure the ongoing identification and protection of historic properties. A historic property is any building, structure, object, site, or district listed on or eligible for inclusion in the National Register of Historic Places (National Register). To be eligible for the National Register, a property must meet certain criteria, such as being associated with the lives of significant people from the past or yielding important information about prehistory or history, among others. Generally, properties that have achieved significance within the past 50 years are not considered eligible for the National Register unless they are of exceptional importance. The NHPA also established the ACHP, which advises the President and Congress on matters relating to historic preservation. The ACHP also recommends measures to coordinate activities of federal, state, and local agencies and private institutions and individuals relating to historic preservation. The ACHP can review the relevant policies and programs of federal agencies and make recommendations to improve their effectiveness, coordination, and consistency. Section 106 of NHPA requires federal agencies, including DOD, to take into account the effects of their undertakings (hereinafter referred to as projects) on historic properties, and to afford the ACHP a reasonable opportunity to comment on any such projects on historic properties by a federal agency. Part 800 of title 36, Code of Federal Regulations, establishes procedures to define how DOD and other federal agencies should meet these statutory responsibilities and how to accommodate historic preservation concerns with the mission of the agency, including DOD. Historic preservation concerns are reviewed in consultation with officials from the agency in question and other parties with an interest in the effects of the proposed project on historic properties. The goal of this consultation is to identify historic properties potentially affected by the project, assess its effects, and seek ways to avoid, minimize, or mitigate any adverse effects on historic properties. State Historic Preservation Offices, each led by a State Historic Preservation Officer (SHPO), advise and assist federal agencies, including DOD, in carrying out their Section 106 responsibilities, and ensure that historic properties are taken into consideration in project planning. A more detailed description of the relationship between DOD and SHPOs is presented in appendix II. A programmatic agreement is a document that federal agencies can, in consultation with the ACHP, SHPO, and/or other parties, negotiate and execute when a planned project will or may adversely affect historic properties and that sets out the measures the federal agency will implement to resolve those adverse effects.
Agencies can use programmatic agreements to satisfy their Section 106 responsibilities in the following circumstances: when effects on historic properties are similar and repetitive or are multi-state or regional in scope, when effects on historic properties cannot be fully determined prior to approval of a project, when nonfederal parties are delegated major decision-making responsibilities, where routine management activities are undertaken at federal installations, facilities, or other land-management units, or when other circumstances warrant a departure from the normal Section 106 process. Section 110 of the NHPA requires federal agencies to establish a preservation program to protect, identify, evaluate, and nominate historic properties to the National Register. Section 110 also states that agencies must designate qualified preservation officers to lead their respective agencies' efforts to adhere to the NHPA, among other requirements. Further, Executive Order 13287, Preserve America, instructs all executive branch departments and agencies to maximize efforts to integrate the policies, procedures, and practices of the executive order and the NHPA into their program activities to advance historic preservation objectives. Preserve America also instructed executive branch departments and agencies to assess the current status of their historic property inventories (including general condition and management needs) and directs agencies with real property management responsibilities to report on efforts to identify, protect, and use historic properties every 3 years. <1.2. Roles and Responsibilities> DOD Instruction 4715.16 set forth the framework for a department-wide program that focuses on the management of cultural resources, which include historic properties. According to DOD officials, as part of DOD's program to preserve historic properties, each military department designates federal preservation officers to coordinate its own separate historic property programs. Each department has an office or division that handles cultural resources and historic preservation and has staff who are generally knowledgeable about NHPA and its requirements. The military departments also issue their own guidance that establishes policies on historic preservation and delineates responsibilities for cultural resources personnel at the service and installation level. Each military department also is responsible for ensuring that military installations with cultural resources under their purview prepare Integrated Cultural Resource Management Plans (ICRMPs). These plans should include an inventory of all known historic properties, an inventory of properties that may be eligible for listing on the National Register, and standard operating procedures covering certain maintenance aspects of historic properties. According to officials from the military departments, installations are responsible for setting up a process where all maintenance/work order requests are reviewed for further action. For example, the review process can take the form of a maintenance/work order request review board and typically includes the installation's cultural resources manager or members of the cultural resources manager's staff. If the maintenance/work order request involves a historic property, then additional steps are taken at the installation level to consult with the appropriate stakeholders.
Once officials at an installation complete their evaluation of the potential impact a maintenance request/work order would have on a historic property, they consult with the SHPO on how to move forward with the proposed maintenance/work order, according to installation officials. A more detailed description of the review of maintenance/work order requests is presented in appendix III. <1.3. DOD's Use of Historic Properties> DOD generally uses its historic properties in one of two ways: to support mission needs or to house service members and their families. Generally, after consultations with the SHPO, historic properties can be repurposed or renovated to fulfill current mission and housing needs. For example, a historic aircraft hangar could be converted into additional administrative space, or historic homes could be renovated by a private housing partner to house service members and their families. Figure 1 is an example of how a historic property could be reused. <2. DOD Has Identified and Evaluated Some Properties as Historic, but Opportunities Exist to Enhance DOD's Efforts> <2.1. DOD Has Identified and Evaluated 60,000 Properties as Historic> In October 2017, DOD reported that, of its approximately 375,000 properties on installations in the U.S. and its territories, it has identified and evaluated about 60,000 as historic and about 57,000 as not being historic. DOD has not yet evaluated the remaining roughly 258,000 properties for historic significance. Approximately 41,000 of these properties are greater than or equal to 50 years of age, according to DOD. DOD's Cultural Resource Management Instruction requires DOD to conduct a survey of historic properties that includes the identification and evaluation of all cultural resources against the criteria of the National Register. According to ACHP officials, DOD does not routinely identify and evaluate every property under its purview for historic significance as those properties reach 50 years of age. Instead, DOD's practice is to identify and evaluate property for historic significance as installations have an identified need for or a project planned for the property, according to both DOD and ACHP officials. Officials said that, generally, federal agencies do not have the funding to proactively identify and evaluate properties for historic significance. Rather, funding to identify and evaluate properties is included within a project's funding; therefore, generally federal agencies cannot begin to identify and evaluate a property for historic significance until a project for that property is funded, according to officials from the ACHP. The initial process to identify, evaluate, and track real property, such as historic properties, occurs at the installation level. Installation officials are to record transactions; document new acquisitions, changes to existing facilities, and disposals; and collect information on the real property at each installation. Installation officials are then to enter this information into the corresponding military department or WHS real property data systems. The military departments and WHS use these databases to oversee and manage real property needs across DOD installations, such as how property is used to support the installations' missions and how much to budget for required sustainment, restoration, or construction of real property.
Figure 2 shows how data are intended to move from the installation level to the military department databases and then to the DOD-wide real property database, which DOD calls the Real Property Assets Database (RPAD). OSD requires that the military departments and WHS submit their real property inventories to be compiled into RPAD. DOD uses these data to provide information on its real property to Congress and other federal agencies, including the Office of Management and Budget and the General Services Administration, in order to assist in the oversight of federal real property. <2.2. DOD Lacks Complete and Consistent Data on Historic Properties, but Is Planning Actions to Improve Data Quality> We identified some gaps in data, as well as data discrepancies between the data reported at the installation level and the department level regarding historic properties for fiscal year 2017. For example, one of the 10 installations we visited could not generate a list of historic properties on the installation with corresponding data fields such as the facility condition, plant replacement value, and facility utilization rate. Officials at this installation told us they are working on a long-term project to update their data on historic properties. Additionally, data we collected from three of the 10 installations we visited were inconsistent with data in the installations' respective military department-level databases. For instance: One installation had 150 more historic properties listed in its installation real property data than were listed in the corresponding military department database. The installation's data also showed 114 fewer properties coded as Not Yet Evaluated for historic significance than did the military department's database. Similarly, the data in the military department database showed twice as many privatized homes as the installation database did. A second installation had 119 properties coded as Not Yet Evaluated for historic significance, but none with this designation in the data provided by the installation. The data provided by the installation also included 164 privatized homes, none of which were included in the military department database. Further, this installation had nine historic properties that were not included in the military department database but that were included in the installation data, as well as 26 historic properties that were included in the military department database but that were not included in the installation data. A third installation had fewer discrepancies, with two historic properties that were included in the installation data that were not in the military department database. The data in the military department database contained six assets that the installation data did not contain. There were also four discrepancies regarding privatized housing between the installation data and the military department database, with each database containing two entries the other did not include. We asked five installation cultural resource managers about these discrepancies, and they stated that the military department databases most likely had not been updated to reflect the correct installation numbers. In November 2018, we reported that RPAD contained inaccurate and incomplete data due to weaknesses in DOD's processes for recording and reporting real property, including historic property.
The military services lacked complete data regarding real property transactions, as well as physical inventories of real property, including historic properties. We also found that the military services have not consistently recorded real property transactions (i.e., the acquisition of, change to, and disposal of real property assets) and the results of physical inventories of assets. Finally, we found that the military services have not corrected previously identified discrepancies in their data systems, such as missing entries for utilization and facility condition and overdue asset reviews. We recommended that each of the services develop monitoring processes for recording all real property (including historic properties) information. We also recommended that the Under Secretary of Defense for Acquisition and Sustainment work in collaboration with the services to develop corrective action plans to remediate inconsistencies in the data. DOD concurred with these recommendations and identified actions it plans to take to implement them. Implementing these recommendations would help DOD ensure more accurate and complete information on properties of historic significance and prevent further data discrepancies. Also, more accurate and complete information on the identification and evaluation of properties would help installations, military departments, and WHS oversee and manage their real property needs, including informing decisions regarding how much to budget for required sustainment, restoration, or construction of real property. We will continue to monitor DOD's progress in addressing these recommendations. <2.3. DOD Has Limited Visibility of Privatized Military Housing That Could Be Historic> DOD may transfer the responsibility to identify and evaluate homes for historic significance to the private developers. However, the military department officials we interviewed could not confirm that private developers were meeting those responsibilities. The military departments have flexibility in how they structure their privatized housing projects, but project structures share certain similarities. For a typical privatization project, a military department leases land to a developer for a 50-year term and conveys existing homes located on the leased land to the developer for the duration of the lease. Given the length of these lease agreements, homes may move beyond 50 years of age while being maintained by the private developer. Military department officials told us that when a lease or programmatic agreement is signed with a private developer, the responsibility to identify and evaluate homes for historic significance is generally transferred to the private developer. Navy and Marine Corps officials stated that, when the leases for privatized military homes were signed, a list of historic properties was provided to each private developer. According to Navy officials, those private developers are now responsible for identifying and evaluating privatized homes for historic significance once the lease is signed and the homes are transferred to the private developer. Similarly, Air Force officials stated that, prior to conveying homes to a private developer, all homes encompassed in the lease agreement should have been identified and evaluated for historic significance by the Air Force. According to these officials, after the transfer of properties under the lease, the private developer is responsible for identifying and evaluating homes for historic significance.
Army officials also stated that the responsibility to manage privatized homes and assess their historic value falls to the private developer. However, private developers at seven of the nine installations we visited that had privatized historic military housing told us that they do not identify or evaluate additional homes for historic significance. The private developers at the remaining two installations said they hire a third party to identify and evaluate homes on the installations for historic significance as they age. DOD's instruction on the management of cultural resources directs the establishment of a process to identify and evaluate cultural resources for historic significance. The need to identify and evaluate privatized military homes for historic significance would arise if a new project were planned for homes that could be of historic significance. Officials from all three military departments told us that they have addressed the identification and evaluation process by formally transferring those responsibilities to the private developers through documents such as land-lease agreements, installations' programmatic agreements, and installations' ICRMPs. However, DOD guidance also states that because privatization creates a long-term governmental interest in privatized housing, it is essential that the military departments attentively monitor these privatization projects. Taking steps to ensure that installation personnel verify that private developers are identifying and evaluating privatized properties for historic significance, as appropriate, could help ensure that private developers do not make renovations or repairs to properties that could compromise their historic nature. <3. DOD Does Not Routinely Assess the Condition of Its Historic Properties or Ensure Personnel Have the Guidance and Training Needed to Preserve Them> <3.1. Some Installations Do Not Routinely Conduct Required Inventories of Historic Property to Help Ensure Its Preservation> Under DOD Instruction 4165.14, once a historic property has been identified, installations are required to complete a review of the real property asset record every 3 years, including a physical inventory that assesses the condition of the property. According to DOD, these inventories are important for planning, analysis, and decision making. However, we found that these required inventories are not routinely being conducted at six of the 10 installations we visited, for a variety of reasons. Specifically, cultural resource management officials at six of the 10 installations told us that the inventory was not conducted because they were unaware of the requirement or thought that updating their ICRMPs was sufficient to satisfy the inventory requirements. As previously noted, ICRMPs should include an inventory of all known historic properties, an inventory of properties that may be eligible for listing on the National Register, and standard operating procedures covering certain maintenance aspects of historic properties. Officials at one of the six installations reported that they believe it is a best practice to inventory their historic properties every 5 years if they have sufficient staff to do so. Officials at two installations stated that they do complete the required inventory every 3 years. Officials at the remaining two installations either did not provide any comment or said they were unsure of when the last inventory was completed.
However, officials from all of the services headquarters reiterated to us that the requirement under DOD Instruction 4165.14 is to inventory historic properties every 3 years. They explained that this inventory is separate and distinct from the annual inventory required under the ICRMP process. For example, Air Force headquarters officials stated that the 3 year inventory should consist of a physical check of the condition of the buildings, while the annual inventory required as part of the ICRMP update is a process to update data, such as status codes, for newly evaluated buildings. Until the military departments clarify the existing 3 year inventory requirement, current and accurate information on the condition of historic properties will not be available. Such information would better position officials who manage these properties to make informed management, maintenance, and planning decisions. <3.2. Lack of Guidance on Training Could Hamper Maintenance and Historic Preservation Efforts> We found that misunderstandings about how to maintain historic properties have led, in some instances, to problems with the preservation of these properties at installations. Each of the 10 installations that we visited has an established process and procedures for reviewing and approving maintenance/work orders on historic properties. These processes and procedures, articulated in installations ICRMPs, vary by installation and are generally intended to assist in preserving historic properties. However, cultural resource managers at five of the 10 installations said that past maintenance or renovation projects on some of their installations historic buildings may have compromised the historic significance of those buildings. In some cases, for instance, maintenance was performed improperly by tenants of historic properties or by contractors, according to installation officials. At one installation we visited, an official said a tenant made changes to a historic building without undergoing the formal approval process at the installation, which includes informing the cultural resource manager of the proposed change. The official said the tenant added additional office space and equipment, such as computers and other systems, in an unused attic without updating the capacity of the electrical panels. As a result, the official said a fire started in the attic, causing extensive damage to the building. An official at another installation we visited told us a contractor pressure washed a historic property that ended up damaging the building. The official said the damage was not intentional, as the contractor did not realize that pressure washing would harm the property. Unit members also noted some instances in which they were told by maintenance personnel that problems the members had reported could not be fixed because of the historic nature of the properties. For instance: At a Marine Corps installation, unit members said that maintenance and facilities management staff ignored or improperly handled issues they raised in their historic buildings. For example, unit members told us that maintenance personnel erroneously informed them they could not replace the air filters or clean out the mold in the ceiling because their building was historic. At an Air Force installation, unit members told us their requests for upgraded electrical outlets and roof fixes were denied because maintenance personnel told them those changes could not be completed because of the historic nature of the building. 
According to unit members, the existing outlets were not suitable for work on the aircraft being maintained in the building and thus presented a safety risk. Moreover, unit members told us that, to deal with the roof leaks, they ultimately resorted to using buckets to catch water. At an Army installation, unit members told us that maintenance personnel informed them they could not address certain problems, such as leaks, because of the historic nature of the building. For example, unit members at this installation resorted to boarding up their building with plywood during storms to keep rainwater from affecting the secure facility in the basement of the historic building because maintenance division staff told them addressing the leaks was not their responsibility, due, in part, to the historic nature of the building. One reason these problems may have occurred is that the individuals involved were not properly informed or trained about how to conduct maintenance on historic buildings. At nine of the 10 installations we visited, unit members who work in historic buildings told us that, based on their experiences requesting repairs to historic buildings, they believed maintenance personnel did not know what maintenance could or could not be done to the historic buildings. Officials from these installations expressed concerns about training, including a lack of training, related to historic preservation and maintenance of historic properties. For example, maintenance officials at three of the 10 installations we visited stated that they do not receive training on the special requirements associated with maintaining historic buildings; and cultural resource managers from four of the 10 installations told us that more training for installation staff, particularly maintenance staff, on historic preservation requirements would be helpful. Furthermore, officials from two of the four SHPOs representing the states where we visited military installations said they believe that tenants and maintenance personnel at installations do not have the proper training to adhere to historic preservation requirements. Officials from the Office of the Under Secretary of Defense for Acquisition & Sustainment (OUSD(A&S)) also said they were aware of misunderstandings within the military communities about aspects of historic preservation. For example, these officials said there were misunderstandings among installation personnel, including between personnel from department of public works offices, environmental offices, installation planners, and cultural resource managers about their roles and responsibilities concerning historic preservation. The OUSD(A&S) is responsible for establishing cultural resource guidance, designating responsibilities, and providing procedures to implement DOD s cultural resources program. DOD Instruction 4715.16 states that ICRMPs act as the instrument DOD uses to comply with the statutory management requirements of the NHPA. It is also DOD policy that cultural resources under DOD control are to be managed and maintained in a sustainable manner through a comprehensive program that considers the preservation of historic, archaeological, architectural, and cultural values; is mission supporting; and results in sound and responsible stewardship. 
In addition, the Standards for Internal Control in the Federal Government state that management should communicate quality information down and across reporting lines to enable personnel to perform key roles in achieving objectives, addressing risks, and supporting the internal control system. However, officials from each of the military departments stated that they do not have department-wide or service-wide guidance related to historic preservation training. Instead, the content and frequency of training is determined by the installations, according to military department officials. When we analyzed the installations ICRMPs, we found that responsibilities for providing cultural resources training or technical guidance, feedback, and comments to installation personnel regarding historic preservation generally lie with the installation cultural resource manager. Installation personnel rely on individual cultural resource managers and the individual installations ICRMPs to ensure that all personnel at an installation have the training they need. Without providing installations with DOD or military department-wide guidance on training related to historic preservation, there could be more instances of improper or incomplete maintenance of historic properties on installations. <4. Conclusions> According to the Advisory Council on Historic Preservation (ACHP), DOD is one of the most compliant federal agencies with regard to historic preservation requirements. DOD uses historic properties to support mission needs as well as to house military service members. Thus far, DOD has identified and evaluated 60,000 properties as historic. However, additional actions could enhance DOD s efforts to identify, assess, and preserve historic properties. First, we recently made recommendations which DOD concurred with, to improve the quality of DOD s real property data. Implementing the recommendations would help ensure that DOD has more accurate and complete information on properties of historic significance and prevent further data discrepancies. Second, taking steps to verify that private developers are identifying and evaluating privatized properties that could be historic would help mitigate the risk of developers making renovations to properties that could compromise their historic nature. Additionally, clarifying the requirement to inventory historic properties every 3 years to assess their condition would help ensure that DOD has the information it has identified as important for planning, analysis, and decision-making related to such properties. Further, establishing guidance on training for installation personnel would help ensure they possess the necessary knowledge to properly maintain historic properties on installations. <5. Recommendations for Executive Action> We are making a total of seven recommendations to DOD. The Secretary of the Navy should take steps to ensure that Navy and Marine Corps installation personnel verify that private developers are identifying and evaluating privatized properties for historic significance, as appropriate. (Recommendation 1) The Secretary of the Army should take steps to ensure that Army installation personnel verify that private developers are identifying and evaluating privatized properties for historic significance, as appropriate. 
(Recommendation 2) The Secretary of the Air Force should take steps to ensure that Air Force installation personnel verify that private developers are identifying and evaluating privatized properties for historic significance, as appropriate. (Recommendation 3) The Secretary of the Navy should clarify the requirement for Navy and Marine Corps installation personnel to conduct a physical inventory of historic properties every 3 years, including an assessment of each property s condition to ensure that facilities that have been identified and evaluated as historic are inventoried. (Recommendation 4) The Secretary of the Army should clarify the requirement for Army installation personnel to conduct a physical inventory of historic properties every 3 years, including an assessment of each property s condition to ensure that facilities that have been identified and evaluated as historic are inventoried. (Recommendation 5) The Secretary of the Air Force should clarify the requirement for Air Force installation personnel to conduct a physical inventory of historic properties every 3 years, including an assessment of each property s condition to ensure that facilities that have been identified and evaluated as historic are inventoried. (Recommendation 6) The Secretary of Defense should ensure that the Under Secretary of Defense for Acquisition and Sustainment, in collaboration with the military departments, develop and disseminate department-wide or service-wide guidance, on training related to historic preservation to installation personnel, including information on roles and responsibilities. (Recommendation 7) <6. Agency Comments> We provided a draft of this report to DOD for review and comment. In written comments, DOD concurred with each of our recommendations. DOD s comments are reprinted in their entirety in appendix IV. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and to the Acting Secretary of Defense; the Under Secretary of Defense for Acquisition and Sustainment; and Secretaries of the Departments of Air Force, Army and Navy, and the Director of Washington Headquarters Services. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or our staff have any questions about this report, please contact me, Elizabeth Field, at (202) 512-2775 or FieldE1@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are listed on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Scope and Methodology Senate Report 115-130, accompanying a bill for the Fiscal Year 2018 Military Construction, Veterans Affairs, and Related Agencies Appropriations Act, included a provision that GAO assess the historic properties in use on the Department of Defense s (DOD) U.S. installations. This report assesses the extent to which (1) DOD identifies and evaluates properties for historic significance, including those that have been privatized, and (2) DOD assesses the condition of its historic properties and has guidance on the training of installation personnel maintaining and those working in historic properties. For both objectives, we reviewed relevant laws, regulations, executive orders, and DOD (including military service) guidance that govern efforts to identify, evaluate, manage, and maintain DOD s historic properties. 
We interviewed officials from the Office of the Secretary of Defense (OSD) (the Office of the Under Secretary of Defense for Acquisition and Sustainment); Washington Headquarters Services (Facilities Services Directorate); the Army (Installation Management Command; Office of the Assistant Chief of Staff for Installation Management; Office of the Assistant Secretary of the Army for Installations, Energy and Environment; U.S. Army Corps of Engineers); the Navy (Office of the Assistant Secretary of the Navy for Energy, Installations, and Environment; Office of the Deputy Assistant Secretary of the Navy for Installations and Facilities; Office of the Chief of Naval Operations; Naval Facilities Engineering Command); the Marine Corps (Headquarters Marine Corps; Marine Corps Installations Command; Environmental Management Division); and the Air Force (Headquarters Air Force; Air Force Civil Engineer Center Installations Directorate). We reviewed DOD data, plans, and agreements, and compared DOD s efforts to address criteria in the National Historic Preservation Act and DOD Instructions. Additionally, we met with officials from the Advisory Council on Historic Preservation and private developers, such as Balfour Beatty, Clark Realty Capital, Lendlease, Lincoln Military Housing, and Hunt Companies, to whom DOD has conveyed property under the Military Housing Privatization Initiative (MHPI). To gather detailed examples of DOD s historic preservation efforts, we visited historic properties at a non- generalizable sample of 10 installations. We selected these installations by analyzing DOD s fiscal year 2017 data on real property, limited our analysis to installations in the continental United States, and identified the number of buildings and structures ( properties ) in each state DOD reported as historic. We selected four states, California, Hawaii, Virginia, and Maryland, for reasons including the high concentration of historic properties in the state. To select installations in each state, we considered variation in military service representation, the number of historic properties at each installation, and geographic variation and proximity. During these visits, we interviewed officials representing environmental resource management, cultural resource management, and the department of public works, facilities management, along with privatized installation housing developers. Further, we met with relevant state stakeholders including State Historic Preservation officials in California, Hawaii, Maryland, and Virginia. We obtained documentary and testimonial evidence related to the identification, evaluation, management, and maintenance of historic properties. We also conducted semi-structured group discussions of those who work in historic properties. The results of our interviews and semi-structured group discussions are not generalizable to all DOD installations. To determine the extent to which DOD identifies and evaluates properties for historic significance, including homes that have been privatized, we reviewed prior GAO reports related to this issue, including a recent GAO report on DOD s real property data, including historic properties. We also requested and reviewed data related to historic properties, for each installation that we visited, including data on: the facility condition, plant replacement value, and facility utilization rate, among other data fields. We reviewed and compared the data from the military departments and from these selected installations. 
As discussed in this report, we identified limitations of the reported data on historic properties that have been identified and evaluated by DOD. Further, we compared DOD s efforts to ensure that privatized homes have been identified and evaluated for historic significance to guidelines in Department of Defense Instruction 4715.16, Cultural Resources Management, and Department of Defense Manual 4165.63, DOD Housing Management. We also obtained and assessed testimonial evidence about the process to identify and evaluate privatized homes for historic significance from officials from the military departments and private developers. To determine the extent to which DOD assesses the condition of its historic properties and has guidance on the training of installation personnel maintaining and working in historic properties, we conducted interviews with officials from within OSD, each military department and officials at the 10 installations we visited to identify efforts to manage and maintain historic properties. We also met with U.S. Army Corps of Engineers and DOD s Washington Headquarters Services to further understand their roles in historic property maintenance. We interviewed major developers who have, under the Military Housing Privatization Initiative, leased military housing from DOD and analyzed the process that is used to manage and maintain historic properties. We compared DOD s efforts to conduct inventories of historic properties to guidelines in Executive Order 13287, Preserve America, and DOD Instruction 4165.14, Real Property Inventory (RPI) and Forecasting. In addition, related to the maintenance of historic properties, we compared DOD s efforts to guidelines in DOD Instruction 4715.16, Cultural Resources Management, and the Standards for Internal Control in the Federal Government. In addition, at the 10 installations we visited, we collected physical and documentary evidence of DOD s management and maintenance practices at the installation level. We analyzed installation-level planning documents related to the management and maintenance of historic properties, specifically the installation Integrated Cultural Resource Management Plans (ICRMPs) of the installations we visited. The ICRMPs were from installations spread out across the country and represented all branches of the military. We analyzed the ICRMPs to determine if there were any common themes. We also reviewed a non-generalizable sample of 10 programmatic agreements one provided by each installation we visited to identify common themes. These themes cannot be generalized to all programmatic agreements. We conducted interviews with installation staff to understand their responsibilities for historic property management and maintenance. We interviewed state historic preservation officials to understand the relationship between installations and preservationists and efforts to preserve historic properties on installations. During our site visits to 10 installations, we conducted semi-structured group discussions with individuals who work in historic buildings to supplement our understanding of DOD s compliance with required policy and guidance, as well as any impact working in historic properties has on DOD employees. We used the military department data that informed our site selection, and queried the data to generate a random list of properties DOD identifies as historic. 
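As an illustration of this selection step, the sketch below draws a random sample of the kind used to identify buildings for the discussion groups. The facility identifiers and inventory size are invented; in practice, the list came from the military department real property data described above, filtered to properties coded as historic.

```python
# Hypothetical sketch of drawing a 20-building sample from a list of historic
# properties. Facility IDs and the size of the inventory are invented.
import random

historic_properties = [f"FAC-{n:04d}" for n in range(1, 201)]  # invented inventory of 200 assets

random.seed(2018)  # fixed seed so the illustration is reproducible
sample = random.sample(historic_properties, k=20)
print(sorted(sample))
```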
We provided each installation we visited with a list of 20 randomly selected historic properties and requested the installation s assistance in inviting unit members who work in these buildings to participate in a semi-structured group discussion. The participants of the semi-structured group discussions were asked to discuss their experiences working in historic buildings. The results of our semi-structured group discussions are not generalizable to all DOD installations. To conduct the analysis and summary of these discussion groups, we developed a record of analysis that listed the installations visited and overall topics posed to the unit discussion groups and assessed the extent to which unit members had similar or different experiences working in historic buildings. We identified themes that emerged for each discussion topic across these group discussions. We conducted this performance audit between March 2018 and June 2019, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Department of Defense (DOD) Relationships with State Historic Preservation Officials Installation cultural resource managers we spoke to at all 10 installations we visited said that they cultivate and maintain active relationships with their state historic preservation office (SHPO) and regularly communicate with them on preservation issues affecting their installations. Five out of the 10 cultural resource managers noted that maintaining a good working relationship with their SHPO made the consultation process more efficient. Officials we interviewed for two of the four SHPOs stated that being involved early in the consultation process with installation officials is more efficient and makes historic preservation an easier process by enabling them to receive feedback on proposed projects on historic properties, approval of programmatic agreements, and concurrence on their Integrated Cultural Resource Management Plans (ICRMPs) in a timely manner. For example, DOD officials at one military installation said they were able to use non-historic materials during renovations on a historic property in place of more costly period-accurate materials because the agreement with that SHPO facilitated such a solution. According to officials at this installation, SHPOs generally prefer the use of period-accurate materials on historic properties when conducting repairs and renovations. The officials, however, stated that they began consultations with the SHPO early in the process and were able to reach agreement that the historic nature of the property would not be adversely affected if non-historic materials were used. See figure 3 below. Officials from two of the four SHPOs said that due to positive working relationships between the installation and the SHPO, a programmatic agreement has been put in place to help manage the installation s historic properties. These programmatic agreements can be used to address routine maintenance activities for historic properties that can be carried out by the installation with no further consultation with the SHPO. 
In the four states that we visited, SHPO officials said they executed programmatic agreements with some installations that can save time and reduce the number of required consultation meetings. According to officials from two of the four SHPOs we interviewed, having programmatic agreements in place can increase the efficiency of the historic preservation process. Generally, these programmatic agreements can include the following:
Standard operating procedures. Programmatic agreements can include a number of routine maintenance plans pre-approved by the SHPO (such as the replacement of historic windows, repairing leaking historic roofs, and painting historic buildings) that an installation cultural resource manager can then follow without having to go through the consultation process.
Inventories of relevant properties. Programmatic agreements can include inventories of historic properties that are relevant to the agreement. Generally, the procedures outlined in the programmatic agreement would apply to all of the properties listed in the inventory.
Dispute resolution and emergency plans. Programmatic agreements can also include dispute resolution mechanisms between parties to the agreement and contingency plans for the maintenance and repair of historic properties in the event of an emergency.
DOD Instruction 4715.16 on cultural resource management states that installations should adapt and reuse existing structures at their installation before disposal, new construction, or leasing. Installations typically consult with the SHPO before renovation work can proceed on historic properties, but, according to officials at one installation, alternative solutions can be reached if there is a good working relationship. At one military installation we visited, a historic property formerly used by the National Aeronautics and Space Administration (NASA) and now used by the installation is in the process of being renovated and converted into additional office space. The concrete dome was used to test the aerodynamics of some of NASA's satellite and spaceship components and is being converted into a new conference room after the SHPO approved the installation's plan. See figure 4 below. While all of the installation cultural resource managers we spoke to told us they regularly communicate with their SHPO, and five of these cultural resource managers said that good working relationships with the SHPO made the consultation process more efficient, installation officials may still experience challenges when trying to address historic preservation concerns. For example, maintenance officials at four of the 10 installations expressed some concerns about a backlog of consultations, due in part to the increased time that they felt it takes to conduct these consultations. According to these officials, consultation backlogs caused delays to maintenance projects on historic properties at their installations.
For example, at seven of the 10 installations we visited, the ICRMPs state that all maintenance requests and work orders are reviewed by a board (or other body of internal stakeholders) composed of maintenance personnel, cultural resources staff (including the cultural resources manager), and other installation personnel. Officials from the military departments said that these boards are responsible for, among other duties, regularly identifying maintenance requests and work orders that affect historic properties and ensuring that the proper steps are carried out before addressing a maintenance request. Decisions by the board, results of SHPO consultations, and programmatic agreement requirements are then, according to officials from the military departments, passed down to maintenance personnel before they begin work on the historic property. At two of the other installations we visited, the installations' departments of public works review all maintenance requests and work orders, and at the remaining installation, the cultural resources manager reviews them, according to installation officials. During our visits to the military installations, cultural resource managers from eight of the 10 installations stated that they play a role in their installation's maintenance request/work order review process and that maintenance personnel are typically included in the process. For example, one installation we visited set up a work induction board composed of staff from the installation's Environmental Security Division (which handles cultural resources), maintenance staff, and other internal stakeholders. The senior official within the Environmental Security Division at this installation said the board meets on a weekly basis to determine whether proposed projects (such as maintenance requests and work orders) will affect historic properties at the installation. If the project involves a historic property, the installation's cultural resources manager becomes involved and determines the extent of the effect on the property's historic nature. This senior official also told us that the board checks in regularly on ongoing projects and monitors work being done on historic properties. Officials at another installation we visited said they treat any building that is 50 years old or older in their database as historic, and the maintenance division sends every new project to their installation's historic preservation division to ensure a review of the potential impacts of the maintenance requests or work orders. Appendix IV: Comments from the Department of Defense Appendix V: GAO Contact and Staff Acknowledgments <7. GAO Contact> <8. Acknowledgments> In addition to the contact named above, Brian Lepore, Director (retired); Maria Storts, Assistant Director; Whitney Allen; Ronnie Bergman; Aaron Chua; Christopher Gezon; Alexandra Gonzalez; Lori Kmetz; Amie Lesser; Emily Martin; Natalia Peña; Clarice Ransom; Jodie Sandel; Monica Savoy; and John Van Schaik made key contributions to this report. Related GAO Products
High-Risk Series: Substantial Efforts Needed to Achieve Greater Progress on High-Risk Areas. GAO-19-157SP. Washington, D.C.: March 6, 2019.
Defense Real Property: DOD Needs to Take Additional Actions to Improve Management of Its Inventory Data. GAO-19-73. Washington, D.C.: November 13, 2018.
Military Housing Privatization: DOD Should Take Steps to Improve Monitoring, Reporting, and Risk Assessment. GAO-18-218. Washington, D.C.: March 13, 2018.
High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017.
Defense Infrastructure: More Accurate Data Would Allow DOD to Improve the Tracking, Management, and Security of Its Leased Facilities. GAO-16-101. Washington, D.C.: March 15, 2016.
High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015.
Federal Real Property: Improved Data Needed to Strategically Manage Historic Buildings, Address Multiple Challenges. GAO-13-35. Washington, D.C.: December 11, 2012.
Defense Infrastructure: Military Services Lack Reliable Data on Historic Properties. GAO-01-437. Washington, D.C.: April 6, 2001.
Why GAO Did This Study
The National Historic Preservation Act of 1966 requires each federal agency to establish a preservation program that ensures properties are identified and evaluated for historic significance, as well as managed and maintained in a way that considers their preservation.
Senate Report 115-130, accompanying a bill for the Military Construction, Veterans Affairs, and Related Agencies Appropriations Act for fiscal year 2018, included a provision that GAO assess DOD's management of historic properties in use on U.S. installations. This report examines the extent to which DOD (1) identifies and evaluates properties for historic significance, including those that have been privatized, and (2) assesses the condition of its historic properties and has guidance on the training of installation personnel who maintain historic properties and those who work in them. GAO reviewed DOD fiscal year 2017 real property data and policies and procedures; visited a non-generalizable sample of 10 installations, selecting them based on factors such as military service representation and concentration of historic properties; and interviewed DOD officials, privatized housing developers, and installation personnel.
What GAO Found
The Department of Defense (DOD) reported that it has identified and evaluated about 60,000 of its approximately 375,000 properties on installations as historic as of October 2017. DOD's practice is to identify and evaluate property for historic significance as installations have an identified need for or a project planned for the property, according to DOD officials. However, GAO identified opportunities for DOD to enhance its efforts in several areas.
DOD lacks complete and consistent data on historic properties. Specifically, GAO identified data gaps and discrepancies between the data reported at the installation and department levels for fiscal year 2017. For example, for one installation, GAO found that 150 more historic properties were listed in its installation data than were listed in department-level data for that installation. In November 2018, GAO reported on issues concerning DOD's data and made recommendations to improve the data quality. DOD concurred and reported actions it plans to take to improve data quality. Doing so would help DOD to ensure it has complete information on properties of historic significance and prevent further data discrepancies.
DOD has limited visibility of privatized homes that could be historic. When the military departments transferred military homes to private developers, DOD officials said they also transferred the responsibility to identify and evaluate homes for historic significance to the private developers. However, the military departments do not verify that private developers are doing so. Private developers at seven of the nine installations with privatized housing that GAO visited said they do not identify or evaluate homes for historic significance. Taking steps to verify that private developers carry out this responsibility could help DOD ensure that renovations or repairs are not made to privatized properties that could compromise their historic nature.
Additionally, DOD does not routinely assess the condition of its historic properties and a lack of guidance on training could hamper maintenance and preservation efforts. First, inventories of historic properties, including physical inspections, required every 3 years, are not being conducted at six of the 10 installations GAO visited. Officials at these six installations said that the inventory was not conducted because they were unaware of or misunderstood the requirement. Second, while each installation GAO visited had an established process for approving maintenance work orders, DOD officials reported problems with the maintenance of historic properties at these installations, ranging from maintenance personnel not addressing issues, to maintenance being conducted improperly. At nine of the 10 installations GAO visited, individuals who work in historic buildings said that they believed maintenance personnel did not know what maintenance could or could not be done to the historic buildings, and installation officials expressed concerns about a lack of training related to historic preservation. By clarifying the requirement to conduct a physical inventory and developing guidance on training, DOD would be better positioned to preserve the historic properties under its purview.
What GAO Recommends
GAO is making seven recommendations, including that DOD take steps to verify that privatized military homes are identified and evaluated for historic significance; clarify the inventory requirement for historic properties; and develop guidance related to historic preservation training. DOD concurred with the recommendations.
<1. Background> Federal agencies and our nation's critical infrastructures rely on information technology systems that are highly complex and dynamic, technologically diverse, and often geographically dispersed. This complexity increases the difficulty in identifying, managing, and protecting the numerous operating systems, applications, and devices comprising their systems and networks. Further, federal systems and networks are at an increased risk of attack. This is due to those systems often being interconnected with other internal and external systems and networks, including the internet. Cloud computing relies on internet-based interconnectivity and resources to provide computing services to customers, while intending to free customers from the burden and costs of maintaining the underlying infrastructure. As federal agencies increasingly use cloud computing to perform their missions, the implementation of effective information security controls becomes more important. The effective implementation of a standardized process for securing cloud environments could reduce risks to agency systems and information maintained on an agency's behalf. The Federal Information Security Modernization Act of 2014 (FISMA) was enacted to provide a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets. The act requires federal agencies to develop, document, and implement an information security program, and evaluate the program's effectiveness. FISMA also requires OMB to develop and oversee the implementation of policies, principles, standards, and guidelines on information security in federal agencies, except with regard to national security systems. The law assigns OMB the responsibility of requiring agencies to identify and provide information security protections commensurate with assessments of risk to their information and information systems. In addition to implementing an agencywide security program, FISMA requires agencies to ensure the security of information and systems maintained by or on behalf of the agency. The law also applies to systems used or operated by a contractor or other organization on behalf of the agency, such as IT resources provided via cloud services. In December 2010, OMB issued a plan for improving IT management that included provisions for a decision framework to migrate IT services to cloud environments. Since then, OMB has developed cloud computing requirements, issued a number of cloud-related documents, and established FedRAMP. OMB cloud-related documents include:
Federal Cloud Computing Strategy, which was intended to accelerate the government's use of cloud computing by requiring agencies to evaluate safe, secure cloud computing options before making any new investments.
Security Authorization of Information Systems in Cloud Computing Environments, which established FedRAMP in December 2011.
2019 Federal Cloud Computing Strategy, issued in June 2019, which updates the 2011 Federal Cloud Computing Strategy, provides agencies with additional guidance on implementing cloud solutions, and emphasizes cloud security as one of the three pillars of successful cloud adoption.
In addition, the FedRAMP PMO established a framework for authorizing cloud services and guidance to help participants, including all agencies, implement it.
According to the program management office, the framework is based on NIST guidance that agencies are supposed to follow. In addition to the framework, the program management office issued guidance on how agencies can leverage existing security authorization packages. <1.1. Agencies Can Select from a Number of Cloud Service and Deployment Models> Agencies can select different cloud services to support their missions. These services can range from a basic computing infrastructure on which agencies run their own software, to a full computing infrastructure that includes software applications. In defining cloud service models, NIST identifies three primary models, as follows: Infrastructure as a Service (IaaS). The cloud service provider delivers and manages the basic computing infrastructure of servers, software, storage, and network equipment. The agency provides the operating system, programming tools and services, and applications. Platform as a Service (PaaS). The cloud service provider delivers and manages the infrastructure, operating system, and programming tools and services, which the agency can use to create applications. Software as a Service (SaaS). The service provider delivers one or more applications and all the resources (operating system and programming tools) and underlying infrastructure, which the agency can use on demand. In addition, agencies can choose from a variety of arrangements for obtaining cloud services (called cloud deployment models), ranging from a private cloud for one organization to sharing a public cloud. NIST identified the following four cloud deployment models: Private cloud. The service is set up specifically for one organization, although there may be multiple customers within that organization and the cloud may exist on or off the customer s premises. Community cloud. The service is set up for organizations with similar requirements. The cloud may be managed by the organizations or a third-party and may exist on or off the organization s premises. Public cloud. The service is available to the general public and is owned and operated by the service provider. Hybrid cloud. The service is a composite of two or more of the three deployment models (private, community, or public) that are bound together by technology that enables data and application portability. These deployment models differ from each other in the number of consumers they serve, the nature of various consumers data that may be present in the cloud environment, and the amount of control consumers have over their data. A private cloud can allow for its consumers to have ultimate control in selecting who has access to that cloud environment. Community clouds and hybrid clouds allow for a mixed degree of consumers control and knowledge of other consumers. A public cloud allows access by all interested consumers, but, in doing so, should not allow one consumer who uses it to know or control data that belong to other consumers of that environment. <1.2. FedRAMP Is a Government-wide Program for Authorizing Cloud Services> Established by OMB and managed by GSA, the FedRAMP program is intended to provide a standardized approach to securing systems, assessing security controls, and continuously monitoring cloud services used by federal agencies. According to GSA, this approach is a do once, use many times framework that potentially lowers government costs, eliminates duplications, and ensures the consistent application of federal security requirements. 
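The practical difference among the service models described above is which layers of the stack the provider manages and which remain with the agency; that split drives which security controls each party must implement and document under the program. The sketch below is a simplified, hypothetical illustration of that allocation, not an official NIST or FedRAMP responsibility matrix.

```python
# Hypothetical sketch of the management split implied by the NIST service
# models described above. The three layers and the provider/agency labels are
# a simplification for illustration, not an official allocation.

LAYERS = ["application", "operating_system_and_tools", "infrastructure"]

RESPONSIBILITY = {
    "IaaS": {"infrastructure": "provider", "operating_system_and_tools": "agency", "application": "agency"},
    "PaaS": {"infrastructure": "provider", "operating_system_and_tools": "provider", "application": "agency"},
    "SaaS": {"infrastructure": "provider", "operating_system_and_tools": "provider", "application": "provider"},
}

def agency_managed_layers(model):
    """Return the layers an agency still manages under a given service model."""
    return [layer for layer in LAYERS if RESPONSIBILITY[model][layer] == "agency"]

for model in ("IaaS", "PaaS", "SaaS"):
    managed = agency_managed_layers(model)
    print(model, "- agency manages:", ", ".join(managed) if managed else "(none of these layers)")
```

Generally, the fewer layers the agency manages, the more of the security control baseline it relies on the provider to implement, which is the split documented in the control implementation summary discussed later in this report.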
The goals of FedRAMP are to: ensure that cloud-based services used by government agencies have adequate safeguards in place; eliminate the duplication of effort to assess security controls, and reduce risk management costs; and enable rapid and cost-effective procurement of information systems/service for federal agencies. The program s key participants are the FedRAMP PMO, JAB, federal agencies, cloud service providers, and third-party assessor organizations. FedRAMP PMO. FedRAMP s PMO is headed by GSA and serves as the facilitator of the program. The office s responsibilities include managing the program s day-to-day operations, creating guidance and templates for agencies and cloud service providers to use for developing, assessing, authorizing, and continuously monitoring cloud services per federal requirements (e.g., FISMA). JAB. The JAB is made up of chief information officers from the Department of Defense (DOD), DHS, and GSA. It is the primary governing and decision-making body of the program. The JAB is responsible for defining and establishing FedRAMP baseline security controls and accreditation criteria for third-party assessment organizations. The JAB is also responsible for issuing a provisional authorization to operate (P-ATO) for cloud services it determines will be leveraged across most of the federal government. Federal agencies. They are consumers and, in some cases, providers of cloud services. Agencies are responsible for ensuring that cloud services which process, transmit, or store government information, use FedRAMP s baseline security controls before they issue subsequent authorizations for using those cloud services. Cloud service providers (CSP). These providers include commercial firms and some federal agencies that offer cloud services to agencies. Providers are required to meet the FedRAMP security requirements and implement the program s baseline security controls. Providers work with an independent third-party assessment organization to conduct an initial system assessment, create security assessment documentation per the program s requirements, and comply with federal requirements for incident reporting, among others. Third-party assessment organizations. These FedRAMP accredited assessors perform initial and periodic assessments of cloud providers controls to ensure they meet the program s requirements. In addition, these assessors must be accredited through FedRAMP if they are assessing a cloud provider seeking a provisional authorization from the JAB. For details on the roles and responsibilities of other entities involved with the program, see table 6 in appendix II. <2. The FedRAMP Security Assessment Framework Outlines Key Artifacts for Authorizing Cloud Services> In December 2015, the FedRAMP PMO developed a security assessment framework that is to be followed by the cloud service providers (providers) and agencies seeking to authorize cloud services through the program. In addition to outlining roles and responsibilities, the framework provides agencies and cloud service providers with guidance on elements key to issuing authorizations for using cloud services through the program. These elements are critical to developing the information system or cloud service authorization package. Authorization packages include, but are not limited to the following artifacts: a control implementation summary, the security plan, the security test plan and assessment report, and remedial actions plan. These artifacts are described in table 1. 
FedRAMP provides agencies with two options for authorizing cloud services. The first option, called a JAB authorization, involves the agency authorizing the cloud service based on a provisional authorization issued by the board. The second option, called an agency authorization, involves the agency issuing an authorization after either sponsoring a cloud service provider through FedRAMP, or by leveraging another agency s FedRAMP authorization of that cloud service provider. Using either of these options, the agency is to review the authorization package for that cloud service prior to issuing its authorization. In reviewing the package, the agency is to consider the cloud service s system impact level (low impact, moderate impact, or high impact), and deployment model, among other things, to help determine which authorization option is more appropriate. After an agency has reviewed the package and made a risk-based decision to authorize a cloud service for use, it is to formally document this decision in an authorization letter. The agency official authorizing the cloud service must provide a copy of the letter to the FedRAMP PMO. The PMO uses the information to verify agency use and keep other agencies informed of any changes to a provider s authorization. <3. Agencies Increased Their Use of FedRAMP, but Many Continued to Use Cloud Services Not Authorized through FedRAMP> As of July 2019, all 24 CFO Act agencies participated in FedRAMP. According to the program management office s documentation, from June 2017 through July 2019, these agencies use of FedRAMP authorizations increased from 390 authorizations to 926 authorizations. Specifically, the number of JAB authorizations increased from 155 to 317 a 105 percent increase. Further, the total number of agency sponsored and leveraged authorizations increased, from 235 to 609 a 159 percent increase. Figure 1 illustrates the increase in the number of FedRAMP authorizations for the 24 agencies from June 2017 through July 2019. <3.1. Agencies Reported a Higher Number of Authorizations for Software as a Service than for Other Cloud Services> Survey responses from 23 of 24 CFO Act agencies indicated that the highest number of cloud service authorizations through FedRAMP were for Software as a Service. Software as a Service accounted for 331 of the 590 reported authorizations or 56 percent. For the other two services, Infrastructure as a Service and Platform as a Service, agencies reported issuing 153 authorizations (26 percent) and 106 authorizations (18 percent), respectively. Figure 2, depicts the authorizations by agency and cloud service and shows that 18 of 23 agencies issued more authorizations for Software as a Service than Platform as a Service or Infrastructure as a Service. In addition, while agencies are consumers of cloud services, some agencies also serve as cloud service providers to other federal agencies. Four of 24 agencies reported that they served as cloud service providers to other federal agencies in FY 2017. All four agencies reported that their cloud services received authorizations that were approved through FedRAMP and used by other federal agencies. These four agencies reported a total of seven cloud services with an agency authorization and one cloud service with a provisional authorization from the JAB. <3.2. 
Agencies Reported Using Cloud Services That Were Not Authorized through FedRAMP> OMB required all agencies to use FedRAMP for authorizing cloud services by June 2014, and by June 2017, all of the 24 CFO Act agencies were using the program. However, the agencies also used cloud services that were not authorized through the program. In responding to our survey, the majority of the agencies (15 of 24) reported that they used cloud services that were not authorized through FedRAMP. For instance, one agency reported that it used 90 cloud services that were not authorized through FedRAMP and the other 14 agencies reported using a total of 157 cloud services that were not authorized through FedRAMP. Seven agencies responded that they only use cloud services authorized through FedRAMP. Two agencies did not provide a response for this question. Agencies provided varying explanations for using cloud services that were not authorized through FedRAMP. For example, officials from two of the agencies stated that they were unable to identify providers authorized through the program that could meet their unique needs. An official from a third agency noted that the efforts to meet the program s requirements were labor-intensive and that it was too expensive for the providers to become compliant with FedRAMP. In addition, that official stated that providers did not want to pursue FedRAMP compliance unless they had enough demand from federal customers. An official from a fourth agency stated that some of that agency s cloud services were considered to be private and, thus, did not need to be authorized through the program. Nevertheless, according to that official, the agency performed its own authorization actions to ensure that FedRAMP requirements were met. In a similar example, an official at another agency noted that it took a significant amount of time for a provider to complete the FedRAMP process and that the agency had to issue its own authorization while the provider was going through the process. That authorization had not yet been approved through FedRAMP. The survey responses of cloud service providers were consistent with the agencies responses and indicated that multiple agencies were using cloud services that were not authorized or approved through FedRAMP. For example, 31 of 47 providers that responded to our survey reported that, during FY 2017, agencies had used their cloud services and those services were not authorized by FedRAMP. According to one cloud service provider, agencies were using 30 of its cloud services that were not authorized through FedRAMP. Another cloud service provider reported that agencies were using nine of its cloud services that were not authorized through the program. Officials from the FedRAMP program management office also provided several reasons why agencies did not use the program for all of their cloud services. For example, one PMO official indicated agencies had misperceptions of the program, its process, and resources required for a FedRAMP authorization. The official also specified that agencies did not use the program for all their cloud services because of internal resource constraints based on other competing agency priorities. Based on our work, another potential reason that agencies authorize cloud services outside of the FedRAMP program is that OMB has not adequately monitored compliance with this requirement. 
As mentioned earlier, OMB has issued a number of policies encouraging agencies to adopt cloud computing solutions and requiring agencies to use FedRAMP for authorizing cloud services. Nevertheless, OMB has not monitored agencies' compliance or held agencies accountable for complying with the requirement to ensure that agencies are using the program to authorize their cloud services. According to an OMB technical specialist, the office collects and reviews data from the FedRAMP Marketplace to monitor agencies' use of the program. However, the office does not collect data on the extent to which federal agencies are using cloud services authorized outside of the program or oversee agencies' compliance with using FedRAMP. As a result, OMB and federal agencies have reduced assurance that security controls required by the program are being consistently implemented. Additionally, OMB may lack information on agencies' needs for cloud services.

<4. Selected Agencies Did Not Consistently Address Key Elements of FedRAMP's Authorization Process> Although the four selected agencies included key documents supporting FedRAMP's authorization process, they did not consistently include key information in those documents. Specifically, these four agencies did not consistently or fully address required information in system security plans, security assessment reports, and remedial action plans. In addition, the agencies did not always prepare letters authorizing the use of cloud services.

<4.1. Agencies' Authorization Packages Included Control Implementation Summaries> FedRAMP recommends that agencies use the FedRAMP Control Implementation Summary (CIS) when leveraging cloud services for their systems. In addition, FedRAMP specifies that agencies are to use NIST guidance when addressing their individual or shared control implementation responsibilities when leveraging cloud services. All 10 authorization packages we reviewed contained a summary, which identified agencies' control implementation responsibilities as well as those of the cloud service providers.

<4.2. Selected Agencies Did Not Consistently Document Required Information in System Security Plans> An objective of system security planning is to improve the protection of information system resources. A system security plan provides an overview of the security requirements for a system or cloud service and describes the controls that are in place or planned to meet those requirements. To identify controls that an agency will need to document in its security plan, the agency reviews the CIS, which lists both the agency's and the CSP's security control responsibilities. Further, NIST guidelines state that federal agencies' system security plans should identify:
- an explicitly defined authorization boundary for the system;
- how the system operates in terms of mission and business processes;
- the security categorization of the system, including supporting rationale;
- the operational environment of the system and connections to other information systems;
- the security controls in place or planned for meeting security requirements, including a rationale for supplementing controls; and
- a review and approval by the authorizing official or designated representative prior to plan implementation.

As shown in table 2, the four selected agencies had documented security plans for 10 systems. However, the agencies had not consistently addressed the required information in their plans.
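As a hedged illustration of the kind of completeness check these NIST elements imply, the short Python sketch below flags plan elements that are absent or left empty. The element keys and plan structure are hypothetical; they are not GAO's assessment instrument or any agency's template.

```python
# Hypothetical keys for the NIST-specified plan elements listed above.
REQUIRED_PLAN_ELEMENTS = {
    "authorization_boundary",
    "mission_and_business_processes",
    "security_categorization",
    "operational_environment_and_connections",
    "security_controls_in_place_or_planned",
    "authorizing_official_approval",
}

def unaddressed_elements(security_plan: dict) -> set:
    """Return required elements that are absent or left empty in the plan."""
    return {
        element for element in REQUIRED_PLAN_ELEMENTS
        if not security_plan.get(element)
    }

# Example: a plan that documents its boundary and categorization but has
# not yet described its controls or recorded the authorizing official's approval.
plan = {
    "authorization_boundary": "Agency system X and leveraged cloud service Y",
    "security_categorization": "moderate",
}
print(sorted(unaddressed_elements(plan)))
```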
As table 2 shows, the security plans for nine of the 10 selected systems did not fully address all required information. For example, three plans partially identified the operational environment of the system, such as identifying external connections, which could include the cloud service the agency system was leveraging. In addition, nine plans did not fully address the extent to which security controls were in place, including those listed as the agency's responsibility. Further, agencies did not provide complete support showing that their authorizing officials had reviewed and approved the plans for five systems. Specifically, agencies provided signed letters indicating that the agencies initially approved the plans. However, agencies did not provide documentation to show that subsequent changes to the system security plan after the date of the signed letters were reviewed and approved by the authorizing official. Additionally, one agency had an expired letter. Until agencies fully address required information in their security plans, including the controls relied on by the cloud service provider, they have reduced assurance that security controls are in place and operating as intended.

<4.3. Selected Agencies' Security Assessment Reports Did Not Consistently Summarize Control Effectiveness> NIST specifies that organizations document the results of security assessments in a security assessment report. According to FedRAMP's guidance, agencies are to use the Control Implementation Summary to identify controls that are their responsibility and assess agency-specific controls, inclusive of any agency controls that are shared with providers. The security assessment report is to summarize the control testing and describe whether the tested controls were effectively in place. As shown in table 3, agencies did not always summarize the testing of controls in security assessment reports. The four agencies prepared security assessment reports for each of the 10 selected systems. However, agencies summarized the results of control tests for only three of the 10 systems reviewed. USAID summarized the test results in the security assessment report for the agency system we reviewed, but the other three agencies did not consistently summarize their results. For example, HHS did not summarize test results for three controls for one system and six controls for another system. GSA did not summarize test results for 17 controls for one of its systems. If security assessment reports do not fully summarize the test results, agencies may have limited assurance that the controls intended to protect agency data in the cloud environment are in place and operating effectively.

<4.4. Selected Agencies' Remedial Action Plans Did Not Include Required Information> A remedial action plan assists agencies in identifying, assessing, prioritizing, and monitoring progress in correcting security weaknesses that are found in information systems. NIST guidelines specify that organizations develop a remedial action plan, also referred to as a plan of action and milestones, to document the organization's planned actions to correct weaknesses or deficiencies noted during the assessment of security controls of the information system. In addition, FedRAMP guidance stated that all agencies should follow FISMA, which requires agencies to have a process for documenting remedial actions to address any deficiencies in their information security policies, procedures, and practices.
OMB requires that remedial action plans include the following information: a description of the specific weakness; the name of the office or organization responsible for resolving the weakness; an estimate of the funding required to resolve the weakness, including the anticipated source of funding; an estimated completion date for resolving the weakness; key milestones with estimated completion dates; any changes to the key milestones and completion dates; the source of the identified weakness (e.g. security assessment, program review, inspector general audit, etc.); and the status of the corrective action (ongoing, completed, etc.). As shown in table 4, the four selected agencies documented remedial action plans for each of the selected systems, but did not consistently identify required information. As illustrated above, three plans partially identified the office responsible for addressing the weakness. Two plans did not include changes to information regarding key milestones and completion dates and two partially included the information. Further, two agencies partially identified the source of the weakness for three systems while a third agency did not identify any sources for the selected system. Until agencies include all required elements in their remedial action plans, they will be less likely to effectively assess, prioritize, and monitor efforts to resolve weaknesses in their systems. <4.5. Selected Agencies Did Not Consistently Prepare and Provide Authorization Letters to the FedRAMP PMO> OMB defines an authorization to operate as an official management decision where a federal official or officials authorize the operation of information system(s) and accept the risk to agency operations and assets, individuals, and other organizations based on the implementation of security and privacy controls. OMB requires agencies to use FedRAMP processes when granting authorizations to operate for their use of cloud services. According to FedRAMP PMO guidance, authorizing officials should document the authorization of (1) the agency system supported by the cloud service and (2) the cloud service used by the agency. Additionally, the agency should provide a copy of its authorization letter for the cloud service (cloud service authorization letter) to the FedRAMP program management office so that the office can verify the agency s use of the service and keep agencies informed of any changes to a provider s authorization status. As shown in Table 5, agencies did not consistently prepare and provide the FedRAMP PMO with the cloud service authorization letter. GSA prepared both system and cloud service authorization letters for its two selected systems. However, the other three agencies did not consistently prepare the letters. Specifically, USAID did not consistently prepare letters authorizing the cloud service and the system supported by the cloud service. In addition, HHS and EPA did not consistently prepare letters authorizing their use of the cloud services. Further, EPA, HHS, and USAID did not consistently provide the FedRAMP PMO with authorization letters for cloud services. Although GSA and an HHS component, CDC, provided cloud service authorization letters to the FedRAMP PMO, only HHS included the requirement to provide the letter to the FedRAMP PMO in its guidance. Three of the four selected agencies did not include this requirement in their guidance. Not including this requirement in their security guidance could be a potential reason for agencies inconsistent implementation. 
If agencies do not provide copies of their cloud service authorization letters to the program management office, the office may not have accurate information on which agencies are using approved cloud services. Further, the lack of such information could result in the office being delayed in notifying agencies when a service provider s authorization has been revoked or a provider has experienced a security incident. Agencies provided various reasons for not including required information in FedRAMP authorization documents. Such reasons included the agency was restricted from documenting proprietary information concerning the cloud service provider s portion of the shared control in the security plan and the agency was tracking all remedial actions, but the agency did not include them in the plan it provided to us. By not including the required information, agencies have reduced assurance that controls over cloud services have been effectively implemented. <5. Program Participants Reported Improved Security and other Benefits, but also Identified Challenges> FedRAMP participants identified a number of the program s benefits, such as improved security of agencies data and increased efficiency for providers to obtain authorizations. Participants also cited a number of challenges, such as the agency resources needed for authorizing a cloud service or the resources needed by the provider to implement the program s requirements. To address challenges, GSA has taken steps to improve the program, but its guidance on FedRAMP s requirements and participant s responsibilities was not always clear and the program s process for monitoring the status of security controls over cloud services was limited. <5.1. Participants Identified Various Challenges with Implementing FedRAMP> FedRAMP participants indicated that implementing certain elements of the program were challenging. Participants specifically identified the authorization process, remedial actions, and time and resources as key challenges. Authorization process and requirements. Complex authorization process. Surveyed participants agencies and cloud service providers responded that simplifying the agency authorization process would help them to better understand and manage their ongoing authorizations and continuous monitoring efforts. For example, 17 of 23 agencies, responding to this question, identified the agency authorization process as an area for improvement as did 30 of 47 surveyed cloud service providers. Survey respondents indicated that the agency authorization process should be streamlined to be less-restrictive and time-consuming. Agencies also reported that overcoming the complexity of the authorization process was one of their largest hurdles. According to the Director of FedRAMP, the FedRAMP PMO encourages agencies to streamline their agency authorization processes to be less- restrictive and time-consuming. Limitations with reviewing authorization packages. Agencies also identified reviewing authorization packages as a challenge. Agencies reported in the survey and during interviews that there were limitations in their ability to review cloud security packages prior to selecting a cloud service provider. Agencies that are currently using or want to evaluate specific FedRAMP authorized cloud services are able to access FedRAMP security packages directly through the FedRAMP Secure Repository, located on OMB MAX portal. 
However, agencies are given a 30-day period to access packages, which one agency official stated is too short of a time period for them to properly review documentation. Although access is limited to 30 days, agencies are able to renew the access by sending an email to the FedRAMP program management office. The Director of FedRAMP indicated that agencies can work directly with cloud service providers to obtain additional permissions to the package to save, print, email, post, publish, or reproduce. In addition, agencies expressed challenges with restrictions on downloading the packages, which limited their ability to automate their review of packages and subsequent monitoring of changes to the services security posture. Agencies also cited challenges with sharing review-related information due to the restrictive nature of cloud service nondisclosure agreements. The Director of FedRAMP mentioned that agencies can work directly with cloud service providers to obtain additional access permissions to their packages. Lack of uniform guidance for selecting cloud services. Federal agencies suggested that uniform guidance on authorization packages could assist FedRAMP customers in making better risk-based decisions in selecting cloud services. Agency officials we interviewed stated the quality and reviews of authorization packages approved through FedRAMP varied. Officials stated that inconsistencies in both FedRAMP agency and JAB provisional authorization packages have required some agencies to perform additional work. According to the officials, while the JAB process takes longer, the review appears to be more detailed than the agency process. Officials noted that improving guidance on reviewing authorization packages could help with the consistency and quality of the agency package reviews. The FedRAMP PMO has taken action and published guidance during our engagement to address more details of the authorization process. In addition, according to the Director of FedRAMP, the FedRAMP PMO launched a series of training events between February 2018 and June 2019 that provided detailed guidance into the package review process. Need for improved collaboration and coordination. Participants also identified opportunities for improving collaboration and coordination. Federal agencies suggested that improved collaboration among federal agencies in leveraging cloud services could provide transparency on the cloud service providers and the services other agencies are using. This could inform agencies on whether those services could be adopted to fit the need of their missions. Agencies also mentioned that FedRAMP PMO could improve its coordination across federal agencies and cloud service providers to provide consistent information and help facilitate opportunities to improve the program. For example, three participants suggested improving cross-agency collaborations for cloud authorizations. Additionally, one survey participant noted that improved collaboration within the cloud service provider community could provide a better understanding of the impacts and associated cost of potential changes to program s policies or requirements before they are made. According to officials from the FedRAMP PMO, their standard practice is to solicit feedback from industry and agency stakeholders prior to release of significant guidance. They added that they plan to continue collaborating with agency and industry partners. Remedial action process. 
In responding to our survey, 9 of 23 agencies reported that the lack of clarity on actions taken to resolve weaknesses in systems supporting cloud services was a major or moderate challenge. Specifically, two agencies cited this area as a major challenge and seven as a moderate challenge. Two agencies suggested that the program management office could make improvements by providing better visibility and traceability of the remedial action process to inform agencies on the risks associated with a cloud service. Participants responded that the remedial action process could be improved by having structured procedures for aggregating system vulnerabilities and deficiencies. This would provide agencies with better information on weaknesses identified by cloud service providers or their third party assessors in order to better consider risks prior to the purchase or use of cloud services. Additionally, agencies cited the need for improvements to the consistency of remedial action plans. Specifically, agencies cited the need for a consistent format and content of remedial action plans among security packages. Further, one cloud service provider stated that outcome-based performance metrics were a better measure of monitoring the status and effectiveness of the ongoing authorization and assessment of cloud services, as opposed to only relying on remedial action plans. According to the Director for FedRAMP, the FedRAMP PMO developed additional remedial action guidance in February 2018 and a dedicated webpage specific to the remedial action process in January of 2018. Additionally, the Director noted that for all JAB provisional authorizations, the FedRAMP PMO and JAB analyzes raw data on vulnerability scans and provides a one-page summary report that is available to agencies within the OMB MAX portal. Commitment of time and resources to complete and maintain an agency authorization. The amount of time to complete an agency authorization to operate for a cloud service was cited as one of the most challenging aspects of FedRAMP. In responding to our survey, six agencies cited the commitment of time and resources for agency authorizations as a major challenge; five agencies identified it as a moderate challenge; and six as a minor challenge. One responding agency mentioned that the time and costs associated with completing and maintaining an ongoing agency authorization was burdensome to both the agency and cloud vendor. This burden was due to a lack of allocated agency resources to continue implementing the program s requirements. In response to this challenge, the program management office has streamlined the authorization process for low-risk systems to allow for risk-based decisions that can reduce the time and resources required for an agency authorization. In addition, 36 of 47 cloud service providers responding to our survey indicated that the significant amount of resources required to implement the program s requirements for an authorization was a major or moderate challenge. Additionally, JAB technical representatives identified many of the challenges and opportunities for improving the program that agencies and cloud service providers identified. In addition, the officials stated that the FedRAMP PMO is aware of these issues and has taken steps to address them. 
According to the JAB technical representatives, the FedRAMP PMO's intended program improvements include, but are not limited to, updates to guidance and education resources, plans to automate the continuous monitoring process with vulnerability scanning tools, and reduced time and costs associated with completing the authorization process for both customer agencies and cloud service providers. According to the Director for FedRAMP, the FedRAMP PMO has continued to make enhancements based on industry and agency feedback. The official reported that numerous guidance documents relating to continuous monitoring, the agency authorization process, and FedRAMP designations were released during our engagement. The official also mentioned that the PMO actively seeks feedback from stakeholders and that additional opportunities for FedRAMP training were available.

<5.2. GSA Took Steps to Improve FedRAMP, but Program Guidance Was Not Always Clear and the Process for Monitoring Security Controls Was Limited> GSA has taken a number of steps to improve FedRAMP. Among other things, the office has provided updated instructions for completing authorization packages and established and updated its training portal to help agencies and cloud service providers better understand the steps required for obtaining an authorization. In addition, the office has taken steps to streamline the authorization process and provided additional guidance on continuous monitoring of security controls over cloud services. Nevertheless, FedRAMP's requirements and guidance on implementing controls were not always clear, and the program's process for monitoring the status of security controls over cloud services was limited.

Clarity in program requirements and responsibilities. Agencies reported challenges with understanding FedRAMP's requirements and the process for granting an agency authorization. Specifically, agencies cited the need for clearer guidance on requirements and agency responsibilities for completing and maintaining an authorization. Eight agencies reported the clarity of FedRAMP requirements associated with the agency authorization process as a moderate challenge, whereas nine identified it as a minor challenge; no agencies reported it as a major challenge. Five agencies reported this was not a challenge. In addition, 20 of 24 surveyed agencies indicated that additional guidance describing roles and responsibilities would be very or moderately useful to their participation in FedRAMP. Further, 37 of 47 cloud service providers specified that additional guidance for describing the security roles and responsibilities between agencies and cloud service providers was needed. Both agencies and cloud service providers commented that existing guidance for using the program does not fully address control implementation roles and responsibilities and that a process should be established to address these issues. Officials from selected agencies also indicated that responsibilities were not always clearly detailed. Specifically, HHS, GSA, and USAID officials stated that guidance for using FedRAMP could be clearer in helping define roles and responsibilities between agencies and providers in implementing security controls for cloud services. The JAB technical representatives we interviewed acknowledged that while control implementation responsibilities between the agency and cloud service provider are defined in the Control Implementation Summary, in some cases, shared responsibilities are not clearly delineated.
The JAB technical representatives stated that the unclear shared responsibilities could lead to inconsistent implementation of certain controls between the agency and its provider. According to the Director of FedRAMP, it is the cloud service providers responsibility to ensure the spreadsheet identifying control responsibilities are completed accurately and consistently. Our analysis of agency documentation of required information in authorization packages found that the cause of selected agencies gaps in required information for security plans, security assessment reports and remedial action plans were due in part, to unclear guidance for implementing their control responsibilities. If responsibilities are not clear, agencies may have reduced ability to ensure that controls over the cloud services they authorized are in place and effective. Limited capabilities for continuously monitoring security controls. FedRAMP s continuous monitoring process does not allow for an automated review of control requirements by agencies with security management tools. According to NIST SP 800-137, security continuous monitoring is maintaining an ongoing awareness of information security, vulnerabilities, and threats to support organizational risk management decisions. In addition, NIST mentions that timely, relevant, and accurate information is vital, particularly when resources are limited and agencies must prioritize their efforts. According to the program s officials, they will be working with NIST to incorporate automation into the authorization process. Based on our work and survey responses from agencies and cloud service providers, a number of weaknesses with the program s continuous monitoring process existed. For example, copy-protected PDFs, Word documents, and Excel spreadsheets comprised the remedial action plans and other documents supporting continuous monitoring of FedRAMP cloud service provider controls. Because of the static nature of the documents, including restrictions on copying information concerning cloud service provider controls, the documents could not be readily integrated with agencies automated security management tools in providing ongoing awareness of control implementation. Further, agency staff would have to spend time manually accessing and reviewing the documents each time they needed to determine the status of a cloud service s implementation of a particular control. Agency personnel would also have to confirm that the documents they reviewed were the most current version. According to the Director of FedRAMP, agencies may request unrestricted access to the security package directly from the provider. Agencies survey responses also indicated that: 1) remedial action plans, used in continuous monitoring, were not updated consistently, 2) the manual process did not allow for automated data feeds into their continuous monitoring tools, and 3) restrictions on copying documents reduced information sharing within the agency. Further, 21 of 23 agencies responded that FedRAMP s continuous monitoring of cloud security controls was a needed area of improvement. Cloud service providers also reported difficulties (36 of 47) with implementing continuous monitoring which could highlight the need for further improvements. In response, the Director of FedRAMP indicated that as of October 30, 2018, the FedRAMP PMO consolidated all continuous monitoring guidance documents, templates, and blog posts to a single webpage for ease of access by program stakeholders. 
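To illustrate the kind of automation agencies described wanting, the following Python sketch compares two locally saved monthly plan of action and milestones (POA&M) exports and flags weaknesses that were opened or closed between them. The file format, column names, and workflow are assumptions made for illustration; FedRAMP does not prescribe this tooling, and a real provider export may differ.

```python
import csv
from pathlib import Path

def open_poam_items(path: Path) -> dict:
    """Read a POA&M export (CSV) and return open items keyed by weakness ID.

    Assumes hypothetical column names ("POAM ID", "Status", "Weakness").
    """
    with path.open(newline="", encoding="utf-8") as handle:
        rows = csv.DictReader(handle)
        return {
            row["POAM ID"]: row["Weakness"]
            for row in rows
            if row.get("Status", "").strip().lower() != "completed"
        }

def report_changes(previous: Path, current: Path) -> None:
    """Flag weaknesses opened or closed between two monthly exports."""
    before, after = open_poam_items(previous), open_poam_items(current)
    for poam_id in sorted(after.keys() - before.keys()):
        print(f"NEW open weakness {poam_id}: {after[poam_id]}")
    for poam_id in sorted(before.keys() - after.keys()):
        print(f"CLOSED weakness {poam_id}")

# Example usage with locally saved monthly exports (hypothetical file names):
# report_changes(Path("poam_2018_09.csv"), Path("poam_2018_10.csv"))
```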
JAB technical representatives also acknowledged challenges with implementing continuous monitoring, such as difficulties with using continuous monitoring reports to assess the security posture of a cloud service. According to JAB technical representatives, agencies are responsible for reviewing continuous monitoring reports from the cloud service providers, but not all agencies could effectively conduct continuous monitoring. For example, an agency's continuous monitoring efforts could be affected by not receiving a timely notification that its cloud service provider has uploaded the required monthly continuous monitoring updates, including updates to remedial actions. According to the Director of FedRAMP, the OMB MAX portal provides the capability for agencies to receive automatic notifications when continuous monitoring documentation is updated. Agencies can enable these notifications by selecting the "Watch this Page" option in the menu bar. While the FedRAMP PMO recommends that agencies enable this feature, agencies were not aware of it. As a result, agencies may not be aware that such updates have taken place and tend to be reliant on a provider's ability to ensure that effective security practices are in place. The JAB technical representatives commented that as cloud services evolve and mature, the continuous monitoring process needs to become more automated and user-friendly to provide real-time awareness of the security status of cloud services. Until the PMO allows for more options to automate continuous monitoring, agencies may have less assurance that they will receive timely information on the extent to which controls are being effectively implemented for the cloud services they are using. In addition, as more federal agencies move toward DHS's Continuous Diagnostics and Mitigation program, automation may become even more important.

<6. Conclusions> Although federal agencies increased their use of FedRAMP, they continued to authorize the use of cloud services that had not been approved through the program. While OMB requires agencies to use FedRAMP to authorize the use of cloud services, it did not monitor or ensure that agencies used the program to authorize cloud services. As a result, agencies have less assurance that security controls over cloud services have been consistently implemented. The selected agencies did not fully address key elements necessary for implementing the FedRAMP authorization process. Agencies did not consistently address required information for implementing controls, summarizing control tests, and tracking corrective actions. In addition, agencies also did not always provide the FedRAMP PMO with their cloud service authorization letters. By not fully addressing these elements, agencies have less assurance that they have effectively implemented security controls intended to protect their data in cloud environments and that those controls are operating as intended. FedRAMP participants identified a number of benefits as well as challenges with the program. Among other benefits, several agencies indicated that FedRAMP improved the security of their data. However, participants identified challenges with the program and areas where the program could be improved. GSA has taken a number of actions toward improving and furthering the program's progress; nonetheless, unclear guidance and limitations with FedRAMP's continuous monitoring process could hamper the program's effectiveness and result in agencies implementing the program unevenly. <7.
Recommendations for Executive Action> We are making a total of 25 recommendations 1 recommendation to OMB and 24 recommendations to the 4 selected agencies in our review, including additional recommendations to GSA as the FedRAMP program lead. The Director of OMB should establish a process for monitoring and holding agencies accountable for authorizing cloud services through FedRAMP. (Recommendation 1) The Administrator of GSA should direct the Director of FedRAMP to clarify guidance to agencies and cloud service providers on program requirements and responsibilities. (Recommendation 2) The Administrator of GSA should direct the Director of FedRAMP to improve the program s continuous monitoring process by allowing more automated capabilities, including for agencies to review documentation. (Recommendation 3) The Administrator of GSA should update security plans for selected systems to include the description of security controls and reviews and approvals plan. (Recommendation 4) The Administrator of GSA should update the security assessment report for the selected system to identify the summarized results of control effectiveness tests. (Recommendation 5) The Administrator of GSA should update the list of corrective actions for selected systems to identify the responsible office and estimated funding required and anticipated source of funding. (Recommendation 6) The Administrator of GSA should develop guidance requiring that cloud service authorization letters be provided to the FedRAMP program management office. (Recommendation 7) The Secretary of HHS should direct the Director of CDC to update the security plan for the selected system to identify the authorization boundary, the system operational environment and connections, a description of security controls, and the individual reviewing and approving the plan and date of approval. (Recommendation 8) The Secretary of HHS should direct the Director of CDC to update the security assessment report for the selected system to identify the summarized results of control effectiveness tests. (Recommendation 9) The Secretary of HHS should direct the Director of CDC to update the list of corrective actions for the selected system to identify the specific weaknesses, funding source, changes to milestones and completion dates, identified source of weaknesses, and status of corrective actions. (Recommendation 10) The Secretary of HHS should direct the Administrator of CMS to update the system security plans for selected systems to identify a description of security controls. (Recommendation 11) The Secretary of HHS should direct the Administrator of CMS to update the security assessment report for selected system to identify the summarized results of control effectiveness tests. (Recommendation 12) The Secretary of HHS should direct the Administrator of CMS to update and document the CMS remedial action plan for the selected system to identify the anticipated source of funding. (Recommendation 13) The Secretary of HHS should direct the Administrator of CMS to prepare letters authorizing the use of cloud services for the selected systems and submit the letters to the FedRAMP program management office. (Recommendation 14) The Secretary of HHS should direct the Director of NIH to update security plans for selected systems to identify the authorization boundary, system operation in terms of mission and business processes, operational environment and connections, and a description of security controls. 
(Recommendation 15) The Secretary of HHS should direct the Director of NIH to update the security assessment report for selected systems to identify summarized results of control effectiveness tests. (Recommendation 16) The Secretary of HHS should direct the Director of NIH to update the NIH list of corrective actions for selected systems to identify estimated funding and anticipated source of funding, key milestones with completion dates, and changes to milestones and completion dates. (Recommendation 17) The Secretary of HHS should direct the Director of NIH to submit the division s letters authorizing the use of cloud services for the selected systems to the FedRAMP program management office. (Recommendation 18) The Administrator of EPA should update security plan for the selected operational system to identify a description of security controls, and the individual reviewing and approving the plan and date of approval. (Recommendation 19) The Administrator of EPA should update the security assessment report for the selected operational system to identify the summarized results of control effectiveness tests. (Recommendation 20) The Administrator of EPA should update the list of corrective actions for the selected operational system to identify the specific weakness, estimated funding and anticipated source of funding, key remediation milestones with completion dates, changes to milestones and completion dates, and source of the weaknesses. (Recommendation 21) The Administrator of EPA should prepare the letter authorizing the use of cloud service for the selected operational system and submit the letter to the FedRAMP program management office. (Recommendation 22) The Administrator of EPA should develop guidance requiring that cloud service authorization letter be provided to the FedRAMP program management office. (Recommendation 23) The Administrator of USAID should update the list of corrective actions for the selected system to include the party responsible for addressing the weakness, and source of the weakness. (Recommendation 24) The Administrator of USAID should prepare the letter authorizing the use of cloud service for the selected system and submit the letter to the FedRAMP program management office. (Recommendation 25) <8. Agency Comments and Our Evaluation> We provided a draft of this report to OMB and the 24 CFO Act agencies for review and comment. In response, we received comments from OMB and the four agencies (GSA, HHS, EPA, and USAID) to which we made recommendations. Specifically, in comments provided via email on October 15, 2019, an OMB Associate General Counsel stated that OMB neither agreed nor disagreed with our draft recommendation that it establish a process for monitoring and enforcing agency compliance with its guidance on using FedRAMP. The official asserted that OMB does not have a mechanism for enforcing agencies compliance with its guidance on FedRAMP. However, we believe OMB can and should hold agencies accountable for complying with its policies. Policies without accountability mechanisms present the risk that the benefits expected from their implementation will likely not be realized. To ensure our position is clearly stated, we modified the recommendation to state that OMB should establish a process for monitoring and holding agencies accountable for authorizing cloud services through FedRAMP. In addition, the OMB Associate General Counsel stated that the report did not appropriately reflect FedRAMP s progress. We disagree. 
Although identifying the program's progress was not one of our objectives, we highlighted several areas throughout the report where progress was achieved, such as the agencies' increasing use of the program to authorize cloud services and the development of additional guidance and training opportunities for using the program. The OMB Associate General Counsel also commented on the duration of the audit. Additionally, OMB commented that our use of surveys on agencies' and cloud service providers' use of FedRAMP did not address whether the program was meeting its overall objectives, but presented more of a perception. As discussed in the scope and methodology for this review, and consistent with our objectives, the purpose of the surveys was to obtain program participants' views on the program's benefits and challenges and their use of the program. Additionally, our review, as designed, including our timelines, allowed us the opportunity to best assess the implementation of the program. OMB also provided technical comments, which we have incorporated into our report as appropriate.

In its written comments, GSA concurred with each of our six recommendations. The agency stated that it is developing a plan to address the recommendations. GSA's comments are reprinted in appendix IV. In written comments, HHS concurred with each of our 11 recommendations. One operating division, CDC, noted that our observations were narrowly focused on authorization artifacts and did not take its FISMA-compliant authorization process into account. We disagree. Our reviews of its FedRAMP authorization processes included procedures for reviewing security practices that are required under FISMA. The department stated that it would work with its operating divisions to address our recommendations. HHS's comments are reprinted in appendix V. The agency also provided technical comments, which we incorporated into the report as appropriate.

EPA provided written comments, in which it disagreed with the findings for two recommendations, partially agreed with the findings for one recommendation, and disagreed with two other recommendations. EPA disagreed with the finding supporting our recommendation to update the security plans for the two selected systems to identify specific required information. The agency stated that one of the systems we selected for review was no longer in production and not used for EPA's operations. Nevertheless, the agency stated that its chief information security officer would coordinate with the agency's information security officers to ensure that security plans for the systems used to support its operations include all required information. We acknowledged in the report that EPA discontinued the system after we completed our review of the system's authorization package. However, our recommendation in the draft report did not clearly convey that it was intended only for the operational system. Thus, we revised the recommendation to specify the system in operation. EPA disagreed with the finding supporting our recommendation to update the security control assessment report for one of the selected systems to identify the summarized results of control effectiveness tests. The agency stated that it used a FedRAMP-certified third-party assessor that provided full documentation of control test results.
However, neither the security assessment report nor other documents that EPA provided to us summarized information on how the agency tested the effectiveness of its corrective actions to rectify a critical control that had previously failed. As a result, EPA had limited assurance that it had effectively implemented a control that was intended to protect agency data in the cloud environment. Accordingly, we believe that our recommendation is warranted. EPA partially agreed with the finding supporting our recommendation to update the list of corrective actions for the selected systems to identify specific required information. The agency stated that one of the systems we selected for review was no longer in production and not used for EPA's operations. In addition, the agency said that the Chief Information Security Officer would coordinate with agency information security officers to ensure that plans of corrective actions and milestones include all required information, as appropriate. We acknowledged in the report that EPA discontinued its use of the system after we completed our review of the system s authorization package. However, our recommendation in the draft report did not clearly convey that it was intended only for the operational system. As a result, we revised the recommendation to specify the system in operation. EPA disagreed with our recommendation that the agency prepare letters authorizing the cloud services for the selected systems and submit the letters to the FedRAMP program management office. The agency stated that one of the systems we selected for review was no longer in production and not used for EPA's operations. We acknowledged in the report that EPA had discontinued the system after we completed our review of the system's authorization package. However, our recommendation in the draft report did not clearly convey that it was intended only for the operational system. We have revised the recommendation accordingly. EPA also stated that it prepares and sends authorization letters for cloud services to the FedRAMP PMO. However, at the time of our review, the FedRAMP PMO stated it had not received the cloud service authorization letter from EPA for the selected operational system. We believe that our revised recommendation for EPA to prepare and send the cloud service authorization to the FedRAMP PMO for the operational system is warranted. EPA disagreed with our recommendation that the agency develop guidance requiring cloud service authorization letters to be provided to the FedRAMP program management office. The agency stated that it had a standard operating procedure in which the EPA Chief Information Security Officer forwards the letters to the FedRAMP program management office. However, the agency did not provide us a copy of the standard operating procedure or otherwise demonstrate that it had such an operating procedure. Thus, we continue to believe that the recommendation is warranted. EPA s comments are reprinted in appendix VI. The agency also provided technical comments, which we incorporated into the report, as appropriate. Further, in written comments, USAID concurred with two of our three recommendations, but did not concur with the third. Specifically, USAID concurred with the two recommendations for the agency to update the list of corrective actions for the selected system and prepare the letter authorizing the use of cloud services supporting the system and submit it to the FedRAMP program management office. 
However, USAID did not concur with our recommendation to update the system security plan for the selected system to identify the authorization boundary, system operational environment and connections, and a description of security controls. The agency provided additional information that it had documented the authorization boundary, system operational environment and connections, and security controls for the selected system. Upon our review of the information, we agreed that the agency had sufficiently documented these items. Accordingly, we revised our report to reflect the agency s actions and withdrew the recommendation from the report. USAID s comments are reprinted in appendix VII. In addition to the aforementioned responses, two agencies the Department of Veterans Affairs and the Social Security Administration provided written responses stating that they had no comments on the draft report. These agencies responses are reprinted in appendixes VIII and IX, respectively. Also, the Department of Justice provided technical comments, which we incorporated into the report as appropriate. Sixteen CFO agencies provided emails stating that they had no comments on the draft report. These agencies were the Departments of Agriculture, Commerce, Defense, Education, Energy, Homeland Security, Housing and Urban Development, the Interior, Labor, State, Transportation, and the Treasury; as well as the National Aeronautics and Space Administration, National Science Foundation, Nuclear Regulatory Commission, and Office of Personnel Management. We did not receive a response from one agency the Small Business Administration. We are sending copies of this report to appropriate congressional committees, the Director of the Office of Management and Budget, the 24 CFO Act agencies; and other interested parties. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report, please contact Gregory C. Wilshusen at (202) 512-6244 or WilshusenG@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix X. Appendix I: Objectives, Scope, and Methodology Our objectives were to determine the extent to which 1) federal agencies used FedRAMP to authorize the use of cloud services, 2) selected agencies addressed key elements of the program s authorization process, and 3) program participants identified FedRAMP benefits and challenges. The scope of our review included the 24 agencies covered by the Chief Financial Officers Act. To address the three objectives, we developed one survey for the 24 agencies and another survey for 83 cloud service providers identified by the FedRAMP Program Management Office (PMO) as participating in the program. We administered these web-based surveys between April and November 2018. We sent two follow-up email messages to all nonrespondents and subsequently attempted to contact the remaining nonrespondents by telephone or email at least twice more. To inform our survey questions and options, we designed our questionnaire based on FedRAMP PMO documentation and interviews with the 24 agencies and cloud service providers. We pretested the surveys with three major federal agencies, three cloud service providers, and one internal GAO group. 
We requested that agency chief information officers and chief information security officers review and confirm the results of the survey. We received completed surveys from 24 of 24 agencies (a 100 percent response rate) for our agency survey and 47 of the 83 cloud service providers identified (a 57 percent response rate) for our cloud service provider survey. Not all survey respondents provided answers to all survey questions. With any survey, error can be introduced with respect to measurement of concepts, representation of respondents, and other factors, and we took steps to minimize these errors. We conducted a nonresponse bias analysis to determine whether certain cloud service providers might have been more or less likely to respond to the survey than others. Specifically, we examined whether a cloud service provider's service model (e.g., Software as a Service, Infrastructure as a Service, Platform as a Service), impact level (e.g., high, moderate, low), or deployment model (e.g., government, hybrid, private) was related to whether the CSP responded to the survey. We found that a higher share of cloud service providers that provide Software as a Service (SaaS) responded to the survey than those that provide Infrastructure as a Service (IaaS). In addition, we found that a higher share of cloud service providers that deployed in the government community cloud responded to the survey than those that deployed in the public cloud. These results suggest that cloud service providers that utilize certain service or deployment models were more likely to reply to the survey than others. As a result, the responses of the cloud service provider survey represent only those cloud service providers that participated in this survey, and are not generalizable to cloud service providers as a whole. Despite these limitations, the survey results provide insight into the experiences and views of cloud service providers that did respond.

In addition to the surveys, to address our first objective, we examined 2017, 2018, and 2019 Joint Authorization Board (JAB) and agency authorization data from the 24 agencies to determine whether there was an increase, decrease, or no change in the usage of the program. We also interviewed knowledgeable officials from the 24 agencies and the FedRAMP PMO to obtain their views on the program. To address our second objective, we selected four agencies from the 24 agencies based on those with the highest and lowest numbers of FedRAMP PMO-reported FedRAMP authorizations as of June 15, 2017. We selected the four agencies by dividing them into three equal groups of eight agencies based on the highest to lowest number of FedRAMP PMO-reported service authorizations. We selected at least one agency with the highest number of authorizations through FedRAMP in each group, unless we conducted prior FedRAMP work with the agency. Given that two agencies in the third group had the same number of services authorized, we selected both agencies, as one had a higher number of reported provisional authorizations through the FedRAMP Joint Authorization Board process and the other had the higher number of reported authorizations through the FedRAMP agency process. To avoid a duplication of our efforts given limited resources, we excluded DOD because another GAO team was reviewing the department's cloud-related efforts, which included leveraging FedRAMP authorizations.
As a result, we selected the Department of Health and Human Services, General Services Administration, the Environmental Protection Agency, and the United States Agency for International Development for our review. Because HHS is a large federated agency, we selected three operating divisions for a more detailed review. The three operating divisions included the Centers for Disease Control and Prevention (CDC), Centers for Medicare and Medicaid (CMS), and National Institutes of Health (NIH). We selected these divisions based on their extensive usage of cloud service providers authorized through FedRAMP. To select the agency systems authorization packages for review, we first identified six cloud services based on FedRAMP PMO data that indicated as of June 15, 2017, the 24 agencies used these cloud services the most. We then requested the selected agencies to provide us with an inventory of systems that relied on the six cloud services in fiscal years 2017 and 2018. From these inventories, we selected 10 agency systems. However, due to sensitivity concerns, we are not disclosing the names of the systems in this report. The case studies we selected are not generalizable to the other agencies covered by the Chief Financial Officers Act. However, it may show the potential FedRAMP issues other agencies face. For each agency system, we reviewed security authorization documentation, including: cloud service provider documentation, such as the Control Implementation Summary on agency and cloud service provider responsibilities to determine the extent agencies documented selected core controls and consistently documented responsibilities in the system security plan; security plans to determine the extent to which plans documented and implemented selected identified core security controls, and met FedRAMP and National Institute of Standards and Technology (NIST) elements; security assessment reports to determine if the effectiveness of selected core controls had been assessed and operating as intended; the extent to which agencies documented remedial action plans for selected systems to determine if they met FedRAMP or Office of Management and Budget (OMB) elements; and authorization letters to determine the extent appropriate officials approved a cloud service and agency system for use. To select identified core controls as part of our authorization documentation review, we identified and selected 24 security controls from the 97 identified core controls. Then, to determine the agencies compliance with the FedRAMP authorization process to assure the protection of agency data, we compared the authorization documentation with the Federal Information Security Modernization Act of 2014, the Federal Risk and Authorization Management Program guidance, including the program s Security Assessment Framework, OMB guidance, and NIST Special Publication 800-53 Revision 4. Each authorization package area was examined and reviewed by an analyst and each conclusion was corroborated by a second analyst. Where there was disagreement in the assessment, analysts discussed their analysis and reached a consensus. In addition, we interviewed security representatives and management officials from our selected agencies to determine the effectiveness of the FedRAMP authorization process in reviewing the controls necessary for securing agency data in the cloud, and potential rationale for deficiencies identified in authorization documentation. 
We also interviewed FedRAMP PMO and OMB staff on their efforts related to the FedRAMP authorization process. To address our second and third objectives, we also interviewed JAB technical representatives to obtain their views on the benefits and challenges of FedRAMP. Additionally, we obtained information about how the JAB technical representatives reviewed authorization packages.

To determine the reliability of the data used to select agencies and of other data to address our three objectives, we assessed the following:

the FedRAMP program management office's points of contact list provided for active cloud service providers and federal agency users of FedRAMP;

FedRAMP program management office data on the 24 CFO Act agencies' fiscal years 2017, 2018, and 2019 JAB and agency authorizations;

FedRAMP program management office data on cloud service provider participation and agency usage of FedRAMP as of June 15, 2017;

agency inventories of systems relying on selected cloud services;

cloud service provider authorization documentation contained within OMB's MAX portal and cloud service provider repositories;

cloud service provider and agency-reported third-party assessment organizations' security assessment reports; and

agency plans of action and milestones.

To assess the reliability of the information received and reviewed on the FedRAMP marketplace, we collected and reviewed information on agencies' quality control procedures and asked program officials relevant questions on the FedRAMP authorization log standard operating procedure. We reviewed GSA program officials' responses to our data reliability questions, such as how the information was generated, how current the data provided were, how frequently the data were updated, and how the data were accurately and consistently entered into the system used. The limitation FedRAMP officials noted was that the data generated were based on voluntarily provided authorization to operate letters submitted to the FedRAMP program management office by each of the CFO Act agencies.

To ensure that the agency systems we reviewed relied on selected cloud service provider products, we had agencies confirm their use of the service supporting the agency's system. We then compared the selected services with agencies' annual FISMA reporting to OMB, along with system security documentation (e.g., system security plans), to determine whether the cloud services we selected were applicable to the selected agency system. A limitation of this method of selection is that, if an agency's inventory is inaccurate, we would need to reselect a system. For this review, one agency's inventory and system documentation were incomplete, resulting in the removal of that agency system from our selection.

To confirm agencies' virtual access to packages in OMB's repository or a cloud service provider's repository, we obtained screen captures of web portal contents from the FedRAMP PMO. We compared these screen captures with our own virtual access to the packages. We also obtained additional information from the FedRAMP PMO on how it ensures the accuracy and reliability of the cloud service provider package information. One limitation of this method is that cloud service providers could update documentation where access was outside of the OMB MAX portal, and the PMO may not be immediately aware of package updates. To verify the accuracy and reliability of plans of action and milestones provided by agencies, we compared the agency's plans of action and milestones with required OMB elements.
We also requested that agencies describe how they generated the plans of action and milestones provided to us, identify the quality control procedures used, and note any limitations of the data they provided. We evaluated the materiality of the information we obtained and compared it to our audit objectives. We assessed the reliability of the information by reviewing related documents and internal controls, such as agency policies and procedures, as well as by examining packages stored in OMB's MAX portal and cloud service provider repositories. We also interviewed knowledgeable agency officials. Through these methods, we concluded that the information was sufficiently reliable for the purposes of our reporting objectives.

We conducted this performance audit from November 2016 to December 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: FedRAMP Roles and Responsibilities

Office of Management and Budget (OMB): Issues policies that define the key requirements and capabilities of the FedRAMP program; oversees and reports on agencies' implementation of information security requirements, including implementation of FedRAMP.

FedRAMP Program Management Office (PMO): Develops processes for agencies and providers to request FedRAMP security authorization; creates a framework for agencies to leverage security authorization packages; establishes a centralized and secure repository for authorization packages that agencies can leverage to grant security authorizations; coordinates with the National Institute of Standards and Technology (NIST) and the American Association for Laboratory Accreditation to implement a formal conformity assessment to accredit assessors; develops templates for standard contract language and service level agreements, memorandums of understanding, and/or memorandums of agreement; and is led by GSA and serves as a liaison to ensure effective communication among all participants.

Joint Authorization Board (JAB): Defines and updates the FedRAMP security authorization requirements; approves accreditation criteria for third-party assessment organizations; reviews security assessment packages of cloud service providers to grant provisional authorizations; ensures provisional authorizations are reviewed and updated regularly; and notifies agencies of changes to or removal of provisional authorizations.

National Institute of Standards and Technology (NIST): Advises FedRAMP on FISMA compliance guidance and assists in developing the standards for the accreditation of independent third-party assessment organizations (3PAO).

Federal Chief Information Officer (CIO) Council: Distributes FedRAMP information to federal CIOs and other representatives through cross-agency communications and events.

Department of Homeland Security (DHS): Assists government-wide and agency-specific efforts to provide adequate, risk-based, and cost-effective cybersecurity; coordinates cybersecurity operations and incident response; develops continuous monitoring standards for ongoing cybersecurity of federal information systems; and develops guidance on agency implementation of the Trusted Internet Connection program with cloud services.
Appendix IV: Comments from the General Services Administration

Appendix V: Comments from the Department of Health and Human Services

Appendix VI: Comments from the Environmental Protection Agency

Appendix VII: Comments from the U.S. Agency for International Development

Appendix VIII: Comments from the Department of Veterans Affairs

Appendix IX: Comments from the Social Security Administration

Appendix X: GAO Contact and Staff Acknowledgments

<9. GAO Contact> Gregory C. Wilshusen, (202) 512-6244 or wilshuseng@gao.gov.

<10. Staff Acknowledgments> In addition to the individual named above, Sara Ann W. Moessbauer (Director), Larry Crosland (Assistant Director), Rosanna Guerrero (Analyst-in-Charge), Sherrie Bacon, Nabajyoti Barkakati, Christina Bixby, David Blanding, Chris Businsky, Fatima Jahan, David Plocher, Dana Pon, Carl Ramirez, Cynthia Saunders, and Priscilla Smith made significant contributions to this report.

Why GAO Did This Study
Federal agencies use internet-based (cloud) services to fulfill their missions. GSA manages FedRAMP, which provides a standardized approach to ensure that cloud services meet federal security requirements. OMB requires agencies to use FedRAMP to authorize the use of cloud services.
GAO was asked to review FedRAMP. The objectives were to determine the extent to which 1) federal agencies used FedRAMP to authorize cloud services, 2) selected agencies addressed key elements of the program's authorization process, and 3) program participants identified FedRAMP benefits and challenges. GAO analyzed survey responses from 24 federal agencies and 47 cloud service providers. GAO also reviewed policies, plans, procedures, and authorization packages for cloud services at four selected federal agencies and interviewed officials from federal agencies, the FedRAMP program office, and OMB.
What GAO Found
The 24 federal agencies GAO surveyed reported using the Federal Risk and Authorization Management Program (FedRAMP) for authorizing cloud services. From June 2017 to July 2019, the number of authorizations granted through FedRAMP by the 24 agencies increased from 390 to 926, a 137 percent increase. However, 15 agencies reported that they did not always use the program for authorizing cloud services. For example, one agency reported that it used 90 cloud services that were not authorized through FedRAMP and the other 14 agencies reported using a total of 157 cloud services that were not authorized through the program. In addition, 31 of 47 cloud service providers reported that during fiscal year 2017, agencies used providers' cloud services that had not been authorized through FedRAMP. Although the Office of Management and Budget (OMB) required agencies to use the program, it did not effectively monitor agencies' compliance with this requirement. Consequently, OMB may have less assurance that cloud services used by agencies meet federal security requirements.
Four selected agencies did not consistently address key elements of the FedRAMP authorization process (see table). Officials at the agencies attributed some of these shortcomings to a lack of clarity in the FedRAMP guidance.
Program participants identified several benefits, but also noted challenges with implementing FedRAMP. For example, almost half of the 24 agencies reported that the program had improved the security of their data. However, participants reported ongoing challenges with resources needed to comply with the program. GSA took steps to improve the program, but its FedRAMP guidance on requirements and responsibilities was not always clear, and the program's process for monitoring the status of security controls over cloud services was limited. Until GSA addresses these challenges, agency implementation of the program's requirements will likely remain inconsistent.
What GAO Recommends
GAO is making one recommendation to OMB to enhance oversight, two to GSA to improve guidance and monitoring, and 22 to the selected agencies, including GSA. GSA and HHS agreed with the recommendations, USAID generally agreed, EPA generally disagreed, and OMB neither agreed nor disagreed. GAO revised four recommendations and withdrew one based on new information provided; it maintains that the remaining recommendations are warranted.
States with approved work requirements were in various stages of implementation as of August 2019, and three states faced legal challenges to implementation. The requirements were in effect in Arkansas for 9 months before a federal district court vacated the approval in March 2019. Work requirements became effective in Indiana in January 2019 and will be enforced beginning in January 2020. CMS s approval of work requirements in Kentucky was vacated in March 2019 several days before the work requirements were set to become effective on April 1, 2019. As of August 2019, CMS was appealing the court decisions vacating demonstration approvals in Arkansas and Kentucky. Other states requirements are approved to take effect in fiscal years 2020 and 2021. (See fig. 2.) <1.3. Federal Funding for Administrative Costs to Implement Work Requirements> Implementing work requirements, as with other types of beneficiary requirements, can involve an array of administrative activities by states, including developing or adapting eligibility and enrollment systems, educating beneficiaries, and training staff. In general, CMS provides federal funds for 50 percent (referred to as a 50 percent matching rate) of state Medicaid administrative costs. These funds are for activities considered necessary for the proper and efficient administration of a state s Medicaid program, including those parts operated under demonstrations. CMS provides higher matching rates for certain administrative costs, including those related to IT systems. For example, expenditures to design, develop, and install Medicaid eligibility and enrollment systems are matched at 90 percent, and maintenance and operations of these systems are matched at 75 percent. States may also receive federal funds for administrative activities delegated to MCOs. The amount of federal Medicaid funds states receive for payments to MCOs that bear financial risk for Medicaid expenditures is determined annually by a statutory formula based on the state s per capita income, known as the Federal Medical Assistance Percentage (FMAP). The FMAP sets a specific federal matching rate for each state that, for fiscal year 2019, ranges from 50 percent to 76 percent. There are exceptions to this rate for certain populations, providers, and services. For example, states that chose to expand Medicaid under the Patient Protection and Affordable Care Act (PPACA) receive a higher FMAP for newly eligible adults, equal to 93 percent in 2019. (See fig. 3.) <1.4. CMS Oversight of Administrative Costs> CMS has several different related processes under which the agency oversees Medicaid administrative costs, including those for demonstrations. Demonstration approval, monitoring, and evaluation. States seeking demonstration approvals must meet transparency requirements established by CMS. For example, states must include certain information about the expected changes in expenditures under the demonstration in public notices seeking comment at the state level and in the application to CMS, which is posted for public comment at the federal level. In addition, CMS policy requires that demonstrations be budget neutral that is, that the federal government should spend no more under a demonstration than it would have without the demonstration. Prior to approval, states are required to submit an analysis of their projected costs with and without the demonstration. CMS uses this information to determine budget neutrality and set spending limits for demonstrations. 
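As a rough illustration of that with-and-without comparison, the sketch below projects a without-demonstration baseline from a trended per-member-per-month cost and checks a state's with-demonstration projection against it. All figures are hypothetical, and CMS's actual budget neutrality methodology is considerably more detailed; the sketch shows only the basic comparison.

```python
# Simplified illustration of a budget neutrality comparison. All figures are
# hypothetical, and CMS's actual methodology is more detailed than this sketch.

def without_demo_baseline(pmpm_cost, annual_trend, member_months_by_year):
    """Project spending without the demonstration: a base per-member-per-month
    (PMPM) cost, trended forward each year and multiplied by projected member
    months."""
    baseline = 0.0
    for year, member_months in enumerate(member_months_by_year):
        trended_pmpm = pmpm_cost * (1 + annual_trend) ** year
        baseline += trended_pmpm * member_months
    return baseline

# Hypothetical inputs for a 5-year demonstration period.
member_months = [1_200_000] * 5              # projected member months per year
spending_limit = without_demo_baseline(pmpm_cost=550.0,
                                        annual_trend=0.045,
                                        member_months_by_year=member_months)

projected_demo_spending = 3.55e9             # state's with-demonstration projection
print(f"Without-demonstration spending limit: ${spending_limit:,.0f}")
print("Within limit" if projected_demo_spending <= spending_limit else "Exceeds limit")
```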
During the demonstration, CMS is responsible for monitoring the state s compliance with the terms and conditions of the demonstration, including those related to how Medicaid funds can be spent and the demonstration spending limit. States must also evaluate their demonstrations to assess the effects of the policies being tested, which could include impacts on cost. Review and approval of federal matching funds for IT projects. To request higher federal matching rates for changes to Medicaid IT systems, including eligibility and enrollment systems, states must submit planning documents to CMS for review and approval. States plans must include sufficient information to evaluate the state s goals, procurement approach, and cost allocations within a specified budget. States may request funds for system development related to a proposed demonstration before the demonstration is approved. Funding can be approved and expended under the approved plan while the demonstration application is being reviewed. States submit updates to planning documents annually for CMS review, which can include requested changes to the approved budget. Quarterly expenditure reviews. In order to receive federal matching funds, states report their Medicaid expenditures quarterly to CMS, including those made under demonstrations. Expenditures associated with demonstrations, including administrative expenditures, are reported separately from other expenditures. CMS is responsible for ensuring that expenditures reported by states are supported and allowable, meaning that the state actually made and recorded the expenditure and that the expenditure is consistent with Medicaid requirements. With regard to consistency, this includes comparing reported expenditures to various approval documents. For example, CMS is responsible for comparing reported demonstration expenditures against the special terms and conditions that authorize payment for specified services or populations and establish spending limits. CMS is also responsible for reviewing states reported expenditures against budgets in states planning documents to ensure that states do not exceed approved amounts. A list of GAO reports related to these CMS oversight processes is included at the end of this report. <2. States Work Requirements Varied in Terms of Target Population, Required Activities, and Consequences of Non-Compliance> States took different approaches to designing work requirements under their Medicaid demonstrations. These requirements varied in terms of the beneficiary groups subject to the requirements; the required activities, such as frequency of required reporting; and the consequences beneficiaries face if they do not meet requirements. <2.1. Beneficiaries Subject to Work Requirements and Required Activities> In the nine states with approved work requirements as of May 2019, we found differences in the age and eligibility groups subject to work requirements, and, to a lesser extent, the number of hours of work required and frequency of required reporting to the state. For example: Age and eligibility groups subject to work requirements. Four of these states received approval to apply the requirements to adults under the age of 50, similar to how certain work requirements are applied under the Supplemental Nutrition Assistance Program (SNAP). Among the other five states, approved work requirements apply to adults up to the age of 59 (Indiana and Utah), 62 (Michigan), and 64 (Kentucky and New Hampshire). 
States generally planned to apply the requirements to adults newly eligible under PPACA or a previous coverage expansion, but some states received approval to apply the requirements to additional eligibility groups, such as parents and caretakers of dependents. Number of hours of work required and frequency of required reporting. Under approved demonstrations in seven states, Medicaid beneficiaries must complete 80 hours of work or other qualifying activities per month to comply with work requirements. Five states approved demonstrations require beneficiaries to report each month on their hours of work or other qualifying activities, using methods approved by the state, such as online or over the phone. (See table 1.) We saw similar variation under the seven state applications that were pending as of May 2019. All nine states with approved work requirements as of May 2019 exempted several categories of beneficiaries and counted a variety of activities as meeting the work requirements. For example, all nine states exempted from the work requirements people with disabilities, pregnant women, and those with certain health conditions, such as a serious mental illness. In addition, depending on the state, other groups were also exempted, such as beneficiaries who are homeless, survivors of domestic violence, and those enrolled in substance use treatment programs. States also counted activities other than work as meeting the work requirements, such as job training, volunteering, and caregiving for non-dependents. In addition to work requirements, eight of the nine states received approval under their demonstrations to implement other beneficiary requirements, such as requiring beneficiaries to have expenditure accounts. (See app. I for more information on these other beneficiary requirements.) <2.2. Beneficiary Consequences for Non-Compliance> The consequences Medicaid beneficiaries faced for non-compliance and the timing of the consequences varied across the nine states with approved work requirements. The consequences for non-compliance included coverage suspension and termination. For example, Arizona received approval to suspend beneficiaries coverage after 1 month of non-compliance. In contrast, Wisconsin will not take action until a beneficiary has been out of compliance for 4 years, at which time coverage will be terminated. Three states (Arkansas, Michigan, and Wisconsin) imposed or planned to impose a non-eligibility period after terminating a beneficiary s enrollment. For example, under Arkansas demonstration, after 3 months of non-compliance, the beneficiary was not eligible to re-enroll until the next plan year, which began in January of each year. Thus, beneficiaries could be locked out of coverage for up to 9 months. (See table 2.) For states with pending applications, suspension or termination of coverage takes effect after 2 or 3 months of non- compliance. For states that suspend coverage for beneficiaries, there are different conditions for coming into compliance and lifting the suspension. For example: Arizona received approval to automatically reactivate an individual s eligibility at the end of each 2-month suspension period. In other states, such as Indiana, beneficiaries must notify the state that they have completed 80 hours of work or other qualifying activities in a calendar month, after which the state will reactivate eligibility beginning the following month. (See text box.) 
Indiana s Suspension Process for Non-Compliance with Medicaid Work Requirements At the end of each year, the state reviews beneficiaries activities related to work requirements. Beneficiaries must meet the required monthly hours 8 out of 12 months of the year to avoid a suspension of Medicaid coverage. If coverage is suspended for not meeting work requirements, the suspension will start January 1 and could last up to 12 months. During a suspension, beneficiaries will not be able to access Medicaid coverage to receive health care. Beneficiaries with suspended Medicaid coverage can reactivate coverage if they become medically frail; or employed, enrolled in school, or engaged in volunteering. Beneficiaries must contact the state to reactivate coverage. To prevent suspension from taking effect, two states (Kentucky and New Hampshire) require beneficiaries to make up required work hours that were not completed in order to maintain compliance with work requirements. For example, in Kentucky, if the beneficiary worked 60 hours in October (20 hours less than the required 80), the beneficiary must work 100 hours in November to avoid suspension of coverage in December. <3. Available Estimates of Costs to Implement Work Requirements Varied among Selected States, with the Majority of Costs Expected to Be Financed by Federal Dollars> Available estimates of the costs to implement Medicaid work requirements varied considerably among the five selected states, and these estimates did not account for all costs. These states estimated that federal funding would cover the majority of these costs, particularly costs to modify IT systems. <3.1. Selected States Estimates of Administrative Costs Associated with Work Requirements Ranged from Millions to Hundreds of Millions of Dollars> Selected states (Arkansas, Indiana, Kentucky, New Hampshire, and Wisconsin) reported estimates of the costs to implement work requirements that ranged from under $10 million in New Hampshire to over $250 million in Kentucky. These estimates compiled by states and reported to us did not include all planned costs. The estimates were based on information the states had readily available, such as the costs of contracted activities for IT systems and beneficiary outreach, and primarily reflect up-front costs. Four selected states (Arkansas, Indiana, Kentucky, and New Hampshire) had begun implementing work requirements and making expenditures by the end of 2018. Together, these states reported to us having spent more than $129 million in total for implementation activities from the time the states submitted their demonstration applications through the end of 2018. (See table 3.) Several factors may have contributed to the variation in the selected states estimated costs of administering work requirements, including planned IT system changes and the number of Medicaid beneficiaries subject to the work requirements. IT system changes. Selected states planned distinct approaches to modify their IT systems in order to administer work requirements. For example: Indiana, which implemented work requirements by expanding on an existing work referral program, planned to leverage existing IT systems, making modifications expected to result in IT costs of $14.4 million over 4 years. In contrast, Kentucky planned to develop new IT system capabilities to communicate, track, and verify information related to work requirements. 
Kentucky received approval to spend $220.9 million in fiscal years 2019 and 2020 to do that and make changes needed to implement other beneficiary requirements in its demonstration. Number of beneficiaries subject to requirements. The estimated cost of some activities to administer work requirements depended on the number of Medicaid beneficiaries subject to work requirements, which varied across selected states. For example: Kentucky estimated 620,000 beneficiaries would be subject to work requirements including those who may qualify for exemptions and estimated costs of $15 million for fiscal years 2019 and 2020 to conduct beneficiary education, outreach, and customer service. In contrast, Arkansas had fewer beneficiaries subject to work requirements (about 115,000 in February 2019, with about 100,000 of those eligible for exemptions) and estimated fewer outreach costs. The state estimated $2.9 million in costs from July 2018 through June 2019 to conduct education and outreach. As noted earlier, states available estimates did not include all expected Medicaid costs. For example, four of the five selected states planned to use MCOs or other health plans to help administer work requirements, but two of these four did not have estimates of the associated costs. Indiana and Kentucky estimated additional payments to MCOs $20.7 million in Indiana to administer work requirements in 2019 and $50.7 million in Kentucky to administer its demonstration from July 2018 through June 2020. In contrast, officials in New Hampshire told us that no estimates were available. In Arkansas, where beneficiaries receive premium support to purchase coverage from qualified health plans on the state s health insurance exchange, plans were instructed to include the costs of administering work requirements in the premiums, according to Arkansas officials. State officials and representatives from a qualified health plan we spoke with could not provide the amount that the state s premium assistance costs increased as a result. States estimates also did not include all ongoing costs that they expect to incur after the up-front costs and initial expenditures related to implementation of the work requirements. States had limited information about ongoing costs, but we collected some examples. For instance, New Hampshire provided estimated costs of $1.6 million to design and implement the evaluation of its demonstration, which all states are required to perform. In addition, officials or documents in each selected state acknowledged new staffing costs that may be ongoing, such as Indiana s costs for five full-time employees to assist beneficiaries with suspended coverage to meet requirements or obtain exemptions. Finally, states reported that administering Medicaid work requirements will increase certain non-Medicaid costs costs that are not funded by federal Medicaid, but are borne by other federal and state agencies, stakeholders, or individuals. For instance, New Hampshire officials planned to use approximately $200,000 to $300,000 in non-Medicaid funds for six positions performing case management for workforce development. Similarly, in July 2017, Indiana estimated that providing beneficiaries with job skills training, job search assistance, and other services would cost $90 per month per beneficiary, although state officials said these costs were uncertain after learning they were not eligible for federal Medicaid funds. 
In addition, beneficiaries and entities other than states, such as community organizations, may incur costs related to the administration of work requirements that are not included in states estimates. <3.2. Selected States Estimated the Federal Government Would Pay the Majority of Administrative Costs Associated with Work Requirements> All five selected states expected to receive federal funds for the majority of estimated costs and expenditures (described previously) for implementing work requirements. For example, the four selected states that provided data on expenditures to administer work requirements through 2018 (Arkansas, Indiana, Kentucky, and New Hampshire) expected the portion of those expenditures paid by the federal government to range from 82 percent in Indiana to 90 percent in New Hampshire and Kentucky. These effective matching rates exceed the 50 percent matching rate for general administrative costs, largely due to higher matching rates of 75 and 90 percent of applicable IT costs. For example, Kentucky received approval to spend $192.6 million in federal funds for its $220.9 million in expected IT costs over 2 years to implement work requirements and other beneficiary requirements, an effective match rate of 87 percent. In addition to higher federal matching rates for IT costs, the selected states receive federal funds for the majority of MCO capitation payments, which the states planned to increase to pay MCOs costs to administer work requirements. Each of the three states that planned to use MCOs to administer work requirements planned to increase capitation payments in order to do so. For example, Indiana planned to increase capitation payments to MCOs by approximately 1 percent (or $20.7 million in 2019) to pay for a variety of ongoing activities to administer work requirements, including requiring MCOs to help beneficiaries report compliance, reporting beneficiaries who qualify for exemptions, and helping the state verify the accuracy of beneficiary reporting, according to state officials. The federal government pays at least 90 percent of capitation payments to MCOs to provide covered services to beneficiaries who are newly eligible under PPACA, the primary population subject to work requirements among the five selected states. Indiana and Kentucky also received approval to apply work requirements to other populations, and capitation payments for these other populations receive federal matching rates of 66 percent in Indiana and 72 percent in Kentucky in fiscal year 2019. States approaches to implementing work requirements can affect the federal matching funds they receive. For example, Arkansas officials told us that the state decided to collect information on beneficiary compliance through an on-line portal the initial cost of which received an effective federal matching rate of 87 percent, according to Arkansas. Officials told us that the state avoided having beneficiaries report compliance to staff costs of which receive a 75 percent matching rate. However, after approximately 17,000 beneficiaries lost coverage due to non-compliance with work requirements, Arkansas revised its procedures to allow beneficiaries to report compliance to state staff over the phone. Three of the five selected states sought to leverage other programs funded by the federal government to help implement work requirements or provide beneficiary supports, such as employment services. 
Kentucky officials reported piloting elements of Medicaid work requirements using its SNAP Employment and Training program. Similarly, Arkansas officials sought a waiver to be able to use TANF funds to provide employment services to individuals without children in order to serve Medicaid beneficiaries subject to work requirements. New Hampshire also used TANF funds to provide employment services to Medicaid beneficiaries who were also enrolled in TANF. <4. Weaknesses Exist in CMS s Oversight of Administrative Costs of Demonstrations with Work Requirements> CMS does not consider administrative costs when approving any demonstrations including those with work requirements though these costs can be significant. The agency has recently taken steps to obtain more information about demonstration administrative costs. However, we identified various weaknesses in CMS s oversight of administrative costs that could result in states receiving federal funds for costs to administer work requirements that are not allowable. <4.1. CMS s Approval Process Does Not Take into Account How a Demonstration Will Affect Administrative Costs> CMS s demonstration approval process does not take into account the extent to which demonstrations, including those establishing work requirements, will increase a state s administrative costs. CMS policy does not require states to provide projections of administrative costs in their demonstration applications or include administrative costs in their demonstration cost projections used by CMS to assess budget neutrality. CMS officials explained that in the past demonstrations had generally not led to increases in administrative costs, and as such, the agency had not seen a need to separately consider these costs. However, the officials told us and have acknowledged in approval letters for demonstrations with work requirements, that demonstrations may increase administrative costs. Kentucky provides an example of this, reporting to us estimated administrative costs of approximately $270 million including about $200 million in federal funds to implement the demonstration over 2 years. However, neither Kentucky nor the other four selected states provided estimates of their administrative costs in their applications to CMS, and CMS officials confirmed that no additional information on administrative costs was provided by the states while their demonstration applications were being reviewed. By not considering administrative costs in its demonstration approval process, CMS s actions are counter to two key objectives of the demonstration approval process: transparency and budget neutrality. Transparency. CMS s transparency requirements are aimed at ensuring that demonstration proposals provide sufficient information to ensure meaningful public input. However, CMS officials told us that they do not require the information states provide on the expected changes in demonstration expenditures in their applications to account for administrative costs. This information would likely have been of interest in our selected states, because public commenters in each state expressed concerns about the potential administrative costs of these demonstrations. In prior work, we reported on weaknesses in CMS s policies for ensuring transparency in demonstration approvals. Budget neutrality. 
The aim of CMS s budget neutrality policy is to limit federal fiscal liability resulting from demonstrations, and CMS is responsible for determining that a demonstration will not increase federal Medicaid expenditures above what they would have been without the demonstration. However, CMS does not consider administrative costs when assessing budget neutrality. For three of our five selected states, the demonstration special terms and conditions specify that administrative costs will not be counted against the budget neutrality limit. Even though demonstrations administrative costs can be significant, CMS officials said the agency has no plans to revise its approval process either to (1) require states to provide information on expected administrative costs to CMS or the public, or to (2) account for these costs when the agency assesses whether a demonstration is budget neutral. CMS officials explained that the agency needs more experience with policies that require administrative changes under a demonstration before making any revisions to its processes. Without requiring states to submit projections of administrative costs in their demonstration applications, and by not considering the implications of these costs for federal spending, CMS puts its goals of transparency and budget neutrality at risk. This is inconsistent with federal internal control standards that call for agencies to identify, analyze, and respond to risks related to achieving program objectives. <4.2. CMS Has Taken Steps to Collect New Information on Administrative Costs, yet Risks May Remain of CMS Providing Federal Funds for Work Requirement Costs that Are Not Allowable> CMS recently implemented procedures that may provide additional information on demonstrations administrative costs. These included implementing new procedures to identify costs specific to demonstrations when approving federal matching funds for states planned IT costs and issuing guidance on monitoring and evaluating demonstrations. However, it is unclear whether these efforts will result in data that improve CMS s oversight. (See table 4.) In addition to these new initiatives, states quarterly expenditure reports provide CMS with some information on their demonstration administrative costs, but this information also has limitations. States are required to separately track and report administrative expenditures attributable to their demonstrations in their quarterly expenditure reports. However, CMS officials told us that states typically use the same resources, such as staff, to administer their demonstrations and their regular Medicaid program, which can affect the demonstration costs states report. We found that about a quarter of states with demonstration expenditures in fiscal year 2017 reported no administrative expenditures related to their demonstrations. CMS officials acknowledged that the data states submit in their quarterly expenditure reports may not provide a meaningful measure of states demonstration-related administrative costs. CMS s recently implemented procedures may provide more information on the amounts states are spending on demonstration administrative costs, but they do not address weaknesses we found in CMS s oversight of administrative costs. In four of the five selected states, we identified examples of states requesting federal matching funds for costs to administer work requirements that do not appear to be allowable, or at higher matching rates than appropriate under CMS guidance. 
In some cases, states received CMS approval for planned administrative costs, while in others it was unclear whether CMS would have identified the issues through its oversight procedures. Areas of risk included funds for planned IT costs, funds for beneficiary supports, and funds provided under managed care contracts.

Federal funds for planned IT costs that may not be allowable or eligible for higher matching rates. Three of our five selected states requested and received funding approval for planned IT costs to implement their demonstrations that did not appear to be allowable or that were at higher matching rates than appropriate under CMS guidance. Kentucky and Indiana requested and received funding approval for planned IT costs that do not appear to be allowable under CMS guidance. Kentucky requested and received CMS approval for funds (at the 90 percent federal matching rate) for a contract that included activities to assist Medicaid beneficiaries in obtaining employment. (See text box.) However, CMS's 2018 guidance states that Medicaid funding is not available to finance beneficiary supports, such as job training or other employment services. CMS officials said that the agency did not review the contract and approved the request based on Kentucky's assertion that these costs were specific to technology. Indiana received approval for IT funds to develop a website that provides beneficiaries access to information and tools to seek, acquire, and retain employment, costs that also appear related to beneficiary supports.

Kentucky Received Approval of Information Technology Funding for Activities Aimed at Helping Beneficiaries Obtain Employment

In 2018, in an update to its information technology budget request, Kentucky included costs for a contract with the state's Department of Workforce Services to assist Medicaid beneficiaries in developing skills needed to obtain and retain employment. The contracted services included activities such as assessing beneficiaries' eligibility for non-Medicaid programs, providing services to beneficiaries at career assistance centers, and making referrals to other agencies and programs. Kentucky budgeted $21 million for this contract at a 90 percent federal matching rate ($18.9 million in federal funds) for fiscal year 2019 and another $21 million at a 75 percent matching rate ($15.8 million in federal funds) for fiscal year 2020. CMS approved Kentucky's budget request without reviewing the contract.

Indiana and New Hampshire received funding approval for federal IT funds at the 90 percent matching rate for costs that do not appear eligible for that rate. In 2018, CMS approved Indiana's request for a 90 percent match rate to pay $500,000 in consulting fees to develop work requirement policies, despite CMS guidance indicating that policy research and development activities should be matched at 50 percent. New Hampshire requested and received CMS approval in 2018 for federal funds at a 90 percent matching rate for $180,000 in costs to educate beneficiaries about work requirements, including costs to place outreach calls through an existing contracted call center. CMS guidance indicates that these costs should receive funding at a lower matching rate.

Federal funds for beneficiary supports that are not allowable. Wisconsin requested and planned to seek federal funds for beneficiary support costs that are not allowable, until our work identified the issue for CMS.
Wisconsin officials told us that it was their understanding during the planning phase of the demonstration that administrative costs incurred by state programs providing such services were eligible for federal matching funds. State officials said that CMS officials told them on multiple occasions that the state could receive a 50 percent federal match for these costs. Based on this, the state requested budget authority from its legislature for $51.2 million for employment and training services, of which it anticipated $23.1 million would come from federal Medicaid funds. CMS officials told us that such costs are not eligible for federal matching funds and maintained that the agency s guidance which indicates that beneficiary support costs are not eligible for federal matching funds was clear. In response to our inquiries, the agency contacted the state in April 2019 and clarified this with officials. Federal funds for costs to administer work requirements provided through managed care contracts, which may not be allowable. As noted earlier, three of the five selected states (Indiana, Kentucky, and New Hampshire) required or planned to require MCOs to perform a number of activities to implement work requirements. These activities included, for example, providing information on options to satisfy work requirements, assisting beneficiaries with reporting compliance with work requirements, and providing referrals to state work requirement resources. To fund these activities, officials in these states said that they plan to increase their capitation payments. States will receive at least a 90 percent federal matching rate for most of these payments, because the payments are largely for beneficiaries who are newly eligible under PPACA. It is unclear, however, whether including these activities in capitation payments is allowable. CMS regulations provide that states may only include administrative costs that are related to the provision of covered health care services in their MCO capitation payments. In addition, CMS guidance notes that implementing work requirements will not change the types of expenditures that are allowable. We provided CMS with specific examples of activities states delegated or planned to delegate to MCOs and asked if these types of activities met CMS s criteria to be included under capitation payments. CMS officials told us that federal review of the related managed care contracts in Indiana and New Hampshire had not been completed as of June 2019 and could not make a definitive statement. While CMS guidance requires states to carry out a range of activities to implement work requirements some of which are not eligible for federal Medicaid funds agency officials told us that CMS has not updated any procedures for the various reviewers of these costs. Further, CMS has not completed a risk assessment to determine whether current procedures for overseeing administrative costs are sufficient, and agency officials told us that there were no plans to do so. According to federal internal control standards, agencies should identify, analyze, and respond to risks related to achieving program objectives (in this case, ensuring that administrative expenditures under demonstrations are allowable and matched at the correct rate). Without identifying, assessing, and addressing the risks posed by demonstrations that may increase administrative costs, CMS may be providing federal funds for costs that are not allowed or at inappropriately high matching rates. <5. 
Conclusions> A third of states have sought approval to implement work requirements in their Medicaid programs. CMS has acknowledged that demonstrations, including those with work requirements, may increase Medicaid administrative costs and therefore overall Medicaid spending. Yet, CMS is not factoring these costs into its approval decisions, which is counter to the agency s goals of transparency and budget neutrality. Further, the agency has not taken steps to assess and respond to risks of federal funds being spent for administrative costs that are not allowable or matched at rates higher than what is appropriate, risks we found in four of the five demonstrations we reviewed. While administrative costs are a relatively small portion of states Medicaid spending, the weaknesses in CMS s oversight of these costs could take on increased importance as more states seek and receive approval to implement work requirements. <6. Recommendation for Executive Action> We are making the following three recommendations to CMS: The Administrator of CMS should require states to submit and make public projections of administrative costs when seeking approval of demonstrations, including those with work requirements and all other demonstrations. (Recommendation 1) The Administrator of CMS should account for the administrative costs of demonstrations, including those with work requirements and all other demonstrations, when assessing whether demonstrations are budget neutral. (Recommendation 2) The Administrator of CMS should assess the risks of providing federal funds for costs to administer work requirements that are not allowable and should respond to risks by improving oversight procedures, as warranted. This assessment should consider risks related to costs for information systems, beneficiary supports, and managed care. (Recommendation 3) <7. Agency Comments and Our Evaluation> We provided a draft of this report to HHS for comments and its comments are reproduced in appendix II. HHS also provided us with technical comments, which we incorporated in the report as appropriate. HHS did not concur with our recommendations. In general, HHS commented that it expects administrative costs to represent a relatively small proportion of total Medicaid spending and that its current approach to overseeing administrative costs including those incurred under Medicaid demonstrations is appropriate given the level of financial risk. HHS commented that administrative costs were approximately 5 percent of Medicaid expenditures. While these cost may represent a relatively small share of total spending, CMS projected them to be $18 billion in federal funds in fiscal year 2019 and this does not include all administrative spending. In particular, it does not include amounts paid to MCOs for administrative costs, which are likely considerable given that managed care payments now represent about half of all Medicaid spending. Further, demonstrations may represent a heightened financial risk given our finding that they can result in additional administrative costs that would not otherwise occur. Regarding our first recommendation to require states to submit and make public projections of administrative costs, HHS commented that its experience suggests that demonstration administrative costs will be a relatively small portion of total costs and therefore HHS believes making information about these costs available would provide stakeholders little to no value. 
As noted, Medicaid is a significant component of federal and state budgets. In each of the five states we reviewed, public commenters expressed concerns about the potential administrative costs of Medicaid demonstrations with work requirements, suggesting stakeholders would value information about these costs. We maintain that requiring states to make public information about administrative costs would help to ensure that demonstration proposals provide sufficient information to ensure meaningful public input. Regarding our second recommendation to account for administrative costs when assessing whether demonstrations are budget neutral, HHS again commented that its experience suggests that demonstration administrative costs will be a relatively small portion of total costs and that it believed that its current approach is appropriate for the level of financial risk. However, we found that demonstration administrative costs could be significant and HHS s current policy of not considering these costs in its assessments of budget neutrality could increase federal fiscal liability. For example, in Kentucky, we found estimated administrative costs for implementing the demonstration exceeded $270 million over about 2 years. We maintain that including administrative costs in its assessments will help HHS ensure that demonstrations are budget neutral. Regarding our third recommendation to assess and respond to risks of providing federal funds for costs to administer work requirements that are not allowable, HHS commented that (1) all states requests for federal Medicaid funding are subject to the same federal regulations and requirements; (2) the expenditures reported by states to GAO had not been reviewed against federal requirements or certified by states to be accurate and permissible; and (3) HHS believes its existing approach is appropriate for the low level of risk that administrative expenditures represent. Our findings indicate that CMS s oversight procedures which are designed to prevent state spending on costs that do not meet federal requirements have vulnerabilities, particularly given the types of administrative activities associated with work requirements. Four of the five states we reviewed were planning to seek federal funds for costs (1) that did not appear allowable, or (2) at higher matching rates than appear appropriate, and three states succeeded in gaining CMS approval to do so. We agree with HHS that CMS may also identify inappropriate expenditures during its reviews of state-reported expenditures. However, our past work has identified weaknesses in that review process. In 2018, we reported that CMS officials indicated that resource constraints have limited the agency s ability to target risk during such reviews, potentially allowing errors to go undetected. Finally, the basis for HHS s conclusion that its current approach is appropriate for the risks posed by these administrative expenditures is unclear. As we note in our report, CMS officials told us that they had not assessed whether current procedures sufficiently address risks posed by administrative costs for work requirements and had no plans to do so. We maintain that assessing these risks of providing federal funds for costs that are not allowable and improving oversight, as warranted, would help HHS to ensure the integrity of the Medicaid program. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies of this report to the Secretary of Health and Human Services, the appropriate congressional committees, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7144 or yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix III.

Appendix I: Other Beneficiary Requirements in States with Approved Medicaid Work Requirements

Eight of the nine states that received approval for work requirements, as of May 2019, also received approval under their demonstrations for other beneficiary requirements, such as requiring beneficiaries to have expenditure accounts. Some of these beneficiary requirements preceded work requirements, while others were newly introduced with the work requirements. For example, Kentucky was developing and implementing work requirements at the same time as other beneficiary requirements, such as the requirement for beneficiaries to have two expenditure accounts and make premium payments. (See table 5.)

Appendix II: Comments from the Department of Health and Human Services

Appendix III: GAO Contact and Staff Acknowledgments

<8. GAO Contact>

<9. Staff Acknowledgments> In addition to the contact named above, Susan Barnidge (Assistant Director), Russell Voth (Analyst in Charge), Linda McIver, and Matt Nattinger made key contributions to this report. Also contributing were Giselle Hicks, Drew Long, Ethiene Salgado-Rodriguez, and Emily Wilson Schwark.

Related GAO Reports

Medicaid Demonstrations: Approvals of Major Changes Need Increased Transparency. GAO-19-315. Washington, D.C.: April 17, 2019.

Medicaid: CMS Needs to Better Target Risks to Improve Oversight of Expenditures. GAO-18-564. Washington, D.C.: August 6, 2018.

Medicaid Demonstrations: Evaluations Yielded Limited Results, Underscoring Need for Changes to Federal Policies and Procedures. GAO-18-220. Washington, D.C.: January 19, 2018.

Medicaid Demonstrations: Federal Action Needed to Improve Oversight of Spending. GAO-17-312. Washington, D.C.: April 3, 2017.

Medicaid: Federal Funds Aid Eligibility IT System Changes, but Implementation Challenges Persist. GAO-15-169. Washington, D.C.: December 12, 2014.

Medicaid Demonstration Waivers: Approval Process Raises Cost Concerns and Lacks Transparency. GAO-13-384. Washington, D.C.: June 25, 2013.

Medicaid Demonstration Waivers: Recent HHS Approvals Continue to Raise Cost and Oversight Concerns. GAO-08-87. Washington, D.C.: January 31, 2008.

Why GAO Did This Study
Section 1115 demonstrations are a significant component of Medicaid spending and affect the care of millions of low-income and medically needy individuals. In 2018, CMS announced a new policy allowing states to test work requirements under demonstrations and soon after began approving such demonstrations. Implementing work requirements can involve various administrative activities, not all of which are eligible for federal funds.
GAO was asked to examine the administrative costs of demonstrations with work requirements. Among other things, this report examines (1) states' estimates of costs of administering work requirements in selected states, and (2) CMS's oversight of these costs. GAO examined the costs of administering work requirements in the first five states with approved demonstrations. GAO also reviewed documentation for these states' demonstrations, and interviewed state and federal Medicaid officials. Additionally, GAO assessed CMS's policies and procedures against federal internal control standards.
What GAO Found
Medicaid demonstrations enable states to test new approaches to provide Medicaid coverage and services. Since January 2018, the Centers for Medicare & Medicaid Services (CMS) has approved nine states' demonstrations that require beneficiaries to work or participate in other activities, such as training, in order to maintain Medicaid eligibility. The first five states that received CMS approval for work requirements reported a range of administrative activities to implement these requirements.
These five states provided GAO with estimates of their demonstrations' administrative costs, which varied, ranging from under $10 million to over $250 million. Factors such as differences in changes to information technology systems and numbers of beneficiaries subject to the requirements may have contributed to the variation. The estimates do not include all costs, such as ongoing costs states expect to incur throughout the demonstration.
GAO found weaknesses in CMS's oversight of the administrative costs of demonstrations with work requirements.
No consideration of administrative costs during approval. GAO found that CMS does not require states to provide projections of administrative costs when requesting demonstration approval. Thus, the cost of administering demonstrations, including those with work requirements, is not transparent to the public or included in CMS's assessments of whether a demonstration is budget neutral—that is, that federal spending will be no higher under the demonstration than it would have been without it.
Current procedures may be insufficient to ensure that costs are allowable and matched at the correct rate. GAO found that three of the five states received CMS approval for federal funds—in one case, tens of millions of dollars—for administrative costs that did not appear allowable or at higher matching rates than appeared appropriate per CMS guidance. The agency has not assessed the sufficiency of its procedures for overseeing administrative costs since it began approving demonstrations with work requirements.
What GAO Recommends
GAO makes three recommendations, including that CMS (1) require states to submit projections of administrative costs with demonstration proposals, and (2) assess risks of providing federal funds that are not allowable to administer work requirements and improve oversight procedures, as warranted. CMS did not concur with the recommendations and stated that its procedures are sufficient given the level of risk. GAO maintains that the recommendations are warranted as discussed in this report. |
<1. Background>

The Mexico City Policy and the PLGHA

The Mexico City Policy, which the U.S. government announced at the United Nations Conference on Population in Mexico City in 1984, required foreign NGOs to agree they would not, as a condition for receiving U.S. assistance for family planning, perform or actively promote abortion as a method of family planning. As shown in figure 1, subsequent administrations have rescinded or reinstated the policy through executive branch action, typically through presidential memoranda. In a January 2017 Presidential Memorandum, the Trump Administration reinstated and expanded the Mexico City Policy, directing the Secretary of State in coordination with the Secretary of Health and Human Services to implement a plan to extend the requirements of the reinstated policy to all global health assistance furnished by all departments or agencies to the extent allowable by law. Consequently, the policy, later renamed PLGHA, applies to billions of dollars in annual U.S. global health assistance, such as HIV/AIDS, maternal and child health, and malaria, rather than only family planning and reproductive health assistance, which received about $560 million in GHP account funding in fiscal year 2018. State reported that USAID, State, and DOD began applying the PLGHA policy as of May 15, 2017, and HHS applied the policy as of May 31, 2017.

The affected departments and agencies applied the policy to: (1) All existing grants and cooperative agreements that provide global health assistance that received new funding after May 2017. Agencies established a PLGHA standard provision for inclusion in relevant grants and cooperative agreements for global health assistance requiring foreign NGOs to agree that, during the term of the award, they would not perform or actively promote abortion as a method of family planning in foreign countries, or provide financial support to any foreign NGO that does. Agency officials stated that after the policy was implemented, when additional funds were to be obligated to relevant awards with foreign NGOs, these organizations would be required to accept the PLGHA terms and conditions to receive these additional funds, or decline the award. (2) All new grants and cooperative agreements that provide global health assistance awarded after May 2017, according to a State report.

The PLGHA terms and conditions apply to foreign NGOs that receive global health assistance prime awards or sub-awards. Prime awardees, including U.S. NGOs, may not provide assistance under the awards to any foreign NGOs that perform or actively promote abortion as a method of family planning, are required to include the PLGHA standard provision in sub-awards to foreign NGOs, and may be held liable for the sub-awardee's failure to comply with the conditions of the policy. According to UN reporting, the legality of abortion varies among countries receiving U.S. global health assistance. This may result in some countries legally permitting abortion services that are not permitted under the PLGHA policy, according to NGO representatives we met with. The representatives noted that under these circumstances, foreign NGOs would be prohibited under the policy from providing such services, even with non-U.S. funds, as a condition of receiving U.S. global health assistance. Additionally, in March 2019, the Secretary of State clarified that foreign NGOs that accept U.S.
global health assistance may not provide financial support, with any source of funds and for any purpose, to another foreign NGO that performs, or actively promotes, abortion as a method of family planning.

According to agency officials, the PLGHA terms and conditions do not apply under the following circumstances:

Global health contracts. State reported that the executive branch is taking steps to develop a PLGHA contract clause through a formal rule-making process required to revise the Federal Acquisition Regulation.

Awards funded out of the Food for Peace program.

Water Supply and Sanitation assistance funded from the Development Assistance account.

Assistance provided directly by U.S.-based organizations. The PLGHA policy does apply, however, to sub-awards made by U.S.-based organizations to foreign NGOs.

Assistance provided directly to national governments, such as ministries of health.

Assistance to multilateral organizations. This includes but is not limited to U.S. global health funds provided to the Global Fund to Fight AIDS, Tuberculosis, and Malaria (the Global Fund) and the Joint United Nations Program on HIV/AIDS (UNAIDS).

In a May 2017 briefing on the PLGHA policy, State noted that humanitarian assistance, including State Department migration and refugee assistance activities, USAID disaster and humanitarian relief activities, and U.S. Department of Defense disaster and humanitarian relief, were also all excluded from the policy. State also noted that the Secretary of State, in consultation with the Secretary of HHS, may authorize additional case-by-case exemptions to the policy.

<2. Funding for U.S. Global Health Programs Accounts in Fiscal Year 2018>

Congress provided about $8.7 billion for the Global Health Programs account (GHP) in fiscal year 2018, most of which supported HIV/AIDS assistance managed by State and implemented through transfers of funds to several agencies and contributions to multilateral organizations (see table 1). Because of the various exclusions described above, not all global health funds are subject to the PLGHA policy. In particular, State's fiscal year 2018 contribution of $1.35 billion to the Global Fund is not subject to the policy because it is a multilateral institution.

U.S. Agencies Applied the PLGHA Policy to Over 1,300 Awards as of the End of Fiscal Year 2018

USAID and CDC Had the Most Awards and Planned Funds Subject to the PLGHA Policy

USAID and CDC had the most global health assistance awards subject to the PLGHA policy, representing more planned funding than other agencies (see table 2). In total, U.S. agencies reported that they applied the PLGHA policy to 1,309 prime awards active in May 2017 or made through September 2018. There were 761 active awards when agencies implemented the policy in May 2017, and 548 new awards that began after they implemented the policy. Most awards started in fiscal year 2016 or later, although some started earlier. Average award duration varied among agencies. The estimated total value of these 1,309 awards was almost $29 billion across multiple fiscal years, of which about $12 billion was planned funding that had not yet been obligated as of September 30, 2018, and is subject to the PLGHA policy upon acceptance of the PLGHA terms and conditions. USAID awards represented 50 percent of planned funds that were not yet obligated for awards subject to the PLGHA policy, while CDC awards represented 46 percent of such funds.
Other HHS component agencies awards subject to the policy combined represented almost 4 percent of planned funds that were not yet obligated. DOD and State awards represented less than 1 percent of these funds. State s awards were relatively numerous but shorter-term and of smaller dollar value than other agencies awards. <3. The Majority of Estimated Planned Award Funding Subject to PLGHA Supported HIV/AIDS Assistance and Was Directed to Countries in Africa> Agencies reported that, as of September 30, 2018, over $8 billion of the more than $12 billion in estimated planned funding (over 66 percent) for awards subject to PLGHA that were active between May 2017 and September 2018 was for HIV/AIDS assistance (see table 3). All DOD and State planned funding, and almost all HHS planned funding, supported HIV/AIDS assistance. USAID reported that its planned funding was distributed across several global health areas including HIV/AIDS, family planning and reproductive health, maternal and child health, and tuberculosis. Agencies reported that over $8 billion of the more than $12 billion (over 66 percent) of the estimated planned funding for awards subject to PLGHA that were active between May 2017 and September 2018 was for awards in Africa (see table 4). Awards in Asia accounted for the second highest level of planned funding for an individual region at almost $600 million (5 percent). Global awards implemented in more than one region represented about $3 billion in planned funding (26 percent). By global health assistance area and region, HIV/AIDS assistance in Africa accounted for the most planned funding that had not yet been obligated for awards subject to PLGHA: over $6 billion of about $12 billion (52 percent) (see table 5). The next largest category was global HIV/AIDS assistance awards, which accounted for over $1 billion (13 percent). The top 10 countries receiving the most estimated planned funding that had not yet been obligated under awards subject to PLGHA accounted for over $6 billion of more than $12 billion (54 percent) (see table 6). All 10 countries are in sub-Saharan Africa. Of these countries, South Africa had the most planned funding remaining (over $2.4 billion) that was subject to the policy. See appendix II for more details on the locations of awards subject to PLGHA. Agencies Identified 54 Prime and Sub- Awards in which NGOs Declined to Accept PLGHA Conditions <4. USAID Awarded All but One of the Projects in which NGOs Declined to Accept PLGHA Conditions> USAID identified 53 awards six prime awards and 47 sub-awards in which NGOs declined to accept PLGHA terms and conditions. CDC identified one prime award in which an NGO declined to accept the policy s terms and conditions. These prime and sub-awards had about $153 million in estimated planned funding remaining that was not obligated at the end of fiscal year 2018 (see table 7). DOD and State did not identify any declinations. The remaining planned funding that was not obligated as of September 30, 2018, represents an estimate of the amount that had been planned for the awards but which was not obligated under these awards because awardees declined to accept the terms and conditions of the PLGHA policy, according to the agencies. <5. 
USAID Identified Six Prime Awards in Which NGOs Declined to Accept PLGHA Terms and Conditions> USAID identified six prime awards in which NGOs declined to accept PLGHA terms and conditions resulting in an estimated $94 million in planned funding that was not obligated as of September 30, 2018. These six prime awards, presented in table 8, supported different global health assistance areas. Three of the awards were global in scope, two provided assistance to India, and one provided assistance to Zimbabwe. The two largest of the six prime awards declined were global awards to Marie Stopes International (MSI) and International Planned Parenthood Federation (IPPF), both of which publicly stated that they could not meet the conditions of PLGHA because abortion services or referrals are part of reproductive health care services they provide and a right to which their patients are entitled. Together, these two awards had about $79 million remaining in planned funding that was not obligated as of September 30, 2018. The primary objective of these two awards was to increase access to and use of family planning products and services, although the award to MSI also supported maternal and child health and HIV/AIDS and the IPPF award supported HIV/AIDS in addition to family planning and reproductive health, according to information provided by USAID. According to MSI and IPPF representatives, these two awards both included, among other activities, mobile family planning and reproductive health outreach activities that reached underserved rural populations in multiple countries. While MSI and IPPF were able to obtain some funding from other donors when the USAID awards were suspended, the additional funds fell far short of the funds provided by USAID, according to the organizations representatives, resulting in reductions in family planning services they provided to recipient countries. <6. CDC Identified One Prime Award for Which the NGO Declined to Accept PLGHA Conditions> CDC identified one prime award in which an NGO declined to accept the PLGHA terms and conditions. According to CDC, this award had about $8.4 million remaining of a 5-year, $10.5 million award ceiling for delivery of HIV services in sexual and reproductive health clinics and in confidential clinics for commercial sex workers in Ethiopia. <7. USAID Identified 47 Sub- Awards in Which NGOs Declined to Accept PLGHA Conditions> USAID identified 47 global health sub-awards in which foreign NGOs declined to accept the PLGHA policy s terms and conditions and thus ceased receiving U.S. funding under those awards following implementation of the PLGHA policy (see table 9). The planned funding that was not obligated for these sub-awards amounted to about $51 million, as of September 30, 2018. As shown in table 9, sub-awards with NGOs that declined to accept the PLGHA terms and conditions involved multiple global health assistance areas. Family planning and reproductive health represented the largest share of planned sub-award value involving declinations, followed by awards supporting multiple global health areas and HIV/AIDS. Sub-awards involving declinations also addressed maternal and child health, tuberculosis, and nutrition assistance. According to data provided by USAID, sub-awards in which NGOs declined the PLGHA terms and conditions occurred in multiple regions, but primarily in countries in Africa. 
USAID identified 32 sub-awards implemented in African countries involving NGOs that declined the PLGHA terms and conditions following implementation of the policy. The estimated total value of these sub-awards was about $56 million, of which more than half (about $32 million) remained as planned funding that was not obligated as of September 30, 2018 (see table 10). Of the 47 sub-awards for which the PLGHA terms and conditions were declined, 26 were declined by affiliates of either IPPF or MSI. The estimated total award value of these 26 sub-awards amounted to over half of the value of the 47 sub-awards (see figure 2). Four countries had the largest estimated amount of sub-award funds declined by NGOs, with at least $8 million in planned funding that was not obligated as of September 30, 2018 (see table 11). For example, two declined sub-awards implemented in Senegal had a combined $9.7 million in planned funding that was not obligated as of September 30, 2018. These two sub-awards were implemented by an MSI affiliate that, among other services, used the USAID funds to operate mobile family planning clinics for beneficiaries in rural, underserved areas. According to MSI representatives, these sub-awards did not involve abortion services, which MSI indicated are illegal in Senegal. However, the NGO declined the sub-award because of its affiliation with MSI, according to the representatives. Bangladesh had the most sub-awards in which NGOs declined the PLGHA terms and conditions with five. Total planned funding that was not obligated for these five sub-awards amounted to about $9 million as of September 30, 2018. These awards supported multiple areas of global health assistance including family planning and reproductive health, tuberculosis, nutrition, and maternal and child health. Agency Comments We provided a draft of this report to DOD, HHS, State, and USAID, and for review and comment. In their written comments, reproduced in appendix III, USAID stated that it found our estimates of the number and value of awards subject to PLGHA and those in which NGOs declined to accept PLGHA the terms and conditions to be reasonable given the data available. USAID also elaborated on limitations with available data, which we believe are consistent with the data limitations we describe in this report. DOD, HHS, and State did not provide written comments. In addition, HHS, State, and USAID provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretaries of Defense, Health and Human Services, and State, and the Administrator of the U.S. Agency for International Development. In addition, the report is available at no charge on the GAO website at http://www.gao.gov . If you or your staff have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology Our objectives were to identify (1) global health assistance awards that U.S. agencies determined to be subject to the terms and conditions of the U.S. 
government s Protecting Life in Global Health Assistance (PLGHA) policy requiring foreign non-governmental organizations (NGOs) to agree that they would not perform or actively promote abortions as a method of family planning, and (2) planned funding for awards involving NGOs that declined to accept the terms and conditions of this policy. To identify the global health assistance awards subject to the terms and conditions of the PLGHA policy, we obtained data from the Departments of State (State), Health and Human Services (HHS), and Defense (DOD), and the U.S. Agency for International Development (USAID) on all relevant awards active when the policy was first implemented in May 2017 or awarded through September 30, 2018. We identified the relevant agencies based on a February 2018 State report on the initial implementation of PLGHA and discussions with each agency to identify affected component agencies. Component agencies within HHS that identified awards subject to the PLGHA included the Centers for Disease Control and Prevention (CDC), the National Institutes of Health, the Health Resources and Services Administration, and Substance Abuse and Mental Health Services. Within DOD, the Department of the Army and the DOD HIV/AIDS Prevention Program identified awards subject to the policy. To obtain information that was as complete and consistent as possible from each relevant agency on all awards subject to the PLGHA terms and conditions, we created a data collection instrument. This instrument asked the agencies to identify all awards that were subject to the PLGHA, that were either active in May 2017 when the PLGHA policy was first implemented or that were new awards through the end of fiscal year 2018 (September 30, 2018). We analyzed the responses to our data collection instrument to describe the number and estimated total value of the awards, the amount obligated as of September 30, 2018 and the estimated amount of planned funding that was not yet obligated for these awards, the implementing agency, the type of global health assistance, and the recipient countries. Agencies defined estimated total award value as either award ceilings or total award amounts for the life of the award including both funding that recipient organizations may have obligated prior to the PLGHA policy as well as funding that organizations have not yet received but may receive in future years. We asked the agencies to categorize the type of global health assistance based on the Foreign Assistance Standardized Program Structure and Definitions, which State updated in 2016. During the development of this data collection instrument, we discussed drafts with each of the agencies and made modifications as appropriate. We provided definitions for each data element requested that allowed for variations in the ways these agencies collect and record data on awards. To estimate the value of planned funds not yet obligated and therefore subject to the PLGHA policy, we subtracted the obligated amount from the estimated total award value of each award. While this calculation provides an estimate of the funds subject to the PLGHA, it is limited by two factors. First, while planned award funding that was not already obligated before May 2017 when PLGHA was first implemented was made subject to the PLGHA policy, agencies did not have obligations data as of May 2017 readily available but were able to readily identify obligations as of September 30, 2018. 
Therefore, information provided on planned funding that was not yet obligated as of September 30, 2018, may not capture all of the funding made subject to the PLGHA policy because it does not include obligations between May 2017 and September 30, 2018, for NGOs that accepted PLGHA terms and conditions. Second, estimates of total award value can change over time, according to agency officials. For example, awards could have extensions with additional funding not yet reflected in the estimated total award values agencies provided us. In addition, the estimated total award values the agencies provided could be based on a maximum or ceiling for some awards, which may overstate actual amounts. To identify the prime and sub-awards active in May 2017 that involved NGOs that declined the PLGHA terms and conditions, we developed additional data collection instruments one for prime awards between agencies and NGOs and one for sub-awards between prime awardees and NGOs to request information on these awards from the relevant agencies. We followed the same process described above to develop these two additional instruments to identify estimated total value of the awards, obligated amounts as of September 30, 2018, the implementing agency, the type of global health assistance, and the recipient countries. USAID identified 53 declined prime or sub-awards and CDC identified one. For these agencies, identifying these awards involved contacting staff based in overseas posts. The other agencies reported to us that they had no awards in which NGOs declined the PLGHA terms and conditions. A USAID official also noted that the sub-award amounts they provided to us could vary from year to year, which would affect the amounts of remaining planned funding that was not obligated as of September 30, 2018. Nevertheless, we relied on these amounts to estimate the amount of planned funding that was not obligated under these awards as of the end of fiscal year 2018 because the NGOs declined to accept the PLGHA terms and conditions. Efforts taken by prime awardees to replace declined sub-awards were not part of our review. In addition to meeting and corresponding with USAID and CDC officials to discuss awards involving declinations, we interviewed representatives of Marie Stopes International (MSI) and International Planned Parenthood Federation (IPPF) two prime awardees that publicly declined to accept the terms and conditions of the PLGHA policy. These two NGOs declined the two largest of the six prime awards declined and their local affiliates were implementers of many of the sub-awards that were declined. We discussed with MSI and IPPF the characteristics of these two awards and the accuracy of USAID s data provided to us on them. We examined the reliability of the data on awards identified by the agencies through testing for logical assumptions such as whether award start dates preceded their end dates, and whether an award s estimated total value met or exceeded the total amount of funding that had been obligated to it. In addition, we met with agency officials to discuss and correct any discrepancies in the award data they provided. However, we did not independently verify the awards identified or the funds associated with each award. Overall, we found the data on awards subject to the PLGHA policy and in which NGOs declined the terms and conditions of the policy to be sufficiently reliable for the purposes of delineating the agencies, assistance areas, countries, estimated total value of awards, and obligations. 
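The planned-funding estimate described in this appendix is a straightforward subtraction, shown below as a minimal sketch. The award records, field names, and dollar amounts in this example are hypothetical illustrations, not actual agency data.

# Minimal sketch of the estimate described above: planned funding not yet
# obligated = estimated total award value - obligations as of Sept. 30, 2018.
# The award records below are hypothetical; they are not agency data.

awards = [
    {"agency": "USAID", "estimated_total_value": 50_000_000, "obligated_sep_30_2018": 20_000_000},
    {"agency": "CDC", "estimated_total_value": 10_500_000, "obligated_sep_30_2018": 2_100_000},
]

def planned_not_obligated(award):
    # Note the limitations discussed above: estimated total value may be a
    # ceiling (overstating amounts), and obligations made between May 2017
    # and September 30, 2018, are not separately captured.
    return award["estimated_total_value"] - award["obligated_sep_30_2018"]

total = sum(planned_not_obligated(a) for a in awards)
print(f"Estimated planned funding not yet obligated: ${total:,}")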
As noted earlier, we also calculated the amounts of planned funding that were not obligated as of September 30, 2018, to estimate the amount of funding subject to the policy. We conducted this performance audit from April 2018 to March 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Awards Subject to the Protecting Life in Global Health Assistance (PLGHA) Policy by Location Global health awards that agencies identified as subject to the PLGHA terms and conditions amounted to almost $29 billion in estimated total award value. This amount includes funding that agencies had obligated before implementing the PLGHA policy in May 2017 as well as funding across multiple fiscal years and for potential award extensions. Agencies reported that about $12 billion in funding was not yet obligated as of September 30, 2018. Award funding included assistance to specific countries, as well as awards that were regional or global in scope (see table 12). Appendix III: Comments from the U.S. Agency for International Development Appendix IV: GAO Contacts and Staff Acknowledgments <8. GAO Contact Staff Acknowledgements> David Gootnick (202) 512-3149 or gootnickd@gao.gov In addition to the individual named above, Leslie Holen (Assistant Director), Howard Cott, Martin de Alteriis, Kelsey Griffiths, Christopher Keblitis, Andrew Kurtzman, Michael McAtee, Aldo Salerno, Fatima Sharif, and Alexander Welsh made significant contributions to this report. | Why GAO Did This Study
The United States is the world's largest donor of global health assistance. Congress provided about $8.7 billion for the Global Health Programs (GHP) account in fiscal year 2018. In 2017, the President reinstated and expanded a policy, which now requires foreign NGOs to agree that, as a condition of receiving global health assistance, they will not perform or actively promote abortion as a method of family planning or provide financial support during the award term to other foreign NGOs that conduct such activities. The Reagan administration first implemented this policy, known as the Mexico City Policy, in 1984, and subsequent administrations have rescinded and reinstated it. The Mexico City Policy initially applied only to family planning and reproductive health assistance, which received about $560 million of GHP funds in fiscal year 2018. Upon reinstating the policy, the Trump Administration renamed it PLGHA and applied it to all global health assistance to the extent allowable by law. GAO was asked to review the implementation of the PLGHA policy. This report identifies (1) global health assistance awards that U.S. agencies determined to be subject to the U.S. government's PLGHA policy requiring foreign NGOs to agree that they would not perform or actively promote abortion as a method of family planning, and (2) planned funding for awards involving NGOs that declined to accept the terms and conditions of this policy. GAO analyzed data provided by U.S. agencies of awards subject to the PLGHA policy and awards in which NGOs declined to accept the terms and conditions of this policy.
What GAO Found
U.S. agencies reported to GAO that from May 2017 through fiscal year 2018, they applied the Protecting Life in Global Health Assistance (PLGHA) policy to over 1,300 global health awards. The policy's restrictions on performing or actively promoting abortion as a method of family planning applied to active awards that received new funding after the policy was implemented, and all funding for new awards made after May 2017. As of September 30, 2018, about $12 billion in estimated planned award funding was subject to the policy. The U.S. Agency for International Development (USAID), with over $6 billion, and the Centers for Disease Control and Prevention (CDC), with over $5 billion, awarded about 96 percent of this amount. Agencies implemented these awards across multiple geographic regions and global health assistance areas. About two-thirds of estimated planned funding subject to the policy supported HIV/AIDS assistance, while the remaining third supported other global health areas, such as maternal and child health, and family planning and reproductive health. Over two-thirds of planned funding subject to the policy was for awards in Africa.
U.S. agencies identified seven prime awards and 47 sub-awards in which non-governmental organizations (NGOs) declined to accept the terms and conditions of the PLGHA policy, and these awards had about $153 million remaining in estimated planned funding not obligated as of September 30, 2018. The seven prime awards that were declined included six USAID awards and one CDC award and amounted to about $102 million of the $153 million in estimated planned funding that was not obligated. Marie Stopes International and the International Planned Parenthood Federation declined the two largest of these awards, resulting in about $79 million in planned funding that was not obligated. These two awards included, among other activities, mobile family planning and reproductive health outreach activities to underserved, rural populations in multiple countries. USAID identified all of the 47 sub-awards that were declined, which had a total of about $51 million in planned funds that were not obligated. Thirty-two of the 47 sub-awards were intended for Africa.
[Figure (caption truncated): ... by Global Health Assistance Area. Source: GAO analysis of agency reported data. GAO-20-347]
<1. Background>

MDA is responsible for developing a number of systems, known as elements, with the purpose of defending against ballistic missile attacks. MDA's mission is to combine these elements into an integrated system-of-systems, known as the Ballistic Missile Defense System. The goal of the BMDS is to combine the abilities of two or more elements to achieve objectives that would not have been possible for any individual element. These emergent abilities are known as integrated capabilities or BMDS-level capabilities. Table 1 provides a list and description of elements included in our review.

<1.1. MDA's Acquisition Flexibilities and Steps to Improve Traceability and Oversight>

When MDA was established in 2002, it was granted exceptional flexibilities to set requirements and manage the acquisition of the BMDS, developed as a single program, that allow MDA to expedite the fielding of assets and integrated ballistic missile defense capabilities. These flexibilities allow MDA to diverge from DOD's traditional acquisition life cycle and defer the application of certain acquisition policies and laws designed to facilitate oversight and accountability until a mature capability is ready to be handed over to a military service for production and operation. Some of the laws and policies include such things as: obtaining the approval of a higher-level acquisition executive before making changes to an approved baseline, reporting certain increases in unit cost measured from the original or current baseline, obtaining an independent life-cycle cost estimate prior to beginning system development and/or production and deployment, and regularly providing detailed program status information to Congress, including specific costs, in Selected Acquisition Reports.

In response to concerns related to oversight, Congress and DOD have taken a number of actions. For example, Congress enacted legislation in 2008 requiring MDA to establish cost, schedule, and performance baselines, starting points against which to measure progress, for each element that has entered the equivalent of system development or is being produced or acquired for operational fielding. MDA reported its newly established baselines to Congress for the first time in its June 2010 BMDS Accountability Report. Since that time, Congress has required more details for the content of these baselines. Additionally, to enhance oversight of the information provided in the BMDS Accountability Report, MDA continues to incorporate suggestions and recommendations from GAO. However, not all of our recommendations have been fully implemented. For example, in April 2013, we recommended that MDA stabilize its acquisition baselines so that meaningful comparisons can be made over time to support oversight. MDA stated that the information presented in the BAR is sufficient; however, we continue to find that the lack of stable baselines makes comparison difficult and, in some instances, impossible.

<1.2. MDA's Process for Delivering Capabilities>

MDA develops capabilities and then delivers them to the military services. Using this process, MDA declares an asset or capability ready for delivery for potential operational use while communicating the capabilities and limitations of the asset. Representatives from the receiving military service or combatant command then have the ability to assess this evidence and decide whether to accept the new capability.
Because the military services conduct minimal missile defense testing of their own, this process is one of the only ways to convey vital performance information. The accuracy of this information is especially important as it informs training materials, doctrine, and deployment decisions and provides evidence supporting these assertions. MDA supports its assertions of capabilities with evidence from three sources: models and simulations, ground testing, and flight testing. Ground tests and models and simulations permit more flexibility in scheduling and design, but both are dependent on logistically more difficult flight tests to provide real-world performance data. As a result, MDA s ability to organize, conduct, and evaluate flight tests is one of the most important factors in whether MDA is able to adhere to its schedule and declare an asset or capability ready for delivery. <1.3. MDA s Contracting Practices> Though MDA has flexibilities in managing the acquisition process, it must follow the same contracting regulations that apply to DOD, including the Federal Acquisition Regulation and the Defense Federal Acquisition Regulation Supplement (DFARS). For this report, we reviewed MDA s use of a particular type of contract action that authorizes a contractor to begin work before contract terms, specifications, or price have been agreed upon. These undefinitized contract actions are permitted by the DFARS, with certain limitations. Undefinitized contract actions are generally used when negotiation of a definitive contract action is not possible in sufficient time to meet the government s requirements and the government s interest demands that the contractor be given a binding commitment so that contract performance can begin immediately. Under the DFARS, undefinitized contract actions must include a specific not-to- exceed price. Once the action s terms, specifications, and price have been agreed upon or determined, a process known as definitization, the contract action converts to a definitive contract. Under the DFARS, undefinitized contract actions must contain definitization schedules that provide for definitization by the earlier of (1) 180 days after issuance or (2) the date on which the amount of funds obligated under the action is more than 50 percent of the not-to-exceed price. Once the government has received a qualifying proposal from the contractor, however, the government can extend the undefinitized period another 180 days. Similarly, the government may obligate up to 75 percent of the not-to-exceed price, if the contractor submits the qualifying proposal before 50 percent of the not-to-exceed price has been obligated. <1.4. MDA s Regional Efforts in Europe and Korea> DOD s regional Ballistic Missile Defense (BMD) effort consists of a number of specific weapon systems or elements that compose the BMD system as a whole. According to DOD, various versions of these weapon systems are being deployed in Europe, Korea and other regions. The European effort known as the European Phased Adaptive Approach (EPAA) integrates the upgrades to Aegis BMD Weapon System, Aegis BMD interceptors, C2BMC and sensors, and was originally planned for delivery in four phases. Additionally, each phase is designed to rely on increasingly capable missiles, sensors, command and control, and integration to defend Europe against increasingly longer range ballistic missiles. 
DOD delivered the first phase, for short- and medium-range defense of Europe, in December 2011, and delivered the second phase, for medium-range missiles, in December 2015. Its efforts for both of these phases were also characterized by schedule delays, technical challenges that led to reductions in the scope of capability delivered, as well as testing reductions, which reduced confidence in capabilities that had been delivered. According to its capability plans, the purpose of EPAA Phase 3 is to provide a robust Intermediate-Range Ballistic Missile (IRBM) defense. Figure 1 depicts the weapon systems that DOD deployed in support of the European Phased Adaptive Approach capability.

As we have previously reported, MDA encountered numerous challenges in an effort to meet its original EPAA goals, and we have made several recommendations to improve MDA's management of its integrated capability efforts, including EPAA, to reduce risk for individual elements and to improve testing practices overall. For instance:

In January 2011, we recommended that DOD develop life-cycle cost estimates and establish an integrated schedule for EPAA. DOD partially concurred and concurred, respectively, with the recommendations. An independent life-cycle cost estimate was prepared; however, an integrated schedule with sufficient detail was never completed.

In April 2012, we recommended that DOD assess the extent to which the dates announced by the President in 2009 are contributing to concurrency and recommend schedule adjustments where significant benefits can be obtained. DOD concurred with this recommendation; however, it never included a specific assessment of the extent to which capability delivery dates for the European Phased Adaptive Approach announced by the President in 2009 were contributing to concurrency; instead, it asserts that BMDS technology development is fundamentally driven by completion of technical milestones, not schedule declarations.

In May 2017, we recommended that MDA address deficiencies in its testing scheduling policy to better align it with best practices for scheduling. DOD did not concur with this recommendation. Consequently, the department continues to allow MDA to schedule and plan its test program without risk analyses or assigning resources to each test. Unless the department takes action to address these challenges, it should expect MDA to fall further behind in its test program.

In fiscal year 2018, MDA focused additional regional capability efforts on the Korean Peninsula. This new effort was requested by the United States Forces Korea in December 2017 to counter North Korean ballistic missiles. Capabilities for the Korean effort are currently planned for delivery between February 2018 and April 2021, and are based on element-level upgrades as well as integration enhancements between THAAD and Patriot.

<2. MDA Made Progress Delivering Capabilities and Assets and Conducting Tests but Fell Short of Its 2018 Goals>

<2.1. MDA Delivered Several Important Capabilities According to Its Planned Baseline, but Did Not Meet Most of Its Asset Delivery Goals>

In December 2017, MDA achieved a significant asset delivery milestone, completing the deployment of 44 operational ground-based interceptors (GBI). In deploying these interceptors, MDA also fulfilled a goal set by the Secretary of Defense in March 2013 to increase the inventory of GMD interceptors from 30 to 44 by the end of December 2017.
Although MDA achieved this goal, it did not deliver two of the four GBIs planned for fiscal year 2018. One of the GBIs is intended for use in an upcoming flight test that was delayed to fiscal year 2019. The other delayed GBI delivery was the result of the boost vehicle contractor mishandling the booster avionics module a critical component that houses the flight computer and navigation systems. The contractor is working on replacing the component but the rework has delayed delivery of the final GBI to fiscal year 2020. Other on-time capability deliveries included the release of new software versions for several major BMDS elements, including C2BMC (Spiral 8.2- 3), BOA 6.1, THAAD (THAAD 3.0), AN/TPY-2 (CX 3.0), and GMD (GS 7A). Another expected software release was Aegis Weapon System (BL 9.2), but that was delayed to at least March 2019 to accommodate verification and validation of models and simulations and to accompany the delivery of the Aegis BMD SM-3 Block IIA. In terms of asset deliveries, specifically interceptors used to counter enemy missiles, MDA successfully delivered all 53 THAAD interceptors specified in the baseline for fiscal year 2018, as well as an additional five interceptors the delivery of which had been delayed from the previous year. For a summary of MDA s asset delivery status for fiscal year 2018, see table 2. Although MDA made a number of deliveries, including all planned THAAD interceptors, it did not meet its fiscal year 2018 asset delivery goals due to a variety of factors. The Aegis BMD SM-3 Block IB program, which received full production authority early in fiscal year 2018 after years of delays, delivered 12 of 36 planned interceptors in fiscal year 2018. This shortfall was due to the discovery of a parts quality issue that necessitated suspending deliveries until MDA could complete an investigation of the issue s impact on the interceptor s performance. In addition, the Aegis BMD SM-3 Block IIA program delivered one of four planned test interceptors due to a flight test failure early in the year suspending further deliveries pending completion of a failure review board. Moreover, according to MDA officials, construction contractor performance issues will result in the Aegis Ashore Missile Defense System Complex Poland not being delivered until at least 18 months after the planned December 2018 date. As discussed later in this report, this facility is central to MDA s plans for the EPAA Phase 3, such that a delay in the completion of this facility resulted in a delay in the planned EPAA Phase 3 delivery to the warfighter. <2.2. MDA Conducted Seven of Eleven Flight Tests Planned for Fiscal Year 2018, One of Which Failed> MDA conducted seven fiscal year 2018 flight tests as planned, and during one of those seven the interceptor failed. According to MDA s Integrated Master Test Plan, MDA scheduled eleven flight tests of the systems included in our review. MDA s ability to adhere to its flight test schedule for fiscal year 2018 was hampered by several issues, including technical challenges, test failures requiring new tests to be inserted into the schedule, and range and target availability. Of the four tests not conducted, MDA delayed two to future fiscal years, and deleted two, with their objectives planned to be mostly fulfilled by separate events. Table 3 highlights MDA s fiscal year 2018 flight tests. MDA also added several test events to its schedule over the course of fiscal year 2018. They are listed below in table 4. 
The two most significant flight tests scheduled for fiscal year 2018 were delayed into fiscal year 2019. Specifically, FTG-11, GMD s first salvo test (launching multiple interceptors at a single target), was delayed until the second quarter of fiscal year 2019 to accommodate other BMDS testing priorities while GMD fixed software issues uncovered during pre-test planning. In addition, FTO-03 Event 1, a test designed to assess the Aegis BMD SM-3 Block IIA capability against an IRBM was to be the first (and only) operational test of the EPAA Phase 3 architecture before MDA delivered the capability. This test was delayed to accommodate the demand for range and test assets following the insertion of a new test into the schedule. <3. Mid-Year Budget Changes Significantly Affected MDA s Future-Year Plans> Fiscal year 2018 legislation expanded and accelerated several MDA programs. In December 2017, Congress passed and the President signed into law the Department of Defense Missile Defeat and Defense Enhancements Appropriations Act, 2018 (MDDE), which increased missile defense appropriations. The MDDE provided approximately $2 billion in appropriations for missile defense. MDDE provided funds in support of plans that would expand and accelerate several missile defense programs beyond the agency s previous baselines. According to MDA, the administration directed the Secretary of Defense to develop options for accelerating missile defense capabilities in response to North Korea flight testing a new intercontinental ballistic missile in July 2017. According to MDA, it collaborated with Office of the Secretary of Defense and the Joint Chiefs of Staff to identify programs and capabilities that could be accelerated and delivered within the current Future Years Defense Plan and directly address the North Korean missile threat. DOD then took those options back to the administration to finalize the MDDE plan, which was subsequently presented to Congress. These plans most significantly affected the GMD program and the Aegis BMD SM-3 Block IIA. Under the plans and with the funds provided by MDDE, the GMD program will increase its inventory from 44 GBIs to 64 GBIs by 2023. Each of these new interceptors will be equipped with the Redesigned Kill Vehicle (RKV), accelerating the latter program s schedule by approximately one year. MDA also intends to use $451 million from MDDE to procure 16 additional Aegis BMD SM-3 Block IIA interceptors. The Aegis BMD SM-3 Block IIA program was still in development at the time, and these funds represented the first time Congress appropriated procurement funds, and not research and development, for the program. <3.1. Programs Accelerated and Expanded by the Fiscal Year 2018 Missile Defeat and Defense Enhancement Amendments Subsequently Experienced Challenges> The RKV program, in part to support the accelerated schedule, adopted a new program schedule that required concurrency in some areas. As we previously reported, the original RKV strategy avoided concurrency by aligning production decisions with flight testing. However, to accommodate the newly accelerated schedule, the program began procuring some components before completing qualification testing. Under this new plan, qualification testing would only be completed around the same time as the planned first flight test. 
MDA s contracting plans for the RKV have been closely aligned to the test schedule, to the point that MDA will have more than half of its planned RKV buy under contract before conducting a successful intercept test. The program planned to award a production contract for Lot 1 and the long-lead materials contract for Lot 2 following a major design review, but before the first flight test. Following the first flight test (CTV-03+) in first quarter fiscal year 2020, the program planned to award a production contract for Lot 2 and long-lead materials for Lot 3. Upon completion of the first intercept test (FTG-17) in the first quarter of fiscal year 2021, the program planned to award the production contract for the final planned lot, Lot 3. Through the course of fiscal year 2018, the RKV program has been unable to meet its cost and schedule milestones. Specifically, the prime contractor has reported accumulating negative cost and schedule variances with no signs of arresting these trends. The contractor also reported inefficiencies stemming from bringing large numbers of new staff onto the project, as well as requiring more personnel for the project than they originally anticipated. According to MDA, as fiscal year 2018 progressed, the program discovered that some components would not meet performance requirements. MDA therefore postponed the critical design review from fiscal year 2018 to fiscal year 2021. Moreover, MDA no longer plans to achieve its goal of fielding 64 interceptors by 2023. In addition, MDA anticipates RKV s total cost has increased by nearly $600 million as a result of the design issues. See appendix VI for information on RKV and the GMD program. <3.1.1. Aegis BMD SM-3 Block IIA> The Aegis BMD SM-3 Block IIA schedule planned for an initial production decision in fiscal year 2018, but one month after the MDDE s enactment, the program experienced its second consecutive failure in a significant flight test FTM-29 that introduced significant uncertainty into the Aegis BMD SM-3 Block IIA s schedule. In an effort to maintain the program s schedule, the Undersecretary of Defense for Acquisition and Sustainment in an Acquisition Decision Memorandum provided selective authorization to use procurement funds. The memorandum placed a cap on how much the program could spend, and had a list of approved pacing items (which excluded parts still under investigation for the test failure) on which the funds could be spent. Under the terms of the memorandum, MDA would have to meet a series of requirements to lift these limitations, such as completion of the failure review board and implementation and demonstration of corrective actions. MDA operated under these limitations for the remainder of the fiscal year. <3.2. MDA Relied on Undefinitized Contract Actions to Achieve Its Acquisition Goals> MDA used undefinitized contract actions (UCA) in fiscal year 2018, particularly in programs receiving MDDE appropriations. In May 2018, we found that MDA s use of UCAs in recent years had increased in both total not-to-exceed value and in the length of the undefinitized period. While MDA improved its performance in timely definitization of these contract actions in fiscal year 2018, the total not-to-exceed value of the undefinitized contract actions MDA initiated in 2018 far exceeded previous years we reviewed. UCAs allow work to begin on a program before the government and contractor have agreed to all contract terms, such as price or scope. 
MDA states that undefinitized contract actions are necessary, particularly in the case of programs accelerated by the MDDE appropriation, because they allow work to begin immediately. Coming to agreement on all terms before beginning work would have added months to program schedules that, MDA stated, could not accommodate such a delay. Undefinitized contract actions are permitted under the Defense Federal Acquisition Regulation Supplement, but we have found in the past that the use of these contracts can pose particular risks for the government. Examples of recent UCAs follow: In October 2017, MDA issued a sole source undefinitized contract action for $60 million (according to DOD and MDA, the value was later increased to $88 million) for the purposes of transitioning the Aegis BMD SM-3 Block IIA program from development to production. This work will improve the manufacturing readiness of the contractor s production facilities, with the goal of eventually supporting a production rate of two interceptors per month. According to MDA officials, definitizing this contract action proved difficult. The contractor s initial cost and fee position were substantially higher than MDA s and independent government estimates, even after those estimates were revised upwards when they were found not to include costs specific to the Aegis BMD SM-3 Block IIA. MDA initially planned for a definitization in April 2018. By that time, all terms had been agreed to except for the contractor s fee. According to MDA officials, the parties deadlocked until August 2018, when, with the authorization of the Director, MDA, contracting officials unilaterally definitized the contract. MDA officials told us that when a unilateral definitization occurs, the government essentially imposes its terms on a take-it-or-leave-it basis, effectively halting negotiations. According to MDA officials, in this case, the contractor acceded to the government s terms and continued work on the project. When asked about possible consequences to this action, MDA officials stated that it is possible for contractors in this situation to seek administrative relief, but in this case, they stated such an appeal would be unlikely to succeed, and believed the contractor would be unlikely to pursue it. It is also possible, officials said, that the contractor would either be reluctant or refuse to accept an undefinitized contract action from MDA in the future. In fiscal year 2017, MDA issued a sole source undefinitized contract action for the design and initial production of the RKV. This contract had a not-to-exceed value of $1.1 billion. MDA issued the contract with an estimated definitization date of May 14, 2018. Despite the issues encountered by the RKV program described above, MDA reported that it definitized this contract action on schedule in May 2018, for the same price as the original not-to-exceed value, $1.1 billion. MDA issued several undefinitized contract actions in 2018. For example, in April 2018, MDA issued a sole source undefinitized contract action for the production of Aegis BMD SM-3 Block IIA pacing items , with a not-to-exceed value of $387 million. The Undersecretary of Defense for Acquisition and Sustainment issued a memorandum stating the circumstances under which MDA could obligate additional procurement, defense wide funds. 
MDA officials stated that pacing items were those items whose lead times were not long enough to qualify for long-lead procurement, but which were still substantial enough (more than 2 years) to cause delays if their production waited until the successful completion of operational testing. These officials also explained that the pacing items excluded any components which were still under investigation for the failure of FTM-29. Before that test s failure and the ensuing involvement of the Undersecretary, MDA planned for a not-to-exceed value of $672 million. MDA initially planned for a definitization date of December 2018, but it has since been delayed. MDA issued its largest undefinitized contract action for the fiscal year (as measured by its not-to-exceed value of $6.56 billion) in January 2018. For the past several years, the GMD program planned to transition away from its all-inclusive contract to a structure involving three new contracts: one for systems engineering, integration, and testing; one for ground systems readiness, operations, and support; and one for all-up round interceptors. This Development, Operations and Sustainment, and Production approach would have been a significant undertaking. It would have required that MDA take control of the technical baseline for the entire program. MDA also believed that this strategy would provide for enhanced competition and reduced organizational conflicts of interest. With the MDDE appropriation and associated program acceleration, the Director, MDA decided that managing the transition to this new contracting strategy, in addition to fielding 20 new ground-based interceptors was too risky. Thus, MDA issued an undefinitized contract action that provided a six-year extension to the main development and sustainment contract for GMD. The contract action has a not-to-exceed value of $6.56 billion, a value higher than that for all undefinitized contract actions issued by MDA in the previous 5 years combined. MDA was able to definitize most elements of this contract in March 2019. Figure 2 illustrates MDA s increasing use of undefinitized contracts as measured by the sum of their not-to-exceed values. <4. MDA Completed Some Key Milestones for Integrated Regional BMDS Capabilities, but Key Aspects of Its European Effort Have Been Deferred and Testing De-scoped> In fiscal year 2018, MDA delivered regional capabilities to counter threats from North Korea, but did not meet all of its 2018 goals for its effort in Europe to counter intermediate-range ballistic missile (IRBM) threats from Iran, known as the European Phased Adaptive Approach (EPAA) Phase 3. Specifically, the agency delivered planned upgrades and additional assets for the Korean Peninsula an effort it began in 2017. However, the delivery of the third and final phase of the EPAA has been delayed by 18 months. Despite this delay, testing intended to demonstrate EPAA Phase 3 capability has been significantly reduced and de-scoped or deferred past the new delivery date, which reduces the warfighter s insight on the system s capabilities and limitations. <4.1. MDA Met Its Fiscal Year 2018 Goals for Capabilities in the Korean Peninsula> MDA delivered upgrades on time to the Korean Peninsula in February and September 2018. Notably, the upgrades provided initial integration between THAAD and Patriot key elements of the effort in Korea improving THAAD and Patriot s ability to coordinate during engagements. 
MDA also delivered element-level upgrades for THAAD, including additional interceptors, as well as a new software release that expanded THAAD's ability to counter new threats and improved its performance in the presence of debris. These upgrades were assessed in an April 2018 flight test that demonstrated interoperability between THAAD and Patriot by exchanging Link-16 messages over tactical data links while tracking a missile target, and in an April 2018 BMDS-level ground test that provided further performance data for these upgrades in a simulated environment. MDA plans to deliver additional capabilities for the Korean Peninsula in the future. We currently have ongoing work related to these areas. Details will be included in a future report. <4.2. European Phased Adaptive Approach Capability against Intermediate-Range Threats Has Been Delayed> MDA's effort to deliver the third and last phase of the EPAA has been delayed from December 2018 to May 2020. MDA planned to deliver EPAA Phase 3, for defense against IRBM threats, at the end of calendar year 2018, but construction delays for Aegis Ashore, the linchpin of Phase 3, delayed its completion by 18 months. In fiscal year 2018, the delay for EPAA Phase 3 was caused by challenges at the construction site for Aegis Ashore in Poland. According to MDA officials, delays to Aegis Ashore were primarily driven by military construction contractor performance issues. As these delays continued to accumulate, MDA initially planned to make up for them by increasing concurrency between the construction phase and the installation and checkout phases of the project, and by concurrently working at the sites in Romania and Poland. As we previously reported, these increasing levels of concurrency posed a growing risk for the program and its ability to achieve its target delivery date. In March 2018, MDA officials recognized that plans for Aegis Ashore had become untenable and the project's schedule would have to be extended. This required the development of a new delivery schedule for EPAA Phase 3, resulting in delivery in May 2020. <4.3. Despite the Delays, Delivery of EPAA Phase 3 Will Occur with Less Robust Testing than Originally Planned> MDA experienced testing disruptions throughout the EPAA Phase 3 development, including delays and failures, but overcame some of them in fiscal year 2018. The consequence of the testing disruptions is that EPAA Phase 3 will be delivered to the warfighter with less data than planned about performance against planned threats. According to DOD's acquisition guidance and the BMDS Warfighter Capability Acceptance document, testing is fundamental to ensuring that DOD acquires a system that works and to providing the data necessary to characterize the system's effectiveness in operational settings. Thus, the warfighter relies on testing to understand the system's capabilities and limitations and, therefore, how to fight with what MDA has built. As we previously found, EPAA Phase 3 testing disruptions started in 2016, when MDA delayed the first and second intercept flight tests of the Aegis BMD SM-3 Block IIA, the interceptor planned for fielding in EPAA Phase 3. Although the first intercept test was successfully conducted in February 2017, testing difficulties continued when the interceptor failed the second intercept flight test. MDA continued to experience challenges with testing necessary to demonstrate the EPAA Phase 3 capability in fiscal year 2018, which resulted in less robust testing.
Specifically, as we discussed earlier in this report, the interceptor failed its first intercept test, FTM-29, against an intermediate-range target, EPAA Phase 3's intended threat. Following a failure investigation and developmental work, MDA rectified the Aegis BMD SM-3 Block IIA design flaws and successfully demonstrated the fixes against a medium-range ballistic missile target in October 2018, during FTM-45. MDA decided to use a medium-range target in this test and concluded that it was sufficient to assess the Aegis BMD SM-3 Block IIA fixes. However, according to MDA documentation, a test against a medium-range target does not provide the same challenges as one against an intermediate-range target. In December 2018, MDA successfully demonstrated for the first time an intercept of an IRBM during a test called FTI-03, previously called FTO-03 Event 1. While this test was successful, its scope was reduced from an attempt against a raid of two targets to a single intercept, in part due to a test range safety asset malfunction. According to MDA officials, with these flight tests, the agency completed its flight testing requirements for EPAA Phase 3 delivery, and adding tests would be disruptive to its overall test plan. Our analysis indicates that flight testing to demonstrate EPAA Phase 3 performance against IRBMs, the goal of Phase 3, has been reduced by 80 percent, and even with the added 18-month delay, MDA no longer plans to conduct a flight test against a raid prior to delivery in fiscal year 2020. Figure 3 shows both the original and current plans for demonstrating EPAA Phase 3 performance through flight testing. Figure 3 above shows that the original plan included five IRBM intercepts across three tests, including tests, prior to delivery of EPAA Phase 3, to assess capability against small raids requiring simultaneous intercepts of multiple missiles, a likely tactic in a real-world attack. However, as figure 3 also depicts, the current plan reduces the number of intercept tests against an IRBM and does not include a flight test against a raid until after EPAA Phase 3 capability is declared. Although the delivery has been delayed 18 months, in part due to the delay in construction at the Aegis Ashore site in Poland, the current plan significantly reduces the amount of data available to support the EPAA Phase 3 capability and limitation assertions. As we previously reported, test and evaluation activities are an integral part of developing and producing weapon systems, as they provide knowledge of a system's capabilities and limitations as it matures and is eventually delivered for use by the warfighter. Consequently, the 18-month delay provides an opportunity to add tests, which would provide further data to the warfighter and allow MDA to make any design changes discovered during testing. As we previously reported, delivering capability before testing is complete has led to performance unknowns and increases the likelihood of cost growth if future testing discovers any design flaws. <5. Conclusions> MDA made further progress in fiscal year 2018 in its mission to defend the United States and its allies from enemy ballistic missiles, including achieving a significant integrated capability milestone for defending the United States. However, MDA did not meet all of its goals for the fiscal year. Specifically, not all programs delivered all planned assets in fiscal year 2018, and shortfalls were attributed to developmental delays and testing challenges.
The acceleration of several programs following a budget increase in December 2017 introduced concurrency, which indicates a familiar risk: building in insufficient margin in an effort to meet schedule-driven milestones rather than pursuing a knowledge-based approach. Construction delays related to another integrated capability, EPAA Phase 3, may, in fact, present an opportunity to build more knowledge in that area. EPAA Phase 3 is intended to provide a robust defense against IRBMs and raids of multiple targets, but tests to demonstrate that capability have been reduced from five to one, with the test against the raid scenario not occurring before the capability is delivered. Our prior work has shown that proceeding with limited test data can result in late, and costly, discovery of performance problems. A more thorough assessment of the capabilities and limitations of the system could mitigate that risk by building a more solid base of knowledge. <6. Recommendation for Executive Action> We are making one recommendation to MDA: The Director, MDA, should utilize additional schedule margin afforded by the EPAA Phase 3 delay to conduct additional testing necessary to thoroughly assess the capabilities and limitations of Phase 3 against IRBMs and a raid scenario prior to delivery. (Recommendation 1) <7. Agency Comments and Our Evaluation> We provided a draft of this report to DOD for comment. DOD's comments are reproduced in appendix IX. DOD and MDA also provided technical comments, which were incorporated as appropriate. In its comments, DOD partially concurred with our recommendation to utilize additional schedule margin afforded by the 18-month delay to the EPAA Phase 3 delivery to conduct additional testing necessary to thoroughly assess the capabilities and limitations against IRBMs and a raid scenario prior to delivery. DOD stated that all EPAA Phase 3 BMDS functions requiring a flight test environment were already successfully demonstrated and that MDA has addressed the intent of our recommendation by adding ground tests to further assess EPAA Phase 3 capabilities. However, in order for the agency to meet the full intent of our recommendation, additional flight testing to demonstrate capability against EPAA Phase 3 threats is necessary. Flight testing against IRBM threats and raid scenarios could provide additional confidence in modeled performance, even for aspects of the model that have achieved the accreditation threshold. Our finding is supported by MDA's own assessment of testing needed for EPAA Phase 3, which originally included five IRBM intercepts and two raid flight tests. These testing requirements were reduced even after EPAA Phase 3 flight test failures and delays. Specifically, our analysis indicates that flight testing to demonstrate EPAA Phase 3 performance against an IRBM has been reduced by 80 percent. Moreover, MDA will not conduct a flight test against a raid, a likely tactic in a real-world attack, prior to delivery. As we identified in this report, MDA experienced testing disruptions throughout the EPAA Phase 3 development, which resulted in significant data collection reductions, especially regarding performance against planned threats. According to the Director, Operational Test and Evaluation (DOT&E), these testing challenges, in large part, precluded MDA from testing Aegis BMD against some expected threat types, ranges, and raid sizes.
Consequently, the use of models and simulations-based ground tests to supplement such a significant reduction in real-world data collection could be problematic. Specifically, we have previously reported that some of MDA's models and simulations used in its ground tests do not provide a realistic representation of the BMDS, the environments it encounters, or the modeled threats. This year, we found that as a result of testing perturbations, certain aspects of Aegis BMD 5.1 will not be validated until after EPAA Phase 3 delivery. Relying on unaccredited models increases the chances for modeling errors, and a single undetected modeling error can distort the results for the entire assessment. Lastly, DOD stated that the demands on the test program due to the evolutionary nature of the BMDS acquisition leave no margin (cost or schedule) for adding additional flight tests. While we agree that adding a flight test requires additional costs and coordination, the reductions to EPAA Phase 3 testing constitute a significant reduction in performance data and decrease the warfighter's knowledge base about how best to deploy the system under operationally realistic conditions, such as raids. We continue to believe the 18-month delay affords the schedule margin needed to conduct additional flight testing. We are sending copies of this report to the appropriate congressional committees, the Acting Secretary of Defense, the Undersecretary of Defense for Research and Engineering, and to the Director, MDA. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix X. Appendix I: Aegis Ballistic Missile Defense (BMD) Weapons System Key findings for Fiscal Year 2018 Aegis Ballistic Missile Defense (BMD) demonstrated integration with allies. Aegis BMD 5.1 demonstrated increased capability, but testing disruptions delayed its delivery to March 2019 and deferred raid assessment to 2020. MDA re-planned schedules for some future Aegis capabilities due to funding challenges. <8. Program Overview:> Aegis Ballistic Missile Defense is the naval component of the Missile Defense Agency's (MDA) Ballistic Missile Defense System. It consists of the Aegis combat system, including a radar, and Standard Missile-3 (SM-3) interceptors. MDA is developing the Aegis BMD in versions called spirals that expand on preceding capabilities. Since 2015, MDA has been delivering Aegis BMD spirals that are integrated with capabilities developed by the Navy. These jointly developed Aegis Weapons System Baselines (AWS BL) allow for Integrated Air and Missile Defense (IAMD), where ballistic missiles and air threats (i.e., cruise missiles) can be engaged at the same time. Table 5 identifies Aegis BMD spirals, associated integrated Aegis Weapons System Baselines and key capabilities, and their delivery dates. The first suite of integrated ballistic missile defense and anti-air warfare (AAW) capabilities was delivered with AWS Baseline 9.C1/B1, which included an overhaul of Aegis computing architecture. However, in order to expand the number of ships with IAMD, MDA also began a program to integrate Aegis BMD 5.0 CU capabilities with the legacy AWS architecture.
While initially scheduled for delivery in 2015, Aegis BMD 4.1 was delayed multiple times, and finally in 2017 the delivery was split into two phases. The first interim phase was completed in 2017, but did not provide integration between BMD and AAW capabilities. The second phase will integrate BMD and AAW, and is currently planned for delivery in 2020. Additional upgrades capitalizing on Navy s improvements to the AWS Baseline 5.4 computing architecture are planned for delivery in 2023. The program is also developing Aegis BMD 5.1 with capabilities to support the final phase of European Phased Adaptive Approach. This spiral is designed to control the new Standard Missile-3 Block IIA and to intercept intermediate-range ballistic missiles. It also includes the Engage on Remote (EOR) capability, where Aegis BMD intercepts a threat before it is visible to its own radar, based entirely on tracks from a forward-based sensor. Aegis BMD 5.1 is integrated with AWS Baseline 9.C2/B2. Additionally, MDA and the Navy are developing AWS Baseline 10.0, which will capitalize on the Navy s effort to replace the Aegis SPY-1 radar with a more capable SPY-6, and to overhaul the entire Aegis combat system. AWS Baseline 10.0 includes Aegis BMD 6.0 capabilities, which is planned to counter more threat types, larger raids, better discrimination, and improved communication with its interceptors. AWS Baseline 10.0 is planned for delivery in 2023. For specifics on Aegis Ashore and the Aegis SM-3 interceptors, see appendixes II, III and IV, respectively. Table 6 provides key fiscal year 2018 AWS program facts. <9. Aegis BMD demonstrated integration with allies> In fiscal year 2018, MDA demonstrated the ability of Aegis BMD to engage some simple and complex threats as well as integration with European and Asia-Pacific allies for new and legacy spirals. As table 5 above shows, Aegis BMD participated in a number of flight tests and exercises, which provided additional information about its capabilities and interoperability with allies in two regions, where MDA is currently focusing its regional integrated capability efforts. For example: Formidable Shield-17 demonstrated the ability of Aegis BMD 4.0.3, which was delivered in fiscal year 2015, to interoperate with North Atlantic Treaty Organization partners using communication architectures during cruise missile and ballistic missile engagements, and to use remote data provided by NATO partners to conduct remote engagements. Pacific Dragon demonstrated interoperability between U.S. Aegis BMD assets, Japanese destroyers, and Republic of Korea naval assets. JFTM-05 Event 2 demonstrated coordination between U.S. and Japanese destroyers using communications architecture to conduct ballistic missile engagements. <10. Aegis BMD 5.1 demonstrated increased capability, but testing disruptions delayed its delivery to March 2019 and deferred raid assessment to 2020> MDA demonstrated some aspects of Aegis BMD EOR, as well as the ability of Aegis BMD 5.1 to engage a medium range and an intermediate range ballistic threat, but testing disruptions delayed data available to inform capabilities and limitations of the Aegis BMD 5.1, contributing to a 3-month delivery delay. MDA encountered challenges during tests for Aegis BMD 5.1, which resulted in a reduction of flight tests and delays in collecting data needed to accredit models for a system-level assessment. 
Specifically, during the conduct of FTM-29, Aegis BMD partially demonstrated EOR capability, lacking a full demonstration because the weapon system did not exercise all aspects of communication in the later stages of the engagement due to an Aegis BMD SM-3 Block IIA malfunction. MDA decided not to retest FTM-29 and adjusted its test plan to only demonstrate the fixes to the SM-3 Block IIA in a new test called FTM-45, deferring a full EOR assessment by about a year to the subsequent test, named FTI-03. This reduction in flight tests affected MDA's ability to collect data for model verification, which, in turn, delayed the delivery of Aegis BMD 5.1. A model is a representation of an actual system that involves computer simulations and is used to predict how the system might perform or survive under various conditions. MDA, independent DOD testing organizations, and the warfighter rely heavily on models to test operational performance that cannot be completely assessed using intercept flight tests because of the system's scope and complexity and safety constraints. Flight tests, however, provide important information about real-world performance that is used to verify models. In order to ensure that key aspects of Aegis BMD 5.1 performance are well understood at delivery, MDA delayed the spiral from December 2018 to March 2019. This was done in part to allow for analysis from FTM-45 (conducted in October 2018) and FTI-03 (conducted in December 2018). According to the BMDS Operational Test Agency, data from these tests provided key information about Aegis BMD EOR performance, a key capability for Aegis BMD 5.1, that was used to verify its models, which were in turn used to more thoroughly assess the extent of that capability. While EOR data will support Aegis BMD 5.1 delivery, another key aspect of its performance will not be verified until late in fiscal year 2020. Specifically, MDA planned to assess Aegis BMD 5.1 raid performance for the first time in December 2018, but the test was de-scoped to a single intercept due, in part, to a test range safety asset malfunction. The next planned raid assessment is scheduled for the fourth quarter of fiscal year 2020, well after Aegis BMD 5.1 delivery. According to the Director, Operational Test and Evaluation (DOT&E), these testing challenges, in large part, precluded MDA from testing Aegis BMD against some expected threat types, ranges, and raid sizes. While some of these challenges were outside of MDA's control, others stemmed from decisions about its test plan. For instance, MDA's inability to assess Aegis BMD 5.1 against an IRBM raid resulted from the malfunction of test range safety assets; however, according to DOT&E, the FTM-29 failure is an example of insufficient developmental testing that should have discovered the SM-3 Block IIA issue prior to the flight test. DOT&E officials told us that they are currently working with MDA to ensure sufficient developmental testing is scheduled and conducted prior to undertaking operational tests. <11. Funding challenges contributed to the delay for certain Aegis BMD capabilities> In fiscal year 2018, funding challenges contributed to the delay of MDA and the Navy's effort to develop the integrated AWS Baseline 5.4 and AWS Baseline 10.0. According to MDA program documentation, the delays resulted from funding reductions in fiscal year 2018.
However, while AWS Baseline 5.4 (which includes BMD 4.1) was delayed entirely from 2019 to 2020, AWS Baseline 10.0 (which includes BMD 6.0) deferred completion of some technical content, but its delivery timeframe did not change. Specifically: Integrated AWS Baseline 5.4 was originally planned to be completed in September 2019, but MDA and the Navy delayed its certification to March 2020. While MDA delivered Aegis BMD 4.1 capabilities in fiscal year 2017, subsequent efforts focused on integrating the ballistic missile defense capabilities with the remaining suite of AWS Baseline 5.4 capabilities. According to MDA, the delay to this effort was driven by a $14 million funding reduction to the Navy's Program Executive Office Integrated Warfare System, which is jointly funding this baseline. As a result of the reduction, MDA received $16 million from the Navy, rather than the $32 million it was expecting, to continue work on Baseline 5.4. According to Aegis BMD program officials, to mitigate the nine-month delay, MDA renegotiated the associated contract, but it anticipates an increase of approximately $1.5 million in fiscal year 2019 costs and approximately $4 million in fiscal year 2020 costs. MDA and the Navy re-planned AWS Baseline 10.0 after a funding reduction of $31.45 million against BMD 6.0. According to Aegis BMD program documentation, the BMD 6.0 development efforts stopped between January 2018 and May 2018. Program officials indicated that MDA renegotiated the associated contract to reflect the reduced funding, but the stop work and consequent restart incurred additional costs. Specifically, the program estimated that the disruption resulted in cost growth of approximately $51 million across the development timeline between fiscal years 2019 and 2024. Appendix II: Aegis Ashore Key findings for Fiscal Year 2018 According to Missile Defense Agency officials, deficiencies in the performance of the military construction contractor resulted in a significant delay and increased cost for the Aegis Ashore facility in Poland. The program continues to make progress despite challenges at both the Poland and Romania sites. <12. Program Overview> Aegis Ashore is a land-based, or ashore, version of the ship-based Aegis Ballistic Missile Defense (BMD). Aegis Ashore is designed to track and intercept ballistic missiles in the middle of their flight using Aegis BMD Standard Missile-3 (SM-3) interceptors. Key components include a vertical launching system, interceptors, and an enclosure, called a deckhouse, that contains the SPY-1 radar and command and control system. Aegis Ashore will share many components with the sea-based Aegis BMD and will use future versions of the Aegis weapon system currently in development, including the SM-3 Block IIA interceptor. The Missile Defense Agency (MDA) plans to equip Aegis Ashore with a modified version of the Aegis weapon system software that will share many components with the sea-based Aegis BMD. DOD constructed an Aegis Ashore test facility in Hawaii in April 2014. The test facility has been used to flight test Aegis Ashore and, in some cases, Aegis BMD SM-3 interceptors. MDA deployed its first operational site in Romania in fiscal year 2016 as part of the European Phased Adaptive Approach (EPAA) Phase II. A second site in Poland was scheduled for delivery in 2018 as part of EPAA Phase III. Both operational sites are intended to provide additional coverage for the defense of Europe.
The Poland site experienced construction delays over several years until March 2018, when MDA determined with stakeholders that the site would not be complete in time for the EPAA Phase III deadline. MDA has since established a new schedule baseline which delays the delivery of the site by 18 months, to May 2020. For further details on the Aegis Weapon System and Aegis BMD interceptors, see appendixes I, III and IV. Table 7 provides key fiscal year 2018 Aegis Ashore program facts. <13. According to Missile Defense Agency officials, deficiencies in the performance of the military construction contractor resulted in a significant delay and an increased cost for the Aegis Ashore facility in Poland> According to MDA officials, construction of the Aegis Ashore site in Poland has failed to meet schedule milestones from the start of the contract. According to officials, prior to this year, MDA and the Army Corps of Engineers, which manages military construction at the site, have undertaken a number of measures to mitigate or reverse these delays, including modifying contracts to permit joint occupancy of the site, modifying the main contract to provide more granular project data to the Army Corps of Engineers, moving key personnel on site, and adding a second shift. Program officials stated that they also withheld some award fees from the contractor as a result of these delays. Despite these efforts, MDA has found the contractor s performance is still particularly poor in the areas of construction management, identification, procurement, timely delivery of important materials, and timely hiring of staff with appropriate skills. To make up for these delays, MDA introduced increasing levels of concurrency into its schedule, and shortened key phases of the delivery process. Activities such as Installation and Checkout were shortened from 16.5 months to 6.5 months, and would occur concurrently with the final phases of construction at the site. As recently as last year, GAO reported that additional delays or concurrency at the site would threaten the scheduled delivery date. Through the first quarter of fiscal year 2018, the contractor s performance did not improve. According to program officials, in December 2017, MDA participated in a meeting with the Army Corps of Engineers, the Navy, and other government stakeholders, and concluded that the schedule for delivery had become untenable and schedule recovery was not possible. MDA later concluded that the site would not be ready for delivery until May 2020, a delay of 18 months. The costs of this delay will be significant. Following the determination of the new delivery date, MDA developed a new project schedule that, officials stated, incorporated historical data from the Romania site, independent outside analysis, trends in the contractor s performance over time, and the resources that would be required at each stage of the schedule. MDA estimated that the additional efforts by MDA, the Army Corps of Engineers, and the Navy to mitigate the delay and provide assistance through the completion of the project totaled at least $90 million. According to program officials, the construction contract provides for significant liquidated damages, with the current daily assessment in excess of $125,000. <14. The program continues to make progress despite facing challenges at both the Romania and Poland sites> MDA continues to oversee work at the Aegis Ashore site in Romania, despite the Navy s acceptance of the site for operational use. 
MDA continues work on a variety of remaining items such as seismic hardening, shielding electrical infrastructure against high-energy electro- magnetic pulses, and cooling systems. In the case of cooling systems, the work is the result of the system failing to perform to specifications. MDA has yet to assess the full cost, schedule, and performance impacts of the necessary repairs and modifications, but MDA reported that none of the above issues had any impact on the Romania sites operational availability or performance. In the case of the Poland site, MDA sought to secure the permission of the Polish government to operate the facility s SPY-1 radar in the 3.1 to 3.5 GHz radio frequency spectrum. This section of the spectrum is important to the full functioning of the Aegis Ashore system, but portions of it have been allocated for commercial use in Poland. MDA was able to de-conflict the operations of its radar with other systems on these frequencies, and in March 2018 secured the approval of the Polish government to operate the SPY-1 radar across the full range of frequencies. Appendix III: Aegis Ballistic Missile Defense (BMD) Standard Missile-3 (SM-3) Block IB Key findings for Fiscal Year 2018 The Aegis Ballistic Missile Defense (BMD) Standard Missile-3 Block IB program received authorization for full production this year and performed successful intercepts in flight tests. Discovery of a parts quality issue partway through the year forced the program to suspend deliveries and thus miss most of its delivery target for fiscal year 2018. <15. Program overview> The Aegis Standard Missile-3 (SM-3) Block IB is a ship- and shore-based missile defense interceptor designed to intercept short- to intermediate- range ballistic missiles during the middle stage of their flight. The SM-3 interceptor has multiple versions in development or production: the SM-3 Blocks IA, IB, and IIA. Compared to the SM-3 Block IA, the Block IB features an enhanced seeker for improved target discrimination, better engagement coordination capabilities, an improved throttleable divert and attitude control system for adjusting its course, and increased range. The SM-3 Block IB interceptor is linked with Aegis Ballistic Missile Defense (BMD) Weapons System, and Aegis Ashore. For additional information about the Aegis Weapon Systems, see Appendix I and for Aegis Ashore, see Appendix II. Since fiscal year 2015, Aegis BMD SM-3 Block IB production has been delayed by several technical issues. Program officials, in 2015, delayed the decision to enter full-rate production until they could implement further testing and design changes, a decision consistent with a GAO recommendation at the time. In fiscal year 2016, two failures during testing forced a suspension of interceptor deliveries, though the program made up for this backlog in fiscal year 2017. Table 8 provides key fiscal year 2018 Aegis BMD SM-3 Block IB program facts. <16. The Aegis BMD SM-3 Block IB program received authorization for full production this year and performed several successful intercepts in flight tests> In February 2017, the Undersecretary of Defense for Acquisition, Technology, and Logistics issued an Acquisition Decision Memorandum requesting an additional flight test for the Aegis BMD SM-3 Block IB before authorizing a full production decision, as well as several independent supporting analyses. The memorandum issued these requirements in support of a planned full production decision in the first quarter of fiscal year 2018. 
As we previously reported, MDA has delayed the Aegis BMD SM-3 Block IB full production decision multiple times over the life of the program; the decision was initially scheduled for the fourth quarter of fiscal year 2012. MDA completed the requested intercept test, known as FS-17-4, in October 2017. The test was undertaken as part of NATO's Formidable Shield naval exercises. In this test, an Arleigh Burke-class destroyer in the northern Atlantic fired an Aegis BMD SM-3 Block IB Threat Upgrade at an MRBM target and successfully intercepted it. With this result, the interceptor was approved for full production. In September 2018, MDA participated in JFTM-05 Event 2, a joint flight test with the Japanese navy, in which a Japanese ship successfully fired an Aegis BMD SM-3 Block IB Threat Upgrade interceptor at a simple separating short-range ballistic missile. MDA participated in and supported the engagement. Upon full production authorization, MDA sought to pursue a multi-year procurement with the prime contractor for 204 interceptors through 2023. While MDA requested this procurement, and the 2019 National Defense Authorization Act and the Defense Appropriations Act, 2019 authorized it, the program did not receive the funding to support the request. Program officials state that they are still evaluating the impacts on their plan. MDA estimates the procurement will have a projected price of $2.021 billion. <17. Discovery of a parts quality issue partway through the year forced the program to suspend deliveries and thus miss most of its delivery target for fiscal year 2018> During routine component testing, MDA discovered an issue with the Aegis BMD SM-3 Block IB's throttleable divert and attitude control system (TDACS), resulting in delays of interceptor deliveries in fiscal year 2018. According to program officials, MDA employs a manufacturing surveillance unit whose purpose is to proactively assess component performance and quality at various stages of unit production. Program officials stated that the unit discovered, in January 2018, that one of several thrusters on the TDACS did not perform to specification. In response to this finding, MDA suspended deliveries of the interceptor until it could determine the impact of the deficiency on the interceptor's performance. According to program officials, MDA contracted with the Applied Physics Laboratory to act as an independent technical authority for the investigation, which took approximately six months. Once concluded, the investigation found that the performance of the component, while below the defined specification, did not endanger the overall operation of the system. The component's performance was accommodated within the margin the government and contractor built into the overall design, and the component was therefore acceptable as built. The investigation reached this conclusion in August 2018. MDA closely monitored the function of the component in JFTM-05, during which the system performed nominally. Program officials reported that the prime contractor has experienced similar issues defining and communicating important specifications to subcontractors at various levels of its supply chain. Similarly, the contractor has also had difficulty ensuring that all subcontracted components meet defined specifications. Program officials stated that they continue to take measures to mitigate these issues, including using the manufacturing surveillance team.
Appendix IV: Aegis Ballistic Missile Defense (BMD) Standard Missile - 3 (SM-3) Block IIA Key findings for Fiscal Year 2018 A mid-year funding increase accelerated the program's schedule and increased the number of interceptors. The Aegis Ballistic Missile Defense (BMD) Standard Missile - 3 (SM-3) Block IIA experienced a test failure, leading to significant changes to the test plan. <18. Program Overview> The latest development in the Aegis BMD Standard Missile 3 (SM-3) family, the Aegis BMD SM-3 Block IIA interceptor provides increased speed, more sensitive seeker technology, and a more advanced kinetic warhead as compared to previous versions of the Aegis BMD interceptors. It is expected to defend against short-, medium-, and intermediate-range ballistic missiles, and will have significantly increased range compared to earlier Aegis BMD SM-3 models. Additionally, most of the Aegis BMD SM-3 Block IIA components will differ from other standard missile versions and therefore require new technology being developed specifically for them. For additional information on the Aegis BMD SM-3 Block IB interceptor, see appendix III. Initiated in 2006 as a cooperative development program with Japan, the Aegis BMD SM-3 Block IIA program is an essential component of the European Phased Adaptive Approach (EPAA) Phase 3 architecture, particularly its ability to defend against longer-range threats. According to program officials, the Aegis BMD SM-3 Block IIA interceptor s range exceeds that of its native radar, thus, the only way to make full use of its extended range is by relying on remote sensor data. For additional information on Aegis Weapon Systems, see Appendix I. Table 9 provides key fiscal year 2018 Aegis BMD SM-3 Block IIA program facts. <19. A mid-year funding increase accelerated the program s schedule and increased the number of interceptors> In December 2017, Congress passed and the President signed the Department of Defense Missile Defeat and Defense Enhancements Appropriations Act, 2018 , as part of a larger continuing resolution which significantly increased missile defense appropriations. According to program officials, the impetus for seeking these additional appropriations was increased levels of missile development and testing activity from North Korea. MDA intends to use $451 million in procurement funds for the purchase of 16 additional Aegis BMD SM-3 Block IIA interceptors. These were the first procurement funds the program had received. The program had yet to receive an initial production authorization, so all previous manufacturing activity occurred using research and development funds. To this point, however, the Aegis BMD SM-3 Block IIA interceptor had succeeded in only one of its two intercept flight tests, and its ability to engage a longer-range target using remote sensor data, known as engage on remote , had yet to be tested. The following month, in January 2018, the interceptor failed an important intercept test, causing significant disruption to the program s schedule which is discussed below. The Undersecretary of Defense for Acquisition and Sustainment subsequently released an acquisition decision memorandum which laid out near-term limitations on the use of procurement funds for the Aegis BMD SM-3 Block IIA, as well as providing for a series of steps MDA needed to take in order to obligate the remaining funds. 
These measures included the completion of an independent cost estimate, an independent technical risk assessment, a successful replacement flight test, and the successful completion of the planned operational flight test scheduled for the first quarter of fiscal year 2019. Until MDA could meet these requirements, the Undersecretary authorized MDA to obligate only $162 million for the purchase of a limited subset of pacing items. According to program officials, pacing items are those with longer lead times for production, but which fall short of the threshold for long-lead procurement. Program officials also stated that the list of pacing items was restricted to components not implicated in the recent test failure. Program officials stated that they expected the Undersecretary to certify that these requirements had been met in the third quarter of fiscal year 2019. <20. The Aegis BMD SM-3 Block IIA experienced a test failure, leading to significant changes to the test plan> In January 2018, MDA conducted flight test FTM-29. In this test, the Aegis Ashore facility in Hawaii fired an Aegis BMD SM-3 Block IIA interceptor at an intermediate-range ballistic missile (IRBM), using remote sensor data, for the first time. After the interceptor launched, its third-stage rocket motor (TSRM) failed to ignite. As a result, the interceptor had inadequate thrust to complete the engagement and failed its objective to intercept the target. As a result of this test failure, MDA faced two challenges: first, identifying and remedying the source of the failure through a failure review board, and second, adjusting the program's schedule to provide opportunities to confirm these mitigations. MDA and the government of Japan convened a failure review board (FRB) to investigate the causes of the test failure. The board concluded that the TSRM failed to ignite due to a combination of a faulty arm-fire device (AFD), which initiates the TSRM's firing, and incorrect programming of the TSRM ignition sequence. In the case of the Aegis BMD SM-3 Block IIA, the AFD contains two linear chains of explosive pellets, which then ignite the rocket motor. MDA documents state that the AFD's manufacturer expects a missile to ignite both chains simultaneously to ensure the highest degree of reliability. The FRB found that the Aegis BMD SM-3 Block IIA's programming did not fire the AFD's two chains simultaneously, but one after the other, or sequentially. When fired in this manner, quality issues with the AFD that would have no material impact in a simultaneous firing can cause the AFD to malfunction. The FRB concluded that the most likely cause of the AFD's failure was a missing explosive charge in the first explosive chain. When this chain ignited, it fizzled and failed to ignite the TSRM. The fizzle was, however, powerful enough to disrupt the functioning of the second explosive chain, which subsequently failed to ignite the TSRM as well. To correct for this error, MDA has changed the programming of the Aegis BMD SM-3 Block IIA to fire the AFD simultaneously. MDA has also instituted new quality measures at the assembly line for the AFD. These measures include additional quality assurance checks to ensure that all explosive pellets are present in both chains, as well as the use of X-ray-like scanners that can look inside a completed AFD to confirm the presence of all of the explosive pellets.
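To make the redundancy logic described by the FRB easier to follow, the sketch below is a minimal, hypothetical toy model in Python. It is not drawn from MDA or contractor documentation; the class and function names, and the rule that a fizzle in the first chain disrupts the second, are simplifications of the failure mechanism described above, used only to illustrate why simultaneous firing tolerates a single incomplete chain while the sequential firing programmed for FTM-29 did not.

# Illustrative toy model only; all names and behaviors are hypothetical
# simplifications of the arm-fire device (AFD) failure mode described above.
from dataclasses import dataclass

@dataclass
class ExplosiveChain:
    """One of the AFD's two redundant chains of explosive pellets."""
    pellets_complete: bool  # False models the suspected missing explosive charge

    def fires_fully(self) -> bool:
        # A chain with all pellets present ignites the third-stage rocket
        # motor (TSRM); an incomplete chain only fizzles.
        return self.pellets_complete

def fire_simultaneously(chain_a: ExplosiveChain, chain_b: ExplosiveChain) -> bool:
    # Both chains are initiated at the same instant, so a fizzle in one cannot
    # interfere with the other; either complete chain ignites the TSRM.
    return chain_a.fires_fully() or chain_b.fires_fully()

def fire_sequentially(first: ExplosiveChain, second: ExplosiveChain) -> bool:
    # The first chain is initiated alone. In this toy model, a fizzle in the
    # first chain disrupts the second chain (as the FRB concluded happened),
    # so the TSRM never ignites unless the first chain fires fully.
    return first.fires_fully()

if __name__ == "__main__":
    defective = ExplosiveChain(pellets_complete=False)  # missing pellet
    healthy = ExplosiveChain(pellets_complete=True)
    print("Simultaneous firing ignites TSRM:", fire_simultaneously(defective, healthy))  # True
    print("Sequential firing ignites TSRM:", fire_sequentially(defective, healthy))      # False

In this simplified sketch, the corrective action MDA describes, firing both chains simultaneously, restores tolerance to a single incomplete chain, while the added quality checks address the missing pellet itself.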
Having identified the source of the failure, MDA had to choose what form any new test would take and how it would impact the remaining schedule, in particular the first operational test of the Aegis BMD SM-3 Block IIA, which also happened to be the first operational test of the European Phased Adaptive Approach (EPAA) Phase III and the only such test scheduled before MDA declared it ready for delivery. This test, then known as FTO-03 Event 1 (and subsequently renamed FTI-03), was scheduled for the first quarter of fiscal year 2019. One option was for MDA to schedule a scaled-back test, known as FTM-45, of an Aegis BMD SM-3 Block IIA against a medium-range target. MDA stated that though FTM-29 failed, analysis of sensor data and missile telemetry indicated that the Engage on Remote capability would have succeeded had the interceptor reached the target. Therefore, FTM-45 could be an organic engagement, using only the radar co-located with the interceptor. FTM-45 would need only to confirm that the mitigations identified by the FRB worked, as well as test the final phases of the interceptor's operations, which had been interrupted in FTM-29. MDA had a medium-range ballistic missile (MRBM) target it could repurpose for this test, which would limit testing disruptions by not further delaying FTO-03 E1/FTI-03. Although FTM-45 was MDA's preferred course of action, it lacked the support of several external Department of Defense stakeholders, such as the Deputy Assistant Secretary of Defense for Developmental Test and Evaluation; Joint Functional Component Command Integrated Missile Defense; and the Office of the Director, Operational Test and Evaluation. These offices asserted that a complete re-test of FTM-29, known as FTM-29a, provided the most risk reduction in advance of FTO-03/FTI-03. MDA opted not to pursue FTM-29a and cited several reasons. MDA acknowledged the differences between intermediate-range and medium-range engagements, but determined that the actual differences between FTM-45 and FTM-29a were within acceptable margins. FTM-29a would also prove more expensive and more logistically difficult. MDA concluded that FTM-45 met the requirements for risk reduction at the least disruption to the program's schedule. MDA conducted FTM-45 in October 2018 and FTI-03 in December 2018. Initial reports indicate both were successful. Appendix V: Command, Control, Battle Management, and Communications (C2BMC) Key findings for Fiscal Year 2018 MDA delivered Spiral 8.2-1 providing significant performance and cyber improvements, but some fixes were required after fielding. MDA mitigated prior challenges with Spiral 8.2-3 and demonstrated capability upgrades. Uncertainty in Ballistic Missile Defense System-level requirements could disrupt Spiral 8.2-5 schedule. <21. Program Overview> C2BMC is a global system of hardware workstations, servers, network equipment, and software that integrates all missile defense elements of the Ballistic Missile Defense System (BMDS). Specifically, it allows users to plan operations, see the battle develop, and manage BMDS sensors. As the integrator, C2BMC enables the defense of a larger area than the individual BMDS elements operating independently and against more missiles simultaneously, thereby conserving interceptor inventory. C2BMC is fielded at U.S. Strategic Command, U.S. Northern Command, U.S.
European Command, U.S. Indo-Pacific Command, and U.S. Central Command. MDA is developing C2BMC in spirals, or software and hardware upgrades that build upon prior capabilities to improve various aspects of integrated BMDS performance. The spiral delivered in fiscal year 2018 includes the BMDS Overhead Persistent Infrared Architecture (BOA), a system within the C2BMC enterprise. BOA receives space-based sensor information on boosting and midcourse ballistic objects and feeds that data to C2BMC for use in cueing BMDS sensors and weapon systems, and for situational awareness. The agency completed fielding and transition to operations of Spiral 8.2-1 with BOA 5.1 to U.S. Northern Command and U.S. Indo-Pacific Command in January 2018, and Spiral 8.2-3 with BOA 6.1 to U.S. European Command and U.S. Central Command in December 2018. Spiral 8.2-3 will replace Spiral 8.2-1 at U.S. Northern Command and U.S. Indo-Pacific Command in the third quarter of fiscal year 2019. Table 10 provides an overview of C2BMC spiral upgrades, planned fielding timeframes, and associated capabilities, and Table 11 provides key fiscal year 2018 C2BMC program facts. <22. MDA delivered Spiral 8.2-1 providing significant performance and cyber improvements, but some fixes were required after fielding> In January 2018, C2BMC completed fielding and transition to operations of Spiral 8.2-1, providing a significant overhaul of the BMDS command and control hardware infrastructure. Spiral 8.2-1 replaced the legacy Spiral 6.4 at U.S. Northern Command and U.S. Indo-Pacific Command. Spiral 8.2-1 improves sensor coverage, ballistic missile track management, and cybersecurity, optimizing raid-size tracking capability and the capability for processing new threats to support the defense of the United States. Further details on these capabilities follow: Spiral 8.2-1 delivery includes BOA 5.1, which provides improvements in early missile launch detection, allowing more time for all subsequent BMDS actions. It cues land-based sensors, allowing them to acquire threats sooner and track them longer, thus improving engagement probability. Spiral 8.2-1 expands the capability for processing of threat tracks, called System Track, from a single sensor, the Army/Navy Transportable Radar Surveillance-2 (AN/TPY-2), to include additional sensors for homeland defense and BOA. This allows for additional data sources about threat characteristics that C2BMC subsequently provides to other BMDS elements. The delivery of Spiral 8.2-1 also improves cybersecurity. Spiral 8.2-1 replaced Spiral 6.4, which, as we found in May 2018, had cyber vulnerabilities that, if exploited, could have degraded mission capabilities like BMD planning, radar control, track reporting, and situational awareness. Lastly, the program also delivered additional upgrades to specifically augment BMDS capabilities for the Korean Peninsula. These upgrades were delivered in December 2017 and June 2018 to provide improvements in communication between THAAD and Patriot and improved cybersecurity in that region. MDA demonstrated Spiral 8.2-1 upgrades in Ground Test-07a and Ground Test-18 Sprint 1. Table 11 above provides an unclassified overview of C2BMC testing completed in support of fiscal year 2018 deliveries. While MDA delivered these upgrades and overcame development challenges, some fixes had to be implemented after deployment.
Specifically, as we found in May 2018, MDA identified performance risks for Spiral 8.2-1 that could have affected interoperability with other elements and threat tracking, and it delayed the delivery to address these challenges. According to MDA's fiscal year 2018 program management documentation, the program implemented the necessary mitigations to address these challenges; however, fixes also needed to be implemented after the spiral was delivered. Moreover, the post-deployment fixes required diversion of resources from the subsequent Spiral 8.2-3, delaying demonstration of a certain aspect of that effort. <23. MDA mitigated prior challenges with Spiral 8.2-3 and demonstrated capability upgrades> In fiscal year 2018, MDA completed most of its development effort for its next spiral, Spiral 8.2-3. In addition, MDA completed a test demonstrating new capabilities and mitigations to earlier development challenges. As we found in May 2018, in fiscal year 2017, the program was tracking two element-level risks to the C2BMC capability needed for EPAA Phase 3, called Engage on Remote. Specifically, program documentation indicated that processing of data about threat missile flight paths, known as threat tracks, had issues that could reduce the likelihood of successful engagements utilizing Aegis BMD in Engage on Remote scenarios. C2BMC has faced similar challenges with threat tracking capabilities for prior spirals, which required delaying certain aspects of integration with Aegis BMD until fixes were implemented. While the program was addressing the aforementioned performance risks in fiscal year 2018, it encountered additional challenges. First, it needed to divert some resources from Spiral 8.2-3 to implement fixes to Spiral 8.2-1 that were needed after it was deployed. Second, the program needed to divert additional resources to meet a new Warfighter request for geographic redundancy. Specifically, while the original concept was to have Spiral 8.2-3 for Central and European Command at the same location, MDA met the Warfighter request by installing the spiral at different locations so that losing one location would not result in the loss of all capability for the Warfighter. Finally, once a key mitigation was completed, the program encountered delays in the availability of laboratories needed to assess it. As a result, MDA decided to test the mitigation during the GT-07b campaign, along with other Spiral 8.2-3 capabilities. While assessing mitigations for the first time in a large-scale campaign is risky, should the mitigation prove insufficient or have unforeseen downstream effects, initial results from the GT-07b campaign indicate the mitigations were successful. The test demonstrated successful collaboration between Spiral 8.2-3 and Aegis BMD in support of Engage on Remote, as well as other capabilities. Table 11 provides additional information on capabilities demonstrated during GT-07b. <24. Uncertainty in Ballistic Missile Defense System-level requirements could disrupt Spiral 8.2-5 schedule> While the C2BMC program has identified element-level requirements for Spiral 8.2-5, requirements for BMDS-level capabilities associated with this spiral are still under development. This spiral is intended to integrate the Long Range Discrimination Radar (LRDR) and provide additional BMDS-level planning, track processing, and battle management capabilities in the fiscal year 2021 timeframe, and its acquisition baselines are expected to be included for the first time in the upcoming BMDS Accountability Report.
However, according to the November 2018 program execution review, emerging BMDS-level requirements may delay efforts to complete the development of the spiral in time to support LRDR functionality in 2021. Program documentation also indicates that some BMDS capabilities, as well as future C2BMC spirals, could be at risk of deferral, including the subsequent Spiral 8.2-7. Appendix VI: Ground-based Midcourse Defense (GMD) Key findings for Fiscal Year 2018 MDA continues to increase GMD capacity and reliability. GMD issues uncovered during salvo test planning demonstrate the value of rigorous and frequent testing. MDA recently uncovered major design concerns with the Redesigned Kill Vehicle. <25. Program Overview> GMD is a missile defense interceptor system designed to defend the United States against a limited intermediate and intercontinental ballistic missile attack from rogue states, such as North Korea and Iran. To counter such threats to the homeland, GMD, in conjunction with a network of ground-, sea-, and space-based sensors, launches interceptors from missile fields at Fort Greely, Alaska, and Vandenberg Air Force Base, California. After launching from in-ground silos, the interceptor boosts towards the incoming enemy missile and releases an Exoatmospheric Kill Vehicle to find and destroy the threat. GMD also has ground support and fire control capabilities that the warfighter uses to operate the system. Table 12 provides key fiscal year 2018 GMD program facts. MDA fielded three new upgraded interceptors in early fiscal year 2018, meeting its directive from the Secretary of Defense to increase the total number of fielded interceptors to 44 by the end of 2017. The new interceptors are equipped with an upgraded version of the kill vehicle, called the Capability Enhancement (CE)-II Block I, and an upgraded boost vehicle, called the Configuration 2 (C2). MDA completed production and fielded eight of these new interceptors after successfully conducting its first intercept flight test of the upgraded interceptor in May 2017. Although the program encountered some production challenges with the C2 boost vehicle, such as multiple components initially failing qualification testing, the issues were not significant enough to prevent the program from meeting its December 2017 fielding goal. The upgraded interceptors were designed to be more reliable than their predecessors, and their addition to the fleet is intended to improve overall system reliability, as the older interceptors have a greater risk of experiencing in-flight reliability failures. Table 13 below describes the current fleet of 44 fielded interceptors and plans to field an additional 20 interceptors equipped with the Redesigned Kill Vehicle (RKV) and a modified Configuration 2 boost vehicle. MDA also successfully completed two ground tests in fiscal year 2018 to provide performance assessment data; develop interceptor shot doctrine and tactics, techniques, and procedures; and assess recent performance upgrades to GMD's fire control software. In addition to adding more CE-II Block I interceptors, in fiscal year 2018, MDA accelerated RKV development and initiated plans to increase the total number of fielded interceptors to 64 by the end of 2023 in response to a North Korean missile threat escalation in 2017.
In November 2017, DOD requested $2 billion for what it called the Missile Defeat and Defense Enhancements, $774 million of which was designated for GMD to: (a) build a new 20-silo missile field at Fort Greely, Alaska; (b) procure long-lead components for four additional interceptors; (c) continue booster development; (d) accelerate RKV development; and (e) add a target to an initial non-intercept RKV flight test. MDA subsequently issued an undefinitized contract action in the form of a sole-source contract modification to Boeing in January 2018 to extend the current GMD development and sustainment contract. The contract modification was awarded with a total maximum value not to exceed $6.565 billion for efforts pertaining to the Missile Defeat and Defense Enhancements and extended the current contract's period of performance to 2023. In March 2019, MDA definitized $4.141 billion of the contract to build the new missile field, among other items, but deferred the production of 20 additional interceptors. According to MDA, this contract modification brings the total cumulative value of the GMD development and sustainment contract, including options, to $10.8 billion. MDA conducted its first salvo flight test of the GMD system, called Flight Test Ground-based Interceptor (FTG)-11, on March 25, 2019, after nearly three decades of GMD development. GMD demonstrated a salvo intercept by firing a CE-II Block I-equipped interceptor followed by a CE-II-equipped interceptor. The leading interceptor destroyed the target, which represented an intercontinental ballistic missile equipped with countermeasures designed to complicate missile defense operations. With the target reentry vehicle destroyed, the trailing interceptor struck one of the remaining objects, as it was designed to do. Demonstrating a salvo capability is particularly important because, during a ballistic missile attack, the warfighter intends to launch a number of interceptors to increase the probability of successfully intercepting the incoming missile(s). FTG-11 was further delayed from the end of fiscal year 2018 to mid-fiscal year 2019 to accommodate other BMDS testing priorities while GMD fixed software issues uncovered during pre-test planning. MDA initially planned to conduct the salvo test in fiscal year 2006, but subsequent test failures, developmental challenges, and fielding priorities delayed the salvo test to fiscal year 2018. Figure 4 below provides an overview of the multiple times MDA has delayed the salvo test over the years. By mid-2017, GMD began experiencing delays developing a software upgrade intended to provide the kill vehicle with the functionality needed for FTG-11. Around that same time, MDA also realized that its BMDS-level integrated test schedule could not be executed as planned due to a lack of test range and asset availability. According to a May 2018 report MDA submitted to Congress, the agency delayed FTG-11 from the fourth quarter of fiscal year 2018 to the second quarter of fiscal year 2019 to de-conflict the integrated test schedule. Around the time MDA submitted the report to Congress, the GMD program also uncovered performance concerns with the kill vehicle software upgrade that further delayed the software's completion. As such, the delay to FTG-11 to accommodate other BMDS testing priorities also afforded MDA the time necessary to complete the software improvements and pre-test planning.
The performance issues MDA uncovered in pre-test planning for FTG-11 demonstrate the value of rigorous and frequent GMD testing. Congress and DOD have recognized the need for rigorous, operationally realistic GMD testing, including conducting a salvo test. Congress also passed, and the President signed into law, a requirement for an annual GMD flight test, subject to several exceptions. However, GMD has historically averaged less than one test per year, whereas Aegis Ballistic Missile Defense (BMD) Standard Missile (SM)-3 averaged over 2.5 tests per year (see figure 5 below). Moreover, GMD's prior tests achieved less than 50 percent operational realism, whereas Aegis BMD SM-3 averaged over 70 percent, according to Director for Operational Test and Evaluation assessments. The warfighter relies on testing to understand GMD's capabilities and limitations. Without this knowledge, the warfighter lacks the information to operate GMD effectively and efficiently. <26. MDA recently uncovered major design concerns with the Redesigned Kill Vehicle> Although MDA attempted to accelerate RKV development as part of the Missile Defeat and Defense Enhancements, the program accepted too much risk and has since experienced development challenges that likely set the program back by over two years and increased the program's cost by nearly $600 million, according to the agency. In response to advancements in the North Korean missile threat, MDA accelerated RKV development by concurrently performing development and production and by reducing the number of necessary flight tests to produce and field new RKV-equipped interceptors. Moreover, the RKV had already experienced development delays prior to the acceleration and was operating with no schedule margin for any further delays as it approached a critical design review in October 2018. The program subsequently encountered design, systems engineering, quality assurance, and manufacturing issues, which resulted in the program postponing the critical design review. The most significant development issue that emerged in 2018 pertained to RKV's performance and its planned use of commercial off-the-shelf hardware and re-use of Aegis SM-3 Block IIA components. In multiple previous reports, we raised concerns regarding MDA's use of these components as well as RKV's aggressive development schedule. In our May 2017 report, we also recommended that DOD perform a comprehensive review of the RKV. Although such a review could have potentially provided DOD with a better understanding of RKV's technical and schedule risks, DOD indicated in its response that the comprehensive review we recommended was unnecessary and therefore did not perform the review. Even though some of these risks have since manifested, we continue to believe an independent, thorough vetting of RKV's acquisition risks is necessary, as we previously recommended. Although RKV continued to carry significant acquisition risks, MDA implemented a recovery plan that attempted to minimize the addition of further risks by opting to prioritize controlling technical risks over preserving the 2023 fielding goal via an aggressive schedule. At the time of our review, the program projected that it would conduct a critical design review for RKV in early fiscal year 2021, followed by a non-intercept flight test in fiscal year 2022, an intercept test in fiscal year 2023, and deployment starting a few months later.
The extended design period provided the program additional time to source or design new components before moving forward with testing and production. Production decision gates also remained aligned to the critical design review and subsequent flight tests. The recovery plan also placed greater emphasis on addressing technical risks rather than fielding deadlines to determine RKV's path forward. Our prior work has shown that stabilizing system design before making major production commitments and relying on knowledge rather than deadlines to make acquisition decisions at key milestones are best practices of successful product developers. MDA's Deputy Director stated during a March 2019 press briefing that the best thing to do was to go back and assess that design and take the time to do it right. The Deputy Director also acknowledged that it would have been the wrong step to do what the Missile Defense Agency did years ago, which is to go ahead and produce what we've got and then deal with reliability issues in the fleet and erode the confidence of the warfighter. On May 24, 2019, MDA directed the GMD prime contractor, Boeing, to stop all work for the RKV. This action occurred a few days before the issuance of our report and, as such, we were not able to assess the effects and incorporate this information into our report. Appendix VII: Targets and Countermeasures. Key findings for fiscal year 2018: The Targets program met some of its fiscal year 2018 goals. Target availability will be a risk for the Missile Defense Agency's aggressive test schedule through 2021. The Medium Range Ballistic Missile T1/T2 target's continued cost growth and schedule delays have led to limited testing. <27. Program Overview> The Missile Defense Agency's (MDA) Targets and Countermeasures program (hereafter referred to as the Targets program) procures missiles to serve as targets during the developmental and operational testing of independent or integrated ballistic missile defense system (BMDS) elements. Specifically, this program supplies MDA with short-, medium-, intermediate-, and intercontinental-range targets to test, verify, and validate the BMDS elements' performance in threat-relevant environments. As targets are solely test assets, they are not operationally fielded. The number of targets that the program supplies varies based on each element's requirements and testing schedule. While some targets have been used for years, others have been recently added or are now being developed to more closely represent current and future threats. The quality and availability of these targets is instrumental to the execution of MDA's flight test schedule. Table 14 provides information on the Targets program's performance in fiscal year 2018. <28. Targets program met some of its fiscal year 2018 goals> The Targets program delivered four of eight targets as planned for fiscal year 2018 and delayed the remaining targets based on test schedule requirements and developmental complexities. One target, the intercontinental-range ballistic missile, was delayed 9 months, from the third quarter of fiscal year 2018 to the first quarter of fiscal year 2019, to align with changes to the test schedule for the Ground-based Midcourse Defense (GMD) program. The GMD program discovered some software issues with its system during pre-test planning that had to be resolved prior to moving forward with flight test FTG-11, which will use the intercontinental-range ballistic missile.
According to Targets program officials, the Targets program requested that the contractor delay the delivery of the intercontinental-range ballistic missile to avoid dealing with sensitive aspects of the target, such as fueling, that would necessitate special storage of the target. The two intermediate-range ballistic missiles for the BMDS-level operational test FTO-03 E1 were delayed from the second quarter of fiscal year 2018 to the first quarter of fiscal year 2019 to accommodate a new test for the Aegis Ballistic Missile Defense (BMD) Standard Missile-3 Block IIA program following the failure of one of its interceptors during flight test FTM-29. MDA's decision to conduct a new test, FTM-45, to ensure the cause of failure had been resolved created test range and asset availability issues that necessitated delaying the BMDS-level operational test FTO-03 E1, and the targets for the test, to a later point in time. The one medium-range ballistic missile for flight test FTM-31 was delayed due to developmental complexities and test range availability. The Targets program flew a total of six targets in fiscal year 2018 to support MDA's flight test schedule, including four short-range, one medium-range, and one intermediate-range, all of which performed nominally. The risk of a target malfunction or failure was lower in fiscal year 2018 than it has been in previous years because all of the targets had flown in flight tests previously (i.e., none of the targets were new). However, the Targets program is currently planning to fly two new medium-range targets in fiscal year 2019, and the flight tests with these targets either precede or are adjacent to other important tests in MDA's test plan. We have previously reported that new, untested targets introduce higher risk for malfunction or failure, which can mean costly and time-consuming retests. Accordingly, we recommended that MDA add a non-intercept flight test for each new target type to verify its performance and reduce risks for future flight tests. MDA has not implemented this recommendation and has continued to use new targets during flight tests. The Targets program conducted one of two critical design reviews in fiscal year 2018. A critical design review assesses the final design of a target to ensure that it can proceed into production and testing and can meet its stated performance requirements within cost, schedule, and risk. The Targets program conducted a critical design review for the medium-range ballistic missile type 3 configuration two (MRBM T3c2) target in the third quarter of fiscal year 2018. The MRBM T3c2 is a new target that Targets program officials said involves minimal design work because it leverages flight-proven hardware and a significant amount of heritage software from the intermediate- and intercontinental-range targets currently in production. However, the Targets program plans to conduct another critical design review for the MRBM T3c2 target in the first quarter of fiscal year 2019 due to the addition of hit detection software, which will enable real-time feedback on the target's impact points. The Targets program did not complete the critical design review for the short-range ballistic missile type four G (SRBM T4-G) in the third quarter of fiscal year 2018, after it had been delayed a year from the third quarter of fiscal year 2017. The Targets program subsequently delayed the critical design review for the SRBM T4-G target another year, to the third quarter of fiscal year 2019.
According to the Targets program, the delay in the critical design review for the SRBM T4-G is due to some technical challenges associated with developing the target and the contractor's limited staffing and workload. <29. Target availability will be a risk for MDA's aggressive test schedule through 2021> The Targets program may face challenges providing some targets to support MDA's test schedule due to the aggressiveness and volatility of the test schedule. We have previously found that MDA's test schedule is aggressive, in that it includes too many tests and little to no margin between tests to ensure executability. Thus, when setbacks occur, such as target or system malfunctions, the margin between tests erodes. MDA relieves pressure in its test schedule by delaying and canceling tests instead of including sufficient schedule margin to ensure executability, as we previously recommended. When the schedule slips for one test, there are often reverberating impacts to other tests. Consequently, MDA's test plan has continued to be volatile, with frequent delays, cancellations, and other changes, which make it challenging for the Targets program to manage all of the resources and schedules for its various targets to ensure successful, on-time availability and execution. When targets are not available for testing as planned, the tests either receive substitute targets, which can mean trade-offs in the performance aspects demonstrated during the test, or the test is delayed, which prolongs the demonstration of systems for the warfighter. One way that the Targets program has tried to ensure the availability of targets for MDA's aggressive test schedule is through the use of concurrency, that is, overlap between development, testing, and production for some targets. We have previously reported that some concurrency is understandable, but committing to production before development and testing is complete is a high-risk strategy that often results in performance shortfalls, unexpected cost increases, schedule delays, and test problems. The Targets program is using concurrency for the MRBM T3c2 target. According to the Targets program, it is using concurrency for the MRBM T3c2 target due to the urgent need to support essential testing within MDA's test schedule. The first flight test with the MRBM T3c2 target is FTM-31, which is scheduled for the fourth quarter of fiscal year 2019. Qualification testing and production are ongoing and scheduled to be completed in April 2019 (third quarter of fiscal year 2019). The target must be delivered in advance of the planned test date to complete final preparations for transport to the test site. Thus, the Targets program has very little to no time to resolve any issues prior to delivering the target for FTM-31, as shown in figure 6. According to the Targets program, late completion of qualification testing or failures that result in major redesigns may delay FTM-31, as well as significantly impact the cost and schedule for this target. Another way that the Targets program tries to ensure availability of targets for MDA's aggressive test schedule is to maintain aggressive delivery schedules for some targets. For example, the Targets program has an aggressive delivery schedule for its intermediate- and intercontinental-range targets through fiscal year 2021.
According to the contractor for the intermediate- and intercontinental-range targets, there are specific time-spacing requirements that the contractor needs in order to produce and configure targets for a test in relation to the production and configuration of targets for other tests. The contractor said that these specific time-spacing requirements are needed due to limitations with the testing, storage, movement, and transport of these targets. Specifically, we observed that the facility where these targets go through final assembly prior to use in a flight test can currently hold two fully assembled intermediate-range targets and the components for one intercontinental-range target, which is assembled at the launch site due to its size. As shown in figure 7, almost all of the tests through fiscal year 2021 are at risk of the target not being available as planned. One of the most severe risks to target availability is in fiscal year 2020, when an intermediate-range target is scheduled for a test in the third quarter, followed by a test using dual (i.e., two) intermediate-range targets in the following quarter. According to the contractor's specific time-spacing requirements, it needs five months between these tests, but the approximate amount of time between them is three months. According to the Defense Contract Management Agency (DCMA), if MDA includes multiple intermediate- and intercontinental-range missions in the test plan within close proximity without accounting for the contractor's specific time-spacing requirements, it will be, at best, very challenging for the contractor, and at worst, unachievable. <30. MRBM T1/T2 target's continued cost growth and schedule delays have led to limited testing> The Targets program has a target, the medium-range ballistic missile type one/type two (MRBM T1/T2), that continues to have cost growth and schedule delays, as we have previously reported. However, this target's costs have continued to be unstable, and despite changes and rebaselines, the contractor has been unable to meet projections. Figure 8 below shows the cost growth from 2014 through 2018. In 2017, the Targets program conducted a review of the MRBM T1/T2 target to address significant cost growth and set new projections. Again, in 2018, the Targets program and the contractor planned to conduct another review to address additional cost growth since the prior year's rebaseline. Despite relatively steady periods of performance following a rebaseline, DCMA officials believe that this contractor will continue to have cost growth. The DCMA established that some of the root causes for the cost growth are incomplete contract requirements and program requirements changes. Additionally, MDA and DCMA officials have acknowledged that the contractor did not adequately account for the costs associated with this target at the outset. How much cost growth there will be moving forward is unknown. In addition to cost growth, the MRBM T1/T2 target has continued to have schedule delays due to technical failures, which has led to the decision to forego some testing as a cost-cutting and time-saving measure. For example, the contractor's first flight of this target has been delayed approximately 5 years beyond the original plan, from the third quarter of fiscal year 2014 to the fourth quarter of fiscal year 2019. The primary reason for this delay has been an unusually high number of failures during pre-test qualification testing, according to the DCMA.
The DCMA believes that the test failures are due to the elimination of sub-section testing, which it understands the program and contractor initiated as a cost-cutting and time-saving measure. According to DCMA, sub-section testing involves piecing together different components of the target and then testing that sub-section before the target is fully assembled. This type of testing can help the contractor isolate any integration issues between components in a specific area of the target. However, DCMA said that the contractor is testing the components and then fully assembling the target. Once the target is fully assembled, the contractor is conducting testing and experiencing the unusually high number of failures. When these types of failures occur, according to DCMA, the contractor conducts root cause analysis to make corrections and resolve the issue; however, DCMA officials noted that there is no commonality in the root causes. Thus, the contractor may not understand what steps to take to resolve the issue and ensure that the target performs as expected during a flight test. It is currently unclear how the MRBM T1/T2 target will perform during upcoming tests because of the Targets program's decision to forego some qualification testing and not to confirm the target's performance through a non-intercept test, as we have previously recommended. However, the Targets program stated it considers the MRBM T1/T2 performance a minimal risk because the MRBM T1/T2 is largely based on a prior target's design, which, according to the program, was successfully flown twice. The MRBM T1/T2 is currently scheduled to fly in two critical tests in fiscal years 2019 and 2020. The first is an intercept flight test for the Terminal High Altitude Area Defense (THAAD) program in the fourth quarter of fiscal year 2019, which supports the delivery of an urgent capability to the warfighter. After this first flight test, the next test with this target is MDA's third and largest operational flight test of the BMDS to date, FTO-03 E2, with five targets flying simultaneously and three interacting weapon systems: THAAD, Patriot, and Aegis BMD. This test is currently scheduled for the fourth quarter of fiscal year 2020. Both of these tests are important, and the use of this new target in these tests increases the risk that the tests will not go as planned and that retests may be necessary; however, a retest for FTO-03 E2 would be extremely costly and very difficult to replan. Appendix VIII: Terminal High Altitude Area Defense (THAAD). Key findings for fiscal year 2018: THAAD met most of its fiscal year 2018 delivery and testing goals. THAAD is rebaselining to address Joint Emergent Operational Needs for Korea. THAAD may face challenges meeting its aggressive flight test schedule through 2021. MDA and the Army are closer to resolving the impasse regarding the transfer of THAAD. <31. Program Overview> THAAD is a rapidly deployable, globally transportable, ground-based system able to defend against short-, medium-, and limited intermediate-range ballistic missile attacks through a threat missile's middle to end stages of flight. A THAAD battery is composed of five major components: (1) launchers, (2) a fire control unit, (3) a communications system, (4) a radar, and (5) interceptors. The current program of record includes a total of seven batteries and 660 interceptors.
THAAD has delivered all seven batteries to the Army for operational use and plans to continue production through fiscal year 2029 for remaining items, such as interceptors and software upgrades. The Army has THAAD batteries deployed in Guam and South Korea. Table 15 provides key fiscal year 2018 THAAD program facts. THAAD met most of its fiscal year 2018 goals for deliveries and flight testing. THAAD exceeded the number of interceptors it had originally planned to deliver in fiscal year 2018 because it is recovering from a parts quality issue. The parts quality issue was with a connector in the interceptor, and although THAAD stopped interceptor deliveries in order to resolve the issue, it did not stop interceptor production. Consequently, there was a stockpile of interceptors awaiting only a redesigned connector in order to be delivered. We previously reported on this parts quality issue and noted that interceptor deliveries, with the redesigned connector, resumed in April 2017 and that interceptor production and deliveries have been steady since. In addition to delivering the interceptors, THAAD delivered the seventh, and final, battery of equipment. The delivery was later than previously planned to accommodate the Army's operational timelines and a new software upgrade to improve THAAD's performance against certain threats and in the presence of debris during the intercept of a threat missile. Although THAAD was successful in delivering its planned assets for fiscal year 2018, it only conducted one of two planned non-intercept tests. Specifically, FTX-36 was canceled due to a target availability issue with an external vendor, and its objectives were reassigned to FTX-35, which was successfully conducted in April 2018. FTX-35 supported the material release of the THAAD 3.0 software (i.e., it is available for use by the warfighter) and the requirement for interoperability testing. <32. THAAD is rebaselining to address Joint Emergent Operational Needs for Korea> THAAD is in the process of rebaselining from two separate acquisition efforts, known as THAAD I and II, to a single acquisition effort, known as THAAD III, to incorporate changes to address the United States Forces Korea (USFK) Joint Emergent Operational Needs (JEON). The purpose of a rebaseline is to update a program's established plans (i.e., baseline) due to a change in requirements, costs, or schedule. USFK JEON is a rapid acquisition effort to field ballistic missile defense solutions within the next 3 years to improve the defensive posture of Korea. Specifically, the USFK JEON's ballistic missile defense solutions are focused on improving integration between THAAD and Patriot, as shown in figure 9, which could enable the defense of larger areas and more assets and provide the warfighter greater flexibility in planning and executing defensive actions. In fiscal year 2018, THAAD delivered software upgrades that provided the initial integration between THAAD and Patriot to improve their ability to coordinate when engaging a threat missile, in support of USFK JEON. These upgrades were assessed in an April 2018 flight test, FTX-35, that demonstrated interoperability between THAAD and Patriot by exchanging messages over tactical data links while tracking a missile target, and in an April 2018 BMDS-level ground test, which provided further performance data in a simulation environment. THAAD currently plans to deliver USFK JEON upgrades through fiscal year 2021. We currently have ongoing work related to this effort, and details will be included in future reports.
MDA has nearly tripled THAAD's flight tests, from three to eight, between fiscal years 2019 and 2021 to support both USFK JEON, an urgent operational need for the Army, and interoperability testing. Consequently, the schedule margin between each test has decreased from more than a year to three to six months. According to our best practices for scheduling, a practical amount of schedule margin is needed to account for risks and uncertainties. In addition, schedule margin can provide time to analyze the results from the preceding test and correct any identified issues before moving forward with further testing, which may be reliant on the results of the preceding test. We have previously reported that MDA leaves little to no schedule margin in its flight test schedule to ensure executability and that the test schedule is success-oriented, in that it does not plan for failures, which makes it difficult to absorb test failures when they occur. In addition to the reduced schedule margin between THAAD's tests, some of its tests in this timeframe are higher risk. For example, one test will be flying a new, untested target, which increases the risks for that test, and another test will be the largest and most complex operational test to date, flying five targets simultaneously. Therefore, the test schedule is aggressive and complex and is at risk of not being completed as planned. However, THAAD has not identified its flight test schedule as a risk. Also, THAAD officials and an official from DOD's Director of Operational Test and Evaluation have asserted that the flight test schedule is doable, if everything goes according to plan, and that the biggest risk is fatigue among the personnel supporting the tests. While THAAD has a generally successful record for conducting flight tests, its current flight test schedule includes almost as many flight tests in 3 fiscal years as it did for the prior 9 fiscal years. Figure 10 below details the changes in THAAD's flight testing from its previous plan to its current plan. In addition to the increase in testing and lack of margin between tests, another risk to THAAD's flight test schedule is that some tests have not yet been funded, as shown in figure 10 above. Funding is essential to enable the planning and execution of each flight test. While THAAD is tracking the lack of funding for these tests as a risk, there is no mitigation strategy if all testing to support USFK JEON remains unfunded. If a single test is not funded or executed, the Army will perform a risk-based assessment using the available data to decide whether or not to deploy the capability for use by the warfighter. If THAAD does not conduct the testing as planned, it will forego the demonstration and confirmation of capability performance, which leaves the warfighter with the decision to either not use the capability or use it with an increased risk that it may not perform as intended. THAAD officials noted, however, that the Army's decision to deploy a capability is based on multiple sources of data, such as laboratory and ground testing, not just flight testing.
<33. MDA and Army closer to resolving the impasse regarding the transfer of THAAD and the Army Navy/Transportable Radar Surveillance and Control Model-2 (AN/TPY-2)> MDA and the Army are nearing a resolution regarding the transfer of the THAAD and AN/TPY-2 programs to the Army; however, the resolution will likely resemble the current arrangement, wherein MDA maintains primary responsibility through production and the Army operates and sustains them. We previously reported that MDA and the Army were at an impasse over the transfer of the THAAD and AN/TPY-2 programs because MDA was willing to transfer them as-is, but the programs as-is cannot meet the Army's mission requirements and it would take an estimated $10.1 billion to do so. Table 16 lists the differences between the programs of record and the Army's requirements. When MDA was established in 2002, it was tasked with using existing and new technologies to rapidly develop weapon systems for the warfighter, and once mature, the weapon systems were to be handed over to a military service for production, operation, and sustainment. At this point, MDA has some weapon systems for which production is either nearing completion or is complete. Consequently, Congress set forth a requirement in the National Defense Authorization Act for Fiscal Year 2018 that MDA transfer all programs in production to the military services by 2021, which includes THAAD and AN/TPY-2. As part of this requirement, Congress requested a status report on MDA's transfer of programs in production to the military services not later than December 12, 2018. MDA prepared a report for the Under Secretary of Defense for Acquisition and Sustainment, who then requested that the deadline be extended to June 2019 to enable further analysis and development of a viable option. However, according to program officials, at a March 2018 meeting between MDA and the Army, the Army stated that it prefers that THAAD and AN/TPY-2 remain with MDA. According to officials, they discussed transferring only sustainment, because MDA is best suited to maintain primary responsibility through production in order to integrate the BMDS and keep pace with the threat, as well as to protect resources through the budgetary process.
Appendix IX: Comments from the Department of Defense
Appendix X: GAO Contact and Staff Acknowledgments
<34. GAO Contact>
<35. Staff Acknowledgments>
In addition to the contact named above, LaTonya Miller, Assistant Director; Matthew Ambrose; Pete Anderson; James Bennett; Jon Felbinger; Kurt Gurka; Helena Johnson; Joe Kirschbaum; Wiktor Niewiadomski; Steven Stern; Brian Tittle; Hai V. Tran; and Alyssa Weir made key contributions to this report.
Related GAO Products
Missile Defense: Some Progress Delivering Capabilities, but Challenges with Testing Transparency and Requirements Development Need to Be Addressed. GAO-17-381. Washington, D.C.: May 2017.
Missile Defense: Opportunities Exist to Reduce Acquisition Risk and Improve Reporting on System Capabilities. GAO-15-345. Washington, D.C.: May 2015.
Missile Defense: Mixed Progress in Achieving Acquisition Goals and Improving Accountability. GAO-14-351. Washington, D.C.: Apr. 2014.
Missile Defense: Opportunity to Refocus on Strengthening Acquisition Management. GAO-13-432. Washington, D.C.: Apr. 2013.
Missile Defense: Opportunity Exists to Strengthen Acquisitions by Reducing Concurrency. GAO-12-486. Washington, D.C.: Apr. 2012.
Missile Defense: Actions Needed to Improve Transparency and Accountability. GAO-11-372. Washington, D.C.: Mar. 2011.
Defense Acquisitions: Missile Defense Transition Provides Opportunity to Strengthen Acquisition Approach. GAO-10-311. Washington, D.C.: Feb. 2010.
Defense Acquisitions: Production and Fielding of Missile Defense Components Continue with Less Testing and Validation Than Planned. GAO-09-338. Washington, D.C.: Mar. 2009.
Defense Acquisitions: Progress Made in Fielding Missile Defense, but Program is Short of Meeting Goals. GAO-08-448. Washington, D.C.: Mar. 2008.
Defense Acquisitions: Missile Defense Acquisition Strategy Generates Results but Delivers Less at a Higher Cost. GAO-07-387. Washington, D.C.: Mar. 2007.
Defense Acquisitions: Missile Defense Agency Fields Initial Capability but Falls Short of Original Goals. GAO-06-327. Washington, D.C.: Mar. 2006.
Defense Acquisitions: Status of Ballistic Missile Defense Program in 2004. GAO-05-243. Washington, D.C.: Mar. 2005.
Missile Defense: Actions Are Needed to Enhance Testing and Accountability. GAO-04-409. Washington, D.C.: Apr. 2004.
Why GAO Did This Study
For over half a century, the Department of Defense (DOD) has funded efforts to defend the United States from ballistic missile attacks. From 2002 to 2017, MDA received about $142 billion, and it has requested $46.7 billion through fiscal year 2023 to develop the BMDS. The BMDS consists of diverse and highly complex land-, sea-, and space-based systems and assets located across the globe, including planned sites in Romania and Poland to protect United States forces and allies in Europe.
The National Defense Authorization Act for Fiscal Year 2012, as amended, included a provision that GAO annually assess and report on MDA's progress. Among other objectives, this report addresses for fiscal year 2018 (1) the progress MDA made in achieving delivery and testing goals and (2) the extent to which MDA made progress in developing and delivering integrated regional BMDS capabilities. GAO reviewed the planned fiscal year 2018 baselines and other program documentation and assessed them against program and baseline reviews and GAO's acquisition best practices guides, and interviewed officials from relevant agencies.
What GAO Found
In fiscal year 2018, the Missile Defense Agency (MDA) made progress toward achieving its delivery and testing goals for some of the individual systems—known as elements—that combine and integrate to create the Ballistic Missile Defense System (BMDS). MDA is also making progress testing for integrated capabilities, which are achieved by combining BMDS elements. However, MDA did not meet all of its planned goals. The figure below shows MDA's progress delivering assets and conducting tests against its fiscal year 2018 plans.
MDA delivered a significant integrated capability for defending the United States, meeting a goal set by the Secretary of Defense in March 2013 to increase the inventory of ground-based interceptors by December 2017.
Other on-time deliveries included software upgrades and additional assets. However, developmental challenges and testing failures contributed to MDA being unable to deliver all assets as planned.
MDA completed four of eight flight tests. MDA successfully conducted testing to support a production decision; however, it was unable to complete its annual test plan due to failures, cancellations, and delays.
MDA has delayed the delivery of the BMDS's European Phased Adaptive Approach (EPAA) Phase 3—which is intended to protect allies from Iranian threats—until 2020. Construction contractor issues at the planned Aegis Ashore site in Poland drove the delay. At the same time, testing for EPAA Phase 3 against planned threats has been substantially reduced and other vital testing has been deferred until after delivery. MDA officials consider EPAA testing for Phase 3 delivery complete. However, DOD guidance and acquisition best practices stress the importance of testing to understand the extent of capabilities and how to deploy them. The 18-month delay to EPAA Phase 3 provides MDA an opportunity to conduct additional testing and collect more performance data. This testing could provide the warfighter with more information and confidence in the system's ability to protect our allies against expected ballistic missile threats.
What GAO Recommends
GAO is recommending that MDA use the schedule margin afforded by the European Phased Adaptive Approach Phase 3 delay to conduct testing necessary to more thoroughly assess, prior to delivery, the capabilities and limitations of Phase 3 against the expected missile threat. DOD partially concurred with our recommendation. GAO continues to believe the recommendation is valid.
<1. Background> <1.1. Roles and Responsibilities> CBP's Office of Field Operations (OFO) is responsible for inspecting pedestrians, passengers, and cargo at 110 land POEs, which have a combined total of 173 crossings (see figure 1). OFO has 20 field offices nationwide that oversee the operations of all POEs within their designated areas of responsibility. <1.2. Traveler and Cargo Entry Requirements> Travelers seeking entry to the United States through a land POE are required to present valid travel documents. In response to a recommendation from the 9/11 Commission and the Intelligence Reform and Terrorism Prevention Act of 2004, DHS and the Department of State implemented the Western Hemisphere Travel Initiative, which requires all travelers to present documents that denote identity and citizenship, such as a passport, when entering the United States. Foreign nationals may have particular travel document requirements, such as a visa or other entry permit, which vary based on such factors as nationality and the purpose of travel. See table 1 for examples of the types of acceptable documents for travelers coming into the United States through land POEs. There are also documentary requirements for commercial vehicles with cargo seeking entry into the United States. The Trade Act of 2002, as amended, establishes requirements for commercial vehicles with cargo to electronically submit information to CBP at least 1 hour in advance of arrival at a land POE. The information required includes data on the vehicle (e.g., Vehicle Identification Number or license plate number), the shipper, the carrier, scheduled date and time of arrival, and the description and weight of the cargo, among other things. Commercial vehicles with cargo valued less than $2,500 are considered informal entries that are exempt from the advance cargo information reporting requirements.
CBP policy requires that high-risk cargo be targeted for additional research and analysis, and it generally also requires that high-risk cargo undergo a secondary examination once it arrives at the POE. In addition, CBP personnel at the POEs or field offices may review seizure and arrest reports and other law enforcement information to identify individuals or vehicles that have associations with known criminals and place a lookout on them in TECS, CBP's system for processing travelers. TECS will flag travelers with lookouts for additional inspection if they arrive at the land POE. CBP personnel at the POEs or field offices may also use this information to develop products on recent trends that can help inform inspections. Once passengers, pedestrians, and commercial vehicles arrive at a land POE, CBP has various processes for inspecting them, including preprimary, primary, and secondary inspections, as explained below (see figure 2). Preprimary. In the preprimary area, both commercial vehicles and passenger vehicles will generally pass through radiation portal monitors that are designed to detect radiation and help prevent the smuggling of nuclear material into the United States (see figure 3). In the passenger vehicle environment, the preprimary area also contains license plate readers and Radio Frequency Identification (RFID) readers to capture information on vehicles and RFID-enabled travel documents. Examples of RFID-enabled travel documents include passport cards and border crossing cards. When a vehicle enters the preprimary inspection lane, a sensor grid detects that the vehicle has entered the lane. The sensors trigger a flash strobe that illuminates the area, and license plate reader cameras take a picture of the front and rear of the vehicle. The information associated with the license plate number is run against law enforcement databases to alert the officer during the primary inspection if there is a potential issue with the vehicle or its occupants. Similarly, as a vehicle approaches the primary inspection area, travelers are directed to hold up their RFID travel documents to be read by RFID readers. Some land POEs may also have RFID readers for pedestrians. See figure 4 for examples of a license plate reader and RFID reader. The preprimary area is also used to direct travelers to different lanes according to the type of travel documents they have. For example, CBP may use signs to designate specific lanes for travelers with RFID or other machine-readable documents ("Ready lanes") or for trusted travelers (see figure 5). Primary inspection. During the primary inspection, CBP officers inspect travelers, vehicles, and cargo to determine compliance with U.S. law and admissibility to the United States. A CBP officer is to examine travel documents to ensure their validity and visually match the traveler to the photo identification to confirm the traveler's identity. All travelers' names and license plates generally are to be screened against law enforcement databases. As previously discussed, this screening process may begin in the preprimary area when license plate and RFID readers collect data on vehicles and travelers with RFID travel documents. CBP officers may also manually enter data on travelers and vehicles during the primary inspection. A CBP officer is to interview travelers to obtain a declaration of citizenship, the purpose of travel, and items acquired outside the United States.
For commercial vehicles, the CBP officer may also review the manifest and the results of targeting, if any. All CBP officers conducting primary inspections are to wear personal radiation detectors, which are small devices designed to be worn on a belt, to help detect radiation and help ensure the safety of officers and the traveling public. If the inspection cannot be completed at the primary inspection location, a more thorough inspection is required and the travelers, vehicles, or cargo are to be referred for secondary inspection. Travelers, vehicles, or cargo can be directed to secondary inspection for a wide range of issues, including when radiation is detected (either on the traveler or from his or her vehicle), the traveler does not have required travel documents, the officer has questions about the validity of travel documents, the traveler's information matches information that may be of concern from law enforcement or intelligence data, or the officer suspects that the traveler is carrying contraband. Foreign visitors to the United States (with the exception of Canadian citizens and Mexican citizens using border crossing cards) may also be referred to secondary inspection to complete processing of their admission records, referred to as Form I-94s. Additionally, CBP selects passenger vehicles at random to be sent to a secondary inspection for a Compliance Examination (COMPEX). COMPEX is a program designed to help measure the effectiveness of CBP's inspections and is discussed in more detail later in this report. Secondary inspection. A secondary inspection may include a CBP officer conducting further questioning of travelers or additional examination of the traveler, vehicle, or cargo. CBP may use canines, non-intrusive inspection (NII) X-ray, Gamma-ray, or radiation detection equipment, or physically examine the traveler, vehicle, or cargo. CBP may also examine a traveler's electronic devices, such as computers, tablets, and mobile phones. To examine cargo, CBP may require the contents to be offloaded. When foreign visitors are referred to a secondary inspection to process Form I-94 admission records, CBP officers are to conduct interviews and additional database screening, including biometric checks of fingerprints. CBP policy calls for documentation, immigration, and other admissibility issues to be resolved before a traveler or vehicle is permitted to enter the country. Below, figure 6 shows a canine examination and figure 7 shows an example of NII equipment and scans of vehicles with indicators of contraband smuggling. CBP also has additional processes to enhance preprimary, primary, or secondary inspections at land POEs, including: Canines. CBP has canines that can detect concealed humans, narcotics, currency, firearms, and agricultural products. Depending on availability, land POEs may deploy officers with canines to walk among the vehicles in the preprimary area waiting to reach an inspection booth. Canines may also be used in the pedestrian and commercial vehicle environments. Anti-Terrorism Contraband Enforcement Teams. These teams conduct special operations that focus on anti-terrorism and the interdiction of narcotics, alien smugglers, and fraudulent documents, among other contraband. For example, at one POE we visited, members of the Anti-Terrorism Contraband Enforcement Team told us they often walk among the passenger vehicles in the preprimary area to look for indicators of illicit activity.
Tactical Terrorism Response Teams. These teams provide immediate counterterrorism response capabilities at some land POEs. Members of Tactical Terrorism Response Teams receive counterterrorism training and are responsible for interviewing known and suspected terrorists at ports of entry to help determine admissibility and collect intelligence. Blitzes and other local practices. CBP officers at land POEs may perform "blitzes," in which inspections are enhanced for a period of time. For example, CBP officials told us that blitzes may include looking in all vehicle trunks during the primary inspection or sending additional vehicles for NII (X-ray) exams during a certain period of time. Officers at the POEs we visited also discussed other local initiatives to enhance inspections. For example, one POE we visited used NII to screen all commercial vehicles. Another POE we visited partnered with the local authority that manages an international bridge to deploy license plate readers for commercial vehicles before the vehicles enter the bridge into the United States. The bridge authority uses the license plate reading to check whether the commercial vehicle has submitted the required e-manifest to CBP; only those commercial vehicles that have submitted the required e-manifests are allowed to cross. Officials from CBP told us that, in the future, CBP and the bridge authority plan to deploy additional technology in the preprimary area on the non-U.S. side of the border, including facial recognition and NII. In addition, CBP has plans to make future improvements to inspection processes. For example, CBP is conducting tests to use facial recognition technology as part of inspections at land POEs. According to CBP, facial recognition technology may enhance its ability to detect imposters by matching facial images of those arriving with images on file. CBP began a facial recognition test in the passenger vehicle environment at the Anzalduas, Texas, land POE in August 2018 and expects the test to run for up to 1 year. In September 2018, CBP initiated a project at the Port of San Luis, Arizona, to demonstrate the feasibility of acquiring photos of all arriving pedestrians and comparing those photos to photos on file. Subsequently, in October 2018, CBP officials stated they extended this demonstration project to the Port of Nogales, Arizona. According to CBP, these pedestrian demonstration projects built upon an earlier pilot project at the Port of Otay Mesa, California, which ran from February through May 2016. Testing this technology is one of CBP's key efforts in developing the capability to fulfill DHS's statutory responsibility to collect biometric information from arriving and departing aliens. <2.2. Many of CBP's Policies Related to Inspections at Land POEs Have Not Been Reviewed and Updated to Reflect Changes Consistent with CBP Guidance> CBP has numerous directives, handbooks, and other official instructions that specify policies and procedures for inspections at land POEs. However, many of these documents have not been reviewed and updated as required by OFO's January 2016 OFO Policy Management Handbook. This guidance states that all of OFO's policies must be reviewed and updated, as necessary, at least once every 3 years to help ensure the timely provision of uniform and relevant policy. In some cases, the policy documents issued by OFO or its program offices have not been reviewed and updated for almost two decades.
See table 2 below for a list of such policies we identified that have not been reviewed and updated to reflect changes in processes since their issuance, consistent with OFO's policy management requirements. As a result of policies not being reviewed and updated by OFO, these policies, as currently written, do not fully reflect changes in technology, operating conditions, or inspection processes. For example: The 2008 policy on processing travelers and vehicles at land POEs does not include information on the Consolidated Secondary Inspection System, the current system used to record secondary inspections. It also directs officers to follow guidance in the Inspector's Field Manual, which has since been discontinued. The 1999 Compliance Measurement directive refers to procedures for a paper-based system, while the system is now electronic, according to officials. The 2004 Personal Search Handbook does not incorporate the 2015 National Standards on Transport, Escort, Detention, and Search policy that prohibited CBP officers from observing personal cavity searches conducted by medical personnel. The 1999 Narcotics Interdiction Handbook and the 2002 canine policies do not address fentanyl. Fentanyl is a synthetic opioid that requires special handling and has been a main contributor to the recent spike in overdose deaths in the United States, according to the Centers for Disease Control and Prevention. OFO's Planning, Program, Analysis, and Evaluation (PPAE) Quality Assurance Enterprise Division (QAED) is responsible for monitoring whether each program office reviews and updates, as needed, the policies for its programs. QAED has an internal tracking system and sends out reminders to CBP program offices about policies that need to be reviewed and, if necessary, updated. QAED officials acknowledged that many policies need to be updated because some are almost 20 years old and many technological and other changes have occurred that may not be described in existing policies. CBP officials stated that they are in the process of updating some policies, including the 1999 Compliance Measurement directive, the 2002 Canine Enforcement Program Handbook, the 2004 Personal Search Handbook, and the 2008 Primary Processing of Travelers and Vehicles Seeking Entry to the United States at Land Ports of Entry directive. Officials attributed the lack of timely updating to several factors. OFO officials responsible for reviewing and updating policies said that the process can be time-consuming and difficult, as there may be many needed changes or the process may include conducting site visits to identify best practices and areas for improvement. In addition, QAED officials responsible for monitoring policy updates said QAED has 12 staff and is responsible for three OFO-wide mission areas in addition to policy management, as well as a number of other responsibilities within PPAE. Further, according to QAED officials, they do not have authority to require cognizant program offices to review and update their policies in line with the OFO Policy Management Handbook. QAED officials agreed that CBP and OFO could better ensure compliance with OFO's policy updating requirements. OFO's 2016 OFO Policy Management Handbook states that the timely provision of uniform and relevant policy facilitates informed decision-making at all levels of the organization and that an effective policy management program is critical to the success of any organization.
By reviewing and updating, as necessary, all relevant policies related to land POE inspections, consistent with OFO's policy handbook, CBP could better ensure that officers have the guidance needed to consistently and properly inspect vehicles and their passengers, pedestrians, and commercial vehicles. <3. CBP Uses Various Mechanisms to Monitor Inspection Activities at Land POEs, But Does Not Fully Analyze the Results of Some National Monitoring Programs> <CBP Monitors Inspections at Land POEs Using Mechanisms Deployed at the Port, Field Office, and National Levels> CBP uses various mechanisms at the port, field office, and national levels to monitor inspection activities at land POEs to help ensure that CBP officers are following policies and procedures. At the POE level, supervisors and port management monitor many of the inspection tasks in real time by reviewing computer-based records and logs of inspections and observing inspections. CBP also provides tools to the ports to assist with supervisory monitoring efforts, such as Enforcement Link Mobile Operations Red Flag (ELMOrf), a computer application that provides alerts to supervisors via mobile device when certain types of events occur during primary inspections that warrant supervisory oversight. Table 3 below provides key monitoring mechanisms CBP uses for its land POE inspections at the port level. At the field office level, field office staff may monitor land POE activities within their area of responsibility through periodic assessments of supervisor monitoring duties, such as inspection report reviews. In addition, all field offices have Integrity Officers tasked with identifying potential corruption and officer training issues at the ports. Table 4 below provides key monitoring mechanisms CBP uses for its land POE inspections at the field office level. CBP's national-level initiatives include its Self-Inspection Program (SIP) and the Operational Field Testing Division's covert testing program. The Self-Inspection Program is an annual internal self-assessment of various CBP component offices and includes assessment of various inspection activities at POEs. Table 5 below provides key monitoring mechanisms CBP uses for its land POE inspections at the national level. <3.1. CBP Conducts Analysis of the Results of National Level Monitoring Programs, But Opportunities Exist to Enhance Analyses> <3.1.1. CBP Analyzes Self-Inspection Program Results Each Year, But Does Not Analyze Results of Individual POEs to Identify Reoccurring Deficiencies> CBP produces CBP-wide analyses of the SIP results it collects annually, but the analyses are not done in a manner, such as at the port level and over multiple years, that would allow CBP to identify potentially reoccurring deficiencies at individual POEs. The Management Inspections Division issues a report each year that provides comprehensive SIP results across CBP offices for that year and highlights compliance issues identified (referred to as the SIP Summary Analysis Report). Similarly, OFO issues an annual report that provides comprehensive results and highlights compliance issues identified across OFO's programs for that year. See figure 8 for an overview of the SIP process. With regard to the 2018 SIP Summary Analysis Report, the Management Inspections Division reported that approximately 80 percent of all SIP worksheets (which document the results of the self-assessments) submitted across CBP in the 2018 cycle had no deficient conditions.
The report also identified the six worksheets with the highest number of deficient conditions across OFO and the questions associated with the most corrective actions for those worksheets. For worksheets that the report did not highlight, additional summaries of the OFO data are provided, including the number of worksheets submitted and the number of worksheets reporting corrective actions. OFO s SIP annual report also provides summaries of the SIP results, but with additional analysis specific to OFO. The 2018 OFO SIP annual report calculated an overall compliance rate of 92.4 percent across the 31,947 questions for worksheets completed by OFO that year. The report also provided summaries of data used to calculate compliance rates for each worksheet assigned to OFO and included trends in compliance rates for each over 3 years. Additionally, the report provided summaries of the data for each OFO field office that includes number of worksheets submitted, the number of deficient conditions in the given year, and the number of corrective actions for each POE under the field office. Beginning in 2017, the OFO report provided an analysis of any SIP worksheet question with a compliance rate below 90 percent in a given year and the actions planned or taken to increase future compliance. While these reports provide useful summary data of CBP s monitoring of inspections activities and recommendations for increasing compliance for some programs and processes, our analysis of SIP results showed that opportunities exist for CBP to identify potential reoccurring deficiencies at individual land POEs over time. Specifically, our analysis of SIP results from 2013 through 2018 identified reoccurring instances of noncompliance at individual land POEs indicating the possibility that the corrective actions taken each year to address the deficiencies did not fully remediate them. We found that management at the land POEs with reoccurring instances of deficiencies took corrective actions each year to address the identified deficiencies, and in some instances, management proposed and implemented the same corrective action in multiple years to try to resolve the identified deficiency. While the Management Inspections Division and OFO reports provide some useful analysis to identify programs or specific activities across CBP to target for remediation each year, these reports have not positioned CBP to identify and more effectively address reoccurring deficiencies at individual POEs. Standards for Internal Control in the Federal Government provides that management should use quality information to achieve the entity s objectives and management should process the obtained data into quality information that supports the internal control system. Furthermore, management should remediate identified internal control deficiencies on a timely basis and the audit resolution process is completed only after action has been taken that (1) corrects identified deficiencies, (2) produces improvements, or (3) demonstrates that the findings and recommendations do not warrant management action. Additionally, management, with oversight from the oversight body, is to monitor the status of remediation efforts so that they are completed on a timely basis. Management Inspections Division and OFO officials stated that their analyses are designed to identify systemic compliance issues across OFO. In addition, OFO officials stated that port management is responsible for addressing compliance issues of individual land POEs. 
However, without an analysis to identify reoccurring deficiencies at all individual land POEs, the Management Inspections Division and OFO are not well positioned to determine whether CBP may need to take additional or alternative actions to more effectively address the deficiencies at these ports. By enhancing analysis of the SIP data to include analysis at the port level over time, CBP could better identify potential reoccurring deficiencies with inspections at land POEs and could be better positioned to more fully remediate them and ensure compliance with inspection policies. <3.1.2. CBP Has Produced Comprehensive Analyses of Some Covert Testing Results, But Does Not Have a Policy to Conduct These Analyses on a Periodic Basis> CBP has produced comprehensive analyses of the results from some of its covert operational tests conducted at land POEs in fiscal years 2013, 2014 and 2018. These comprehensive assessments of aggregated covert test results provide analysis of trends, common vulnerabilities, and best practices used in inspections across land POEs; however, CBP has not developed comprehensive assessments for various other covert tests it conducted during this time frame. Of the 213 land POE tests conducted from fiscal years 2013 through 2018, 78 were included in comprehensive assessments. CBP s Operational Field Testing Division (OFTD) is responsible for covertly assessing and evaluating the integrity of CBP s personnel, technologies, and policies and procedures at land POEs. From fiscal years 2013 through 2018, OFTD conducted a variety of tests of inspections at land POEs including: fraudulent document and imposter tests, canine contraband detection tests, biological agent detection tests, NII equipment contraband detection tests, radiation detection capabilities tests, and assessments of Tactical Terrorism Response Teams. See figure 9 for an overview of the process for fraudulent document and imposter covert testing. For tests conducted from fiscal years 2013 to 2018, OFTD produced three comprehensive assessments related to tests it conducted at land POEs. One assessment compiled the results of 129 fraudulent document and imposter tests conducted at 10 land POEs and 14 airports in fiscal years 2012 and 2013. Another assessment covered 34 NII equipment tests conducted in fiscal years 2013 and 2014 at land POEs and seaports, of which nine of the tests were at land POEs. The third assessment, issued in 2018, covered 33 NII equipment tests conducted in fiscal year 2018 at six land POEs. While OFTD produced comprehensive assessments for these tests, OFTD did not comprehensively analyze the results of various other types of covert tests conducted from fiscal years 2013 through 2018. Such covert tests included 34 tests for canine detection of contraband, 11 for agricultural and biological agent detection, seven for radiation detection, and seven for Tactical Terrorism Response Team response. Additionally, OFTD conducted another 72 fraudulent document and imposter tests and six NII equipment tests over this time period that were not included in the comprehensive assessments described above. Overall, we found that 135 of 213 tests conducted from fiscal years 2013 through 2018 were not included in comprehensive assessments. For tests not included in comprehensive assessments, analysis of the test is limited to a test summary document that is produced following a test or group of tests conducted during a field visit to one location. 
The summaries identify officer actions during the test and record whether the test resulted in an interdiction of the test subject. Some of the summaries also include findings, identify leading practices, and provide recommendations to the POE where the test or tests were conducted to improve the inspections. While these summaries provide useful information, they encompass the results of tests at individual POEs and do not provide an evaluation of aggregated test results that could more broadly identify vulnerabilities, trends, and best practices across land POEs as provided in the comprehensive assessments. According to OFTD officials, they have drafted a policy and standard operating procedures that would address comprehensive analysis of covert testing results, but these have been in development for 3 years and have not been finalized. OFTD did not provide further details or documentation of the draft policy or procedures or a date for completion. Additionally, OFTD officials stated that in some cases they did not have a sufficient number of covert test results to conduct a comprehensive analysis. OFTD officials also stated that an additional comprehensive assessment of fraudulent document and imposter tests was not needed as OFTD completed this type of assessment in 2013 and no new findings were generated by subsequent tests. We recognize that the small number of certain tests limit OFTD s ability to conduct comprehensive analyses. However, we found that from fiscal years 2013 through 2018 over half (135 of 213) of the tests conducted at land POEs were not included in a comprehensive assessment and a formalized policy could better position OFTD to be able to conduct these analyses moving forward. Further, our analysis of covert test interdiction rates suggests that additional periodic comprehensive analysis could help inform CBP management of vulnerabilities, systemic inspection deficiencies, leading practices observed, and ways to improve inspection processes. Moreover, the reasons for non-interdiction in the fraudulent document and imposter covert tests conducted since the last comprehensive assessment may be different due to changes in inspection technologies, training, personnel, or the threat environment. OFTD officials agreed and stated that another comprehensive assessment is being developed based on covert tests focused on facial recognition technologies. Standards for Internal Control in the Federal Government provides that management should implement control activities through policies, including documenting such policies. In addition, management should monitor the internal control system through ongoing monitoring and separate evaluations. These evaluations are to be used periodically and may provide feedback on the effectiveness of ongoing monitoring. Furthermore, management should evaluate and document issues identified through separate evaluations to identify internal control deficiencies and monitor changes in the internal control system. By implementing a policy for conducting periodic comprehensive analyses of its covert operational test results, CBP would be better positioned to understand the effectiveness of inspection policies, personnel, and technologies across land POEs over time. Furthermore, periodic analyses could help identify inspection vulnerabilities that may be occurring more broadly, trends in these vulnerabilities, and best practices in mitigating such vulnerabilities on a more consistent basis. <4. 
CBP Has Performance Measures to Assess Its Land POE Inspections but Has Not Set a Target for One Measure That Drives Performance Improvements> CBP uses various sets of performance measures including organizational performance measures, internal performance measures, program and port-specific measures, and measures required by the National Defense Authorization Act for Fiscal Year 2017 (NDAA). CBP reports organizational measures externally to inform program management while internal measures track additional areas of performance to inform OFO management. In addition, some CBP programs and ports track measures specific to their performance at land POEs. DHS also reports measures that cover CBP s efforts to detect illegal activity at land POEs as required by the NDAA. These performance measures generally reflect attributes of effective measures, however, CBP has not set an ambitious target for one measure the land border interception rate. <4.1. CBP Uses Various Sets of Measures to Evaluate Its Efforts to Detect Illegal Activity at Land POEs> <4.1.1. Organizational Performance Measures> CBP tracks and externally reports the results of performance measures annually in its Organizational Performance Measures Overview. The Overview states that it serves as a tool for leadership to manage programs using performance information and includes performance measure descriptions, targets, results, and trends over time. CBP developed and reports on two measures that cover the detection of illegal activity among inbound passenger vehicle and cargo traffic at land POEs: (1) the estimated percentage of land border privately-owned vehicles with passengers who are compliant with laws, rules, and regulations; and (2) the percentage of inbound cargo identified as high-risk that is assessed or scanned prior to departure or at arrival at a U.S. air, land, and sea POE. CBP also tracks, but does not report, data on the percentage of high-risk inbound cargo assessed or scanned prior to departure or upon arrival at U.S. land POEs, which in fiscal year 2018 was 97.7 percent. See figures 10 and 11 for CBP s reported results for these measures by fiscal year. CBP measures the percentage of privately-owned vehicles with passengers who are compliant with all federal, state, and local laws and regulations through its COMPEX program. COMPEX is a statistical survey in which vehicles cleared for entry into the United States by CBP are randomly selected for a comprehensive audit through a computer- generated random sample. CBP is to conduct an audit of the selected vehicles by doing a secondary inspection using a standardized system of checks to identify any violations that were missed during the routine inspection. Violations found in the COMPEX audits represent violations missed by CBP and are used by CBP to estimate the total number of violations missed by CBP operations. According to officials, CBP uses these data along with data on violations CBP officers identify during the normal inspection process to calculate the overall estimated percentage of land border privately-owned vehicles with passengers compliant with laws, rules, and regulations. As shown in Figure 10, CBP has set a target rate of 99.5 percent compliance. From fiscal years 2015 through 2018, CBP reported estimated rates of over 99 percent compliance. 
While CBP nearly met its target across all of these years, CBP plans to work with field office management and review COMPEX secondary inspection findings to identify noncompliance trends and identify the underlying reasons for noncompliance. In addition, CBP plans to develop materials to educate travelers on relevant laws and requirements. As previously discussed, in the cargo environment, CBP identifies potentially high-risk cargo through the Automated Targeting System. CBP then tracks the percentage of such cargo assessed or scanned prior to arrival or at a land POE. As shown in Figure 11, CBP has set a target rate of identifying 100 percent of potentially high-risk cargo. For fiscal years 2014 through 2017, CBP reported rates of 99 percent or higher, and in 2018, the rate was 97.9 percent. According to CBP, it did not meet its target rate of 100 percent in fiscal year 2018 because of challenges related to changes in high-risk status that occur en route, data entry errors, and logistical or scheduling errors. OFO plans to address these challenges by working with internal stakeholders to resolve status- tracking problems and information-processing errors and by working with shippers and carriers to rectify logistical and scheduling issues. In addition to its externally-reported organizational performance measures, OFO tracks two performance measures internally that relate to efforts to detect illegal activity among inbound traffic at land POEs: the percentage of individuals screened against law enforcement databases for entry into the United States and the land border interception rate for passengers in privately-owned vehicles with major violations. See figure 12 for CBP s performance by fiscal year. CBP uses COMPEX data to estimate the land border interception rate for privately-owned vehicles containing passengers with major violations (interception rate). This represents the number of major violations in privately-owned vehicles at the border that CBP intercepts divided by the estimated total number of major violations. CBP tracked the percentage of individuals screened against law enforcement databases for entry into the United States across fiscal years 2013 through 2018, but plans to discontinue use of this measure beginning in fiscal year 2019 according to CBP officials. CBP officials stated that this measure was originally created to track progress toward electronic screening of travel documents as part of the Western Hemisphere Travel Initiative. This measure tracks the percentage of travelers screened against law enforcement databases using electronically readable documents. According to CBP officials, there have been a variety of technology infrastructure upgrades and changes to vehicle processing software at land POEs that have reduced the relevance of this measure for land POE operations and CBP plans to discontinue its use as a result. <4.1.2. Program and Port-Specific Measures> Some CBP programs that operate as part of the inspection process track performance data on the results of their program activities. For example, CBP tracks results from the Canine Program. Canine handlers are to enter performance data into the Canine Tracking System locally at land POEs. They track data on the numbers of days canine officers worked, searches conducted, and fines and arrests that result from canine searches. In addition, some land POEs track performance data on local efforts to detect illegal activity. 
For example, officials at one POE we visited track data on the numbers and types of seizures, arrests, and immigration enforcement actions that occur at the port. <4.1.3. Metrics Required by National Defense Authorization Act for Fiscal Year 2017> In 2018, DHS began reporting additional metrics to measure the effectiveness of border security at land POEs in response to the National Defense Authorization Act for Fiscal Year 2017 (NDAA). The NDAA requires DHS to produce an annual report for appropriate congressional committees, the Comptroller General, and certain other entities. This report is to include certain metrics to measure the effectiveness of border security between POEs, at POEs, in the maritime environment, and with respect to aviation assets and other air and marine operations in the land domain. DHS submitted the fiscal year 2017 Border Security Metrics Report in response to the NDAA requirement in May 2018. Nine of the metrics in DHS s fiscal year 2017 report cover CBP s efforts to detect illegal activity at land POEs, although many of these measures group land POE data with other types of ports. DHS reported data for 7 of these 9 metrics. In some instances, DHS reported that it did not have the specific data needed for a required metric and provided other available data instead. DHS reported data in response to the following required metrics related to land ports of entry in the fiscal year 2017 Border Security Metrics Report: total inadmissible travelers at ports of entry (DHS does not have a methodology to estimate total inadmissible travelers, and therefore presented data on known inadmissible travelers), refusal rate at ports of entry, illicit drugs seized at ports of entry, port of entry illicit drug seizure rate, major infractions at ports of entry (DHS does not have a methodology to estimate all major infractions, and therefore included data on known passenger infractions), cocaine seizures effectiveness rate at land ports of entry, and secondary examination rate. CBP did not leverage existing data from the COMPEX program to estimate all major infractions in the fiscal year 2017 Border Security Metrics Report, but began reporting these data in the fiscal year 2018 report. The NDAA requires DHS to report the number of infractions related to travelers and cargo committed by major violators who are interdicted by OFO at ports of entry and the estimated number of such infractions committed by major violators who are not so interdicted. In the fiscal year 2017 DHS Border Security Metrics Report, DHS reported the number of known major infractions at ports of entry. DHS also reported that they did not have a methodology to estimate the number of infractions among those who are not interdicted. However, CBP estimates the number of undetected major infractions through the COMPEX program. CBP officials stated there was likely a miscommunication within CBP that led to the DHS Office of Immigration Statistics the DHS office that compiled the Border Security Metrics Report not using COMPEX data to report the estimated number of major infractions in the 2017 Border Security Metrics Report. In addition, the DHS Office of Immigration Statistics was not aware that CBP s COMPEX was applicable for purposes of reporting this metric. As a result of our review, DHS included an estimate of the number of major infractions not interdicted by CBP using data from the COMPEX program in the fiscal year 2018 Border Security Metrics Report. <4.2. 
CBP Performance Measures Generally Reflect Key Attributes of Effective Measures but CBP Does Not Set an Ambitious Target for One Measure> CBP organizational and internal performance measures for detecting illegal activity at land POEs generally reflect key attributes of effective performance measures that we previously identified. Based on our analysis of CBP s organizational and internal performance measures, these measures generally reflect the key attributes listed in table 6. For example, CBP clearly defines its externally-reported organizational measures and presents baselines and trends in its Organizational Performance Measures Overview. In addition, CBP s Organizational Performance Measures Overview provides linkage between its externally-reported organizational measures and DHS mission. CBP performance measures also have limited overlap with each other presenting new information beyond what other measures provide. Our analysis of CBP s measures found that they focus on the commercial and passenger-owned vehicle environments and currently provide limited coverage of the pedestrian traveler environment. According to CBP officials, the agency is in the process of expanding the two COMPEX measures to include pedestrian travelers at land POEs, which would provide greater coverage of CBP s core program activities for detecting illegal activity at land POEs. According to CBP officials, CBP began collecting COMPEX data for all pedestrian POEs in 2015. CBP officials stated they are in the process of reviewing the collected data and are working to refine the methodology and operational issues that may impact the reliability of the results. After CBP resolves these data issues, CBP will begin reporting the results of COMPEX audits in the pedestrian environment, according to CBP officials. Our analysis of CBP s measures also found that CBP generally sets ambitious but realistic targets for its organizational and internal performance measures. However, CBP s target for the land border interception rate is lower than the actual reported rate for fiscal years 2015 through 2018. We previously identified critical success factors for goal-setting and performance measurement efforts. Creating ambitious but realistic and measurable stretch goals based on current performance levels, among other things, supports the organization in achieving performance improvements. In addition, the Office of Management and Budget Circular A-11 states that agencies are expected to set ambitious goals to push them to achieve significant performance improvements beyond current levels. OFO officials stated they set the target for the land border interception rate following methodological changes OFO implemented in the COMPEX program in 2015. However since that time, OFO officials in the Strategic Transformation Office the office that reviews and provides input into targets for CBP s organizational performance measures stated they have not reviewed this target because it is an internal measure and they do not review these as they would for the externally-reported organizational measures. Nevertheless, OFO officials stated they use this measure internally for performance management and to report results to OFO management. Because OFO sets a target for the interception rate and uses this measure internally, a more ambitious target for the measure would better encourage CBP to review its performance of inspection activities that impact the measure and challenge them to identify ways of improving performance. <5. 
Conclusions> Inspecting travelers and cargo seeking entry to the United States through land POEs is critical to preventing terrorists and other inadmissible persons, as well as nuclear materials, narcotics, and other contraband, from entering the country. OFO has implemented processes and deployed technology to screen and examine travelers and cargo at POEs; however, by reviewing and updating its inspection policies in accordance with its own established time frames, CBP could better ensure that officers have guidance needed to consistently and properly inspect passengers, pedestrians, and commercial vehicles. Further, while CBP has taken steps to monitor compliance with inspection policies through the SIP and covert operational tests, it could more fully analyze the results. By identifying and addressing reoccurring SIP deficiencies at individual land POEs and implementing a policy to conduct periodic comprehensive analyses of covert test findings, CBP could be better positioned to enhance inspections and address vulnerabilities. Lastly, CBP has established various measures to assess the effectiveness of its inspections; however, establishing an ambitious and realistic target for its major violations interception rate could encourage additional improvements in performance. <6. Recommendations for Executive Action> We are making the following four recommendations to CBP: The Commissioner of CBP should review and update policies related to land port of entry inspections in accordance with OFO guidance. (Recommendation 1) The Commissioner of CBP should analyze the results of the Self- Inspection Program over time and at a level necessary to identify and address potentially reoccurring inspection deficiencies at individual ports of entry. (Recommendation 2) The Commissioner of CBP should implement a policy to conduct periodic comprehensive analyses of covert test findings. (Recommendation 3) The Commissioner of CBP should develop a new target for the land border interception rate for passengers in privately-owned vehicles with major violations that sets an ambitious and realistic goal based on past performance. (Recommendation 4) <7. Agency Comments and Our Evaluation> We provided a draft of this report to DHS for its review and comment. DHS provided comments, which are reproduced in appendix I. In its comments, DHS concurred with the four recommendations. DHS also provided technical comments, which we incorporated as appropriate. With regard to the first recommendation that CBP update policies related to land POE inspections in accordance with OFO guidance, DHS stated that OFO has initiated a process to modernize handbooks, policy memoranda, and directives. With regard to the second recommendation that CBP analyze SIP results over time and at a level necessary to identify and address potentially reoccurring deficiencies at individual POEs, DHS stated that OFO plans to begin training on how to conduct this analysis so it may be conducted for 2021 SIP results. With regard to the third recommendation that CBP implement a policy to conduct periodic comprehensive analyses of covert test findings, DHS stated that CBP is in the process of writing a policy that will document procedures for comprehensive reporting, including periodic reviews of corrective actions taken to mitigate vulnerabilities. 
With regard to the fourth recommendation that CBP develop a new target for the land border interception rate, DHS stated that OFO will set a new target for fiscal year 2020 using data from the previous three fiscal years. If fully implemented, these actions will meet the intent of our recommendations. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.
Appendix I: Comments from the Department of Homeland Security
Appendix II: GAO Contact and Staff Acknowledgments
<8. GAO Contact>
<9. Staff Acknowledgments> In addition to the contact named above, Kirk Kiester (Assistant Director), Heather May (Analyst in Charge), Carl Barden, Michele Fejfar, Eric Hauswirth, Susan Hsu, Richard Hung, Jeff Love, Mara McMillen, Sasan J. Jon Najmi, and Jonathan Tumin made key contributions to this report.
Why GAO Did This Study
CBP, within the Department of Homeland Security (DHS), is the lead federal agency charged with a dual mission of facilitating the flow of legitimate travel and trade at the nation's borders while keeping terrorists and their weapons, criminals and their contraband, and inadmissible aliens out of the country. GAO was asked to review CBP's process for inspecting passenger vehicles, pedestrians, and commercial vehicles at land POEs to secure the border. This report examines to what extent CBP (1) has processes and policies for inspections, (2) monitors inspection activities, and (3) has measures to assess its efforts to detect illegal activity of passengers, pedestrians, and commercial vehicles at land POEs. To address these questions, GAO analyzed CBP documents and data related to inbound inspections; interviewed officials; and observed operations at a non-generalizable sample of seven land POEs, selected to reflect a range of traffic volumes and geographic locations, among other things. This is a public version of a sensitive report that GAO issued in June 2019. Information that DHS deemed sensitive has been omitted.
What GAO Found
U.S. Customs and Border Protection (CBP) has processes for inspecting passenger vehicles, pedestrians, and commercial vehicles at U.S. land ports of entry (POE). These processes include reviewing travel documents, screening against law enforcement databases, and using canines and X-ray equipment (see figure below). However, because CBP has not updated many of its policies—in a few cases for almost 20 years—they do not always reflect changes in technology or processes, such as those for conducting searches and handling fentanyl. By reviewing and updating policies, CBP could help ensure officers have guidance needed to consistently and properly perform inspections.
CBP has various mechanisms at the port, field office, and national levels to monitor inspection activities at land POEs, but opportunities exist to enhance analysis of the results from its national level Self-Inspection Program (SIP) and covert operational testing. The SIP is an annual self-assessment that POEs are to conduct to determine compliance with CBP policies. CBP analyzes the results of the SIP annually to identify systemic compliance issues across CBP that year; however, it does not analyze noncompliance at individual POEs over time. By analyzing these data, CBP could better identify and address deficiencies at individual POEs. In addition, CBP has produced three comprehensive assessments, which analyzed aggregated results for certain types of covert tests, such as fraudulent document tests, conducted at land POEs in fiscal years 2013, 2014, and 2018. However, CBP has not done so for other types of tests, such as canine contraband detection tests, conducted from fiscal years 2013 through 2018. By implementing a policy for periodically conducting such analyses, CBP could identify vulnerabilities, trends, and best practices occurring more broadly.
CBP uses various sets of measures to assess its efforts to detect illegal activity at land POEs. CBP performance measures generally reflect the key attributes of effective measures, but CBP does not set an ambitious and realistic target for one measure. CBP's target for the land border interception rate—the estimated percentage of major violations in privately-owned vehicles that CBP intercepts out of the projected total number of major violations—is lower than the actual reported rate for fiscal years 2015 through 2018. A more ambitious target for the interception rate would better encourage CBP to review past performance of inspection activities that impact the measure and challenge CBP to identify ways to improve performance .
What GAO Recommends
GAO recommends that CBP: (1) review and update policies related to land POE inspections in accordance with CBP guidance; (2) analyze the SIP results to identify and address reoccurring inspection deficiencies at individual POEs; (3) implement a policy to conduct periodic comprehensive analyses of covert test findings; and (4) develop a more ambitious target for the land border interception rate measure. DHS concurred.
gao_GAO-19-376T
<1. Background> <1.1. U.S.-CNMI Relations> The United States took control of the Northern Mariana Islands from Japan during the latter part of World War II. After the war, the U.S. Congress approved a trusteeship agreement making the United States responsible to the United Nations for the administration of the islands. In 1976, the District of the Mariana Islands entered into a covenant with the United States establishing the island territory's status as a self-governing commonwealth in political union with the United States. The covenant granted the CNMI the right of self-governance over internal affairs and granted the United States complete responsibility and authority for matters relating to foreign affairs and defense affecting the CNMI. The covenant also preserved the CNMI's exemption from certain federal laws that had previously been inapplicable to the Trust Territory of the Pacific Islands, including certain federal minimum wage provisions and immigration laws, with certain limited exceptions. <1.2. Application of Federal Immigration Law to the CNMI> In 2008, the CNRA amended the joint resolution approving the U.S.-CNMI covenant to generally apply federal immigration law, including the INA, to the CNMI, with a transition period for foreign workers and investors. In addition, the INA provides DHS with discretionary authority to grant parole to certain noncitizens, on a case-by-case basis, allowing them to be temporarily present in the United States, including the CNMI. <1.2.1. Foreign Worker Provisions> To provide for an orderly transition from the CNMI immigration system to the U.S. federal immigration system under the immigration laws of the United States, DHS, through USCIS, established the CNMI-Only Transitional Worker program in 2011. Through the program, employers petition for nonimmigrant CW-1 permits that allow foreign workers who meet certain requirements to work temporarily in the CNMI. The CNRA limits the number of permits DHS may issue annually and reduces that number each year until the end of the transition period. Since 2008, Congress has amended the CNRA several times, with provisions that affected the length of the transition period, the number of CW-1 permits allocated, and the distribution of permits (see table 1). Figure 1 shows the past numerical limits on CW-1 permits established by DHS and the current and future numerical limits for permits specified in the Northern Mariana Islands U.S. Workforce Act of 2018, Pub. L. No. 115-218. The limits shown are the maximum number of permits available for each fiscal year through the end of the transition period and may not reflect the number of permits for which employers would petition and that DHS would approve. In addition, the INA provides authorization for several types of visas for nonimmigrant workers and their families (for example, H-2B visas for temporary nonagricultural workers) that became applicable to the CNMI with the passage of the CNRA. The CNRA allows CNMI employers to bring temporary workers to the CNMI under the H-2B program without counting against the numerical restriction for H-2B visas. <1.2.2. Investor Provisions> The CNRA and its implementing regulations established E-2 CNMI Investor (E-2C) status, a classification for certain foreign investors who previously had been lawfully admitted to the CNMI under the territory's immigration system and who met certain eligibility requirements.
Such investors could petition for E-2C status prior to January 18, 2013, according to USCIS. Eligibility criteria include, among others, providing evidence of maintaining financial investments in the CNMI of at least $50,000. DHS may grant E-2C status for up to 2 years, and such status can be renewed. <1.2.3. Parole Provisions> Under the INA, DHS has discretionary parole authority to allow certain noncitizens, on a case-by-case basis, to be temporarily present in the United States. DHS has used this authority to grant parole to individuals who may be inadmissible or otherwise ineligible for admission to allow them to remain in the CNMI, according to DHS. In 2017, the President issued Executive Order 13767, calling for, among other things, the Secretary of Homeland Security to take appropriate action to ensure that parole authority is exercised only on a case-by-case basis in accordance with the plain language of the statute and, in all circumstances, only when an individual demonstrates urgent humanitarian reasons or a significant public benefit derived from such parole. <1.3. Proposed Legislative Changes Affecting the CNRA> Proposed bill H.R. 560 includes several provisions, among others, that would provide CNMI resident status to eligible individuals. To be eligible for CNMI resident status under H.R. 560, an individual must have been lawfully present in the CNMI under U.S. immigration laws on the date of enactment or on December 31, 2018; be admissible as an immigrant to the United States under the INA, although no immigrant visa is required; have resided continuously and lawfully in the CNMI from November 28, 2009, through the date of enactment; and not be a citizen of the Federated States of Micronesia, Republic of the Marshall Islands, or Republic of Palau. Individuals who meet each of these four criteria would be eligible to apply for CNMI resident status if they fall into one of the categories shown in table 2. <2. DHS Implementation of CNRA Foreign Worker and Investor Provisions> <2.1. Foreign Workers> <2.1.1. CW-1 Permits> As figure 2 shows, the number of CW-1 permits approved by USCIS remained well under the annual numerical limits established by DHS for fiscal years 2012 through 2015 and exceeded or neared the annual limits for fiscal years 2016 and 2017. According to USCIS data, most individuals with approved CW-1 permits for fiscal years 2015 through 2018 were born in the Philippines or China. In addition, as table 3 shows, four times more CW-1 permits were issued to Chinese nationals for fiscal years 2016 and 2017 than for fiscal year 2015. As we reported in 2017, firms involved in building a new casino in Saipan have primarily employed Chinese workers. About one-third of fiscal year 2018 CW-1 permit holders had maintained continuous employment in the CNMI since 2015 and could be eligible for CNMI resident status under H.R. 560, if they had been admitted every year under CW-1 status and were otherwise eligible. USCIS CW-1 permit data for fiscal years 2015 through 2018 show that, of the 8,995 foreign workers with CW-1 permits approved by USCIS for fiscal year 2018, 2,875 workers (about 32 percent) had maintained continuous employment in the CNMI since fiscal year 2015. (Of this group, 2,287 80 percent were born in the Philippines.) Under H.R. 
560, a foreign national who meets additional eligibility requirements, including having resided continuously and lawfully in the CNMI from November 28, 2009, through the date of enactment, may be admitted to the CNMI under CNMI resident status if that individual was admitted to the CNMI as a CW-1 worker during fiscal year 2015 and during every subsequent fiscal year beginning before July 24, 2018. As a result, according to our analysis of USCIS data, 2,875 workers could be eligible under H.R. 560 to apply for CNMI resident status if they were admitted as CW-1 workers every fiscal year until 2018 and met all other eligibility conditions. Table 4 shows the numbers of foreign workers who received CW-1 permits for fiscal year 2018 and had maintained continuous employment in the CNMI since fiscal years 2012 through 2017. USCIS data show a reduction from fiscal year 2017 to fiscal year 2018 in the number of CW-1 permit holders and a significant increase in the number of H-2B beneficiaries. While the number of approved CW-1 permit holders declined from 12,889 in fiscal year 2017 to 8,995 in fiscal year 2018, the number of H-2B beneficiaries for those years increased from 0 to 3,058. In addition, our analysis of USCIS data found that the number of CW-1 permit holders for the construction trade declined from 2,981 to 545 by 82 percent from fiscal year 2017 to fiscal year 2018. Meanwhile, the number of H-2B beneficiaries for the construction trade in the CNMI increased from 0 for fiscal year 2017 to 1,801 for fiscal year 2018. In August 2017, Congress amended the CNRA to, among other things, restrict CW-1 permits for workers in construction and extraction occupations (as defined in the U.S. Department of Labor s Standard Occupational Classification system) by allowing only extensions of CW-1 permits first issued before October 1, 2015. The CNRA allows CNMI employers to petition for H-2 visas to bring temporary workers, such as construction workers, to the CNMI without counting against the numerical restriction for such visas. According to a senior USCIS official, the new casino employer in Saipan began petitioning in 2018 for foreign workers under the H-2B program instead of petitioning for CW-1 permits for its construction workers. The official noted that Pub. L. No. 115-53 s restriction on the use of CW-1 permits for construction trade workers may account for the decrease in petitions for CW-1 permit holders and increase in petitions for H-2B beneficiaries from fiscal year 2017 to fiscal year 2018. Table 5 shows the numbers of approved CW-1 permit holders and H-2B beneficiaries for the construction trade in fiscal years 2016 through 2018. In October 2016, DHS announced the list of countries whose citizens were eligible to participate in the H-2 program from January 18, 2017, to January 18, 2018. Asian countries on the list included the Philippines, South Korea, Taiwan, and Thailand, among others, but did not include China. In January 2019, because of concerns about overstays and human trafficking, DHS removed the Philippines from the list of countries eligible for the H-2B program. CNMI government and Chamber of Commerce officials have voiced concerns that the removal of the Philippines from the list will make it difficult to hire construction workers in the aftermath of two recent typhoons. <2.2. Investors> USCIS began approving 2-year E-2C status for eligible foreign long-term investors and their dependents in the territory in fiscal year 2011. 
According to USCIS, as of February 5, 2019, 56 investors who had previously resided in the CNMI as investors under CNMI immigration law were residing in the CNMI with E-2C status. Under H.R. 560, foreign nationals who otherwise meet additional eligibility requirements may be granted CNMI resident status if they resided in the CNMI as investors under CNMI immigration law and are presently resident under E-2C status. As a result, under H.R. 560, these 56 investors could be eligible to apply for CNMI resident status if they met all other eligibility conditions. <3. DHS Implementation of Parole Authority under the INA> According to USCIS testimony, after the CNRA was passed in 2008, USCIS implemented DHS s discretionary parole authority by making parole available to groups of individuals residing in the CNMI who would not be covered by INA classifications and for whom the classifications established in the CNRA did not appear to be appropriate. These individuals previously had immigration status under CNMI immigration law that allowed them to potentially remain in the CNMI indefinitely, according to USCIS. Without USCIS action, these individuals would have been deemed unlawfully present in the United States, according to USCIS documents. To provide such individuals with a means to remain temporarily in the CNMI during the transition period, USCIS announced several discretionary parole policies to cover the following groups, among others, which were potentially eligible for parole: CNMI permanent residents, immediate relatives of CNMI permanent residents, spouses and children of deceased CNMI permanent residents, and immediate relatives of citizens of the freely associated states (November 2009) Certain in-home foreign national caregivers of CNMI residents (October 2011) Immediate relatives of U.S. citizens, especially parents of U.S. citizen children, and stateless individuals in the CNMI (November 2011) In response to Executive Order 13767, on December 27, 2018, USCIS announced the termination of parole for immediate relatives of U.S. citizens and certain stateless individuals; CNMI permanent residents, immediate relatives of CNMI permanent residents, and immediate relatives of citizens of the freely associated states; and certain in-home foreign worker caregivers of CNMI residents. To provide an opportunity for individuals in these categories to prepare to depart or seek a different lawful status, USCIS announced that the affected individuals were allowed to remain in the CNMI with a transitional parole status for up to 180 days, not to extend beyond June 29, 2019. According to a senior USCIS official, from December 2, 2016, through December 14, 2018, USCIS had granted parole until December 31, 2018, to 1,039 individuals in the terminated parole categories. Under H.R. 560, some of these individuals could be eligible to apply for CNMI resident status if they met all other eligibility conditions. Vice Chairman Sablan, Republican Leader Gonzalez-Colon, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions you may have at this time. <4. GAO Contact and Staff Acknowledgements> If you or your staff have any questions about this testimony, please contact David Gootnick, Director, International Affairs and Trade, at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
GAO staff who made key contributions to this testimony are Emil Friberg (Assistant Director), Julia Ann Roberts (Analyst in Charge), Sada Aksartova, Andrew Kurtzman, Reid Lowe, and Alexander Welsh. Technical support was provided by Kathryn Bernet, Justin Fisher, Christopher Keblitis, Mary Moutsos, and Moon Parks. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study
The 1976 covenant defining the political relationship between the CNMI and the United States exempted the CNMI—a U.S. territory north of Guam—from certain federal immigration laws. However, the covenant preserved the right of the U.S. government to apply federal law in these exempted areas. The CNRA, which amended a joint resolution approving the covenant, generally established federal control of CNMI immigration beginning in 2009.
In 2009, DHS began implementing, among other things, a foreign worker permit program to address CNRA provisions specific to the CNMI. DHS also began using its discretionary authority under the INA to parole certain groups of individuals into the CNMI (i.e., allow them to be temporarily present). Congress has amended the CNRA several times with provisions that affected the total number of permits allocated and the distribution of permits. Proposed bill H.R. 560 would further modify the CNRA by establishing a CNMI resident status for certain individuals. Among its other provisions, the CNRA allows CNMI employers to petition for H-2 visas for temporary workers without counting the visas against a numerical restriction.
Drawing from ongoing work, this testimony discusses DHS's implementation of (1) selected CNRA provisions regarding foreign workers, among others, in the CNMI and (2) its discretionary parole authority under the INA as applied in the CNMI. GAO updated information from May 2017 ( GAO-17-437 ) and February 2018 ( GAO-18-373T ), reviewed relevant legal documents, and analyzed DHS data.
What GAO Found
Under the Consolidated Natural Resources Act of 2008 (CNRA), the Department of Homeland Security (DHS) established the nonimmigrant Commonwealth of the Northern Mariana Islands (CNMI)–Only Transitional Worker program in 2011. Through the program, eligible foreign nationals can obtain CNMI-Only Transitional Worker (CW-1) permits to work temporarily in the CNMI. Under H.R. 560, foreign nationals who meet additional eligibility requirements could be eligible to receive CNMI resident status if they were admitted annually to the CNMI as a CW-1 worker in fiscal years 2015 through 2018. GAO's preliminary analysis of DHS data found that 2,875 (about 32 percent) of 8,995 workers with CW-1 permits for fiscal year 2018 had maintained continuous employment each fiscal year since 2015 (i.e., received a CW-1 permit annually). While DHS data show the number of approved CW-1 permit holders declined from fiscal year 2017 to fiscal year 2018 (see figure), the number of H-2B beneficiaries—who often fill construction jobs—increased from 0 to 3,058. In January 2019, DHS removed the Philippines from the list of countries eligible for the H-2B program.
In 2009, DHS began granting discretionary parole that authorized temporary stays for certain CNMI residents, such as spouses and children of U.S. citizens. These individuals may have been inadmissible or otherwise ineligible for admission to the United States, according to DHS. However, in December 2018, DHS announced that it was terminating parole for certain categories of residents in response to Executive Order 13767, issued in 2017. The order called on DHS to take appropriate action to ensure that parole authority is exercised only on a case-by-case basis, among other things. According to DHS, 1,039 individuals in the terminated categories had been granted parole until December 31, 2018. Under H.R. 560, some of these individuals could be eligible to apply for CNMI resident status.
gao_GAO-20-413T
<1. Selected Agencies Collect Some Information from Commenters and Accept Anonymous Comments through Regulations.gov and Agency-Specific Websites> Consistent with the discretion afforded by the APA, Regulations.gov and agency-specific comment websites use required and optional fields on comment forms to collect some identity information from commenters. In addition to the text of the comment, agencies may choose to collect identity information by requiring commenters to fill in other fields, such as name, address, and email address, before they are able to submit a comment. Regardless of the fields required by the comment form, the selected agencies all accept anonymous comments in practice. Specifically, in the comment forms on Regulations.gov and agency-specific comment websites, a commenter can submit under a fictitious name, such as "Anonymous Anonymous," enter a single letter in each required field, or provide a fabricated address. In each of these scenarios, as long as a character or characters are entered into the required fields, the comment will be accepted. Further, because the APA does not require agencies to authenticate submitted identity information, neither Regulations.gov nor the agency-specific comment websites contain mechanisms to check the validity of identity information that commenters submit through comment forms. Regulations.gov and agency-specific comment websites also collect some information about public users' interaction with their websites through application event logs and proxy server logs, though the APA does not require agencies to collect or verify it as part of the rulemaking process. This information, which can include a public user's Internet Protocol (IP) address, browser type and operating system, and the time and date of webpage visits, is collected separately from the comment submission process as part of routine information technology management for system security and performance, and cannot be reliably connected to specific comments. <2. Most Selected Agencies Have Some Internal Guidance Related to Commenter Identity> Seven of the 10 selected agencies have documented some internal guidance associated with the identity of commenters during the three phases of the public comment process: intake, analysis, and response to comments. However, the focus and substance of this guidance varies by agency and phase of the comment process. As shown in Table 1, for selected agencies that have guidance associated with the identity of commenters, it most frequently relates to the comment intake or response to comment phases of the public comment process. The guidance for these phases addresses activities such as managing duplicate comments (those with identical or near-identical comment text but varied identity information) or referring to commenters in a final rule. Agencies are not required by the APA to develop internal guidance associated with the public comment process generally, or identity information specifically. <3. Selected Agencies' Treatment of Identity Information Collected during the Public Comment Process Varies> Within the discretion afforded by the APA, the 10 selected agencies' treatment of identity information varies during the three phases of the public comment process.
Selected agencies differ in how they treat identity information during the comment intake phase, particularly in terms of how they post duplicate comments, which can lead to identity information being inconsistently presented to public users of comment systems. Generally, officials told us that their agencies either (1) maintain all comments within the comment system, or (2) maintain some duplicate comment records outside of the comment system, for instance, in email file archives. When an agency chooses to post a sample of duplicate comments, the identity information and unique comment contents for all duplicate comments may not be present on the public website. For example, for all duplicate comments received, the Securities and Exchange Commission (SEC) posts a single example for each set of duplicate comments and indicates the total number of comments received. As a result, the identity information and any unique comment content beyond the first example are not present on the public website. (See fig. 1.) Selected agencies' treatment of identity information during the comment analysis phase also varies. Specifically, program offices with the responsibility for analyzing comments place varied importance on identity information during the analysis phase. Finally, all agencies draft a response to comments with their final rule, but the extent to which the agencies identify commenters or commenter types in their response also varies across the selected agencies. <4. Selected Agencies' Practices Associated with Posting Identity Information Are Not Clearly Communicated to Public Users of Comment Websites> Our analysis of Regulations.gov and agency-specific comment websites shows that the varied comment posting practices of the 10 selected agencies are not always documented or clearly communicated to public users of the websites. The E-Government Act of 2002 requires that all public comments and other materials associated with a given rulemaking be made publicly available online to the extent practicable. In addition to the requirements of the E-Government Act, key practices for transparently reporting open government data state that federal government websites, like those used to facilitate the public comment process, should fully describe the data that are made available to the public, including by disclosing data sources and limitations. We found that the selected agencies we reviewed do not effectively communicate the limitations and inconsistencies in how they post identity information associated with public comments. As a result, public users of the comment websites lack information related to data availability and limitations that could affect their ability to use and make informed decisions about the comment data and effectively participate in the rulemaking process themselves. <4.1. Regulations.gov and Participating Agency Websites> Public users of Regulations.gov seeking to submit a comment are provided with a blanket disclosure statement related to how their identity information may be disclosed, and are generally directed to individual agency websites for additional detail about submitting comments. While additional information is provided in the Privacy Notice, User Notice, and Privacy Impact Assessment for Regulations.gov, public users are not provided any further detail on Regulations.gov regarding what information, including identity information, they should expect to find in the comment data.
Additionally, there is not enough information to help public users determine whether all of the individual comments and associated identity information are posted. Available resources on Regulations.gov direct public users to participating agencies' websites for additional information about agency-specific review and posting policies. Seven of the eight participating agencies' websites direct public users back to Regulations.gov and the Federal Register, either on webpages that are about the public comment process in general, or on pages containing information about specific NPRMs. Three of these participating agencies, the Environmental Protection Agency (EPA), the Fish and Wildlife Service (FWS), and the Food and Drug Administration (FDA), do provide public users with information beyond directing them back to Regulations.gov or the Federal Register, but only FDA provides users with details about posting practices that are not also made available on Regulations.gov. The eighth participating agency, the Employee Benefits Security Administration (EBSA), does not direct public users back to Regulations.gov, and instead recreates all rulemaking materials for each NPRM on its own website, including individual links to each submitted comment. However, these links go directly to comment files, and do not link to Regulations.gov. While EBSA follows departmental guidance associated with posting duplicate comments, which allows some discretion in posting practices, the agency does not have a policy for how comments are posted to Regulations.gov or its own website. Further, in the examples we reviewed, the content of the NPRM-specific pages on EBSA's website does not always match what is posted to Regulations.gov. Because participating agencies are not required to adhere to standardized posting practices, Regulations.gov directs public users to participating agency websites for additional information about posting practices and potential data limitations. However, these websites do not describe the limitations associated with the identity information contained in publicly posted comments. As allowed under the APA, all of the participating agencies in our review vary in the way in which they post identity information associated with comments, particularly duplicate comments. However, the lack of accompanying disclosures may lead users to assume, for example, that only one entity has weighed in on an issue when, actually, that comment represents 500 comments. Without better information about the posting process, the inconsistent way in which duplicate comments are presented to public users of Regulations.gov limits those users' ability to explore and use the data and could lead them to draw inaccurate conclusions about the public comments that were submitted and how agencies considered them during the rulemaking process. <4.2. Agency-Specific Comment Sites> Both nonparticipating agencies use comment systems other than Regulations.gov and follow standardized posting processes for public comments submitted to their respective comment systems, but SEC has not clearly communicated these practices to the public. Although it appears to users of the SEC website that the agency follows a consistent process for posting duplicate comments, at the time of our June 2019 report, this practice had not been documented or communicated to public users of its website.
In contrast, FCC identifies its policies for posting comments and their associated identity information in a number of places on the FCC.gov website, and on its Electronic Comment Filing System (ECFS) web page within the general website. Regarding comments submitted to rulemaking proceedings through ECFS, public users are informed that all information submitted with comments, including identity information, will be made public. Our review of ECFS comment data did not identify discrepancies with this practice. Although the public comment process allows interested parties to state their views about prospective rules, the lack of communication with the public about the way in which agencies treat identity information during the posting process, particularly for duplicate comments, may inhibit users meaningful participation in the rulemaking process. While the APA does not include requirements for commenters to provide identity information, or for agency officials to include commenters identity as part of their consideration of comments, key practices for transparently reporting open government data state that federal government websites like those used to facilitate the public comment process should fully describe the data that are made available to the public, including by disclosing data sources and limitations. <5. Selected Agencies Are in the Process of Implementing GAO Recommendations> As shown in Table 2, we recommended in our June 2019 report that five of the selected agencies establish a policy for posting comments, and that eight selected agencies take action to more clearly communicate their policies for posting comments, particularly with regard to identity information and duplicate comments. These agencies generally agreed with our recommendations and identified actions they planned to take in response, such as developing policies for posting duplicate comments and communicating those in various ways to public users. Since issuing our June 2019 report, all of the agencies to which we made recommendations have provided us with additional updates. Specifically, SEC completed actions that are responsive to the recommendation we made to it. In this regard, in September 2019, SEC issued a memorandum that reflects SEC s internal policies for posting duplicate comments and associated identity information. SEC has also communicated these policies to public users on the SEC.gov website by adding a disclaimer on the main comment posting page that describes how the agency posts comments. These measures will help public users better determine whether and how they can use the data associated with public comments. The other seven agencies have provided updates, but have not yet implemented the recommendations. In December 2019 and January 2020, the Bureau of Land Management (BLM), Consumer Financial Protection Bureau (CFPB), EPA, and FWS notified us that they are in the process of developing or updating policies for posting public comments as well as statements for their websites to communicate these policies to the public. Similarly, in January 2020, the Department of Health and Human Services (HHS) stated that the Centers for Medicare and Medicaid Services (CMS) would update its comment posting policy and communicate it on the CMS website. However, the excerpt of the policy language provided does not include information about how the agency posts duplicate comments. 
Further, CMS did not provide us with the finalized policy, and our review of the website does not indicate any changes have been made. HHS officials stated they would provide additional follow-up actions by July 2020. In September 2019, EBSA also stated that it will develop a written policy regarding posting of comments, including duplicate comments, which will be available on its website. However, the agency did not provide evidence that a formal evaluation of its current practice of replicating rulemaking dockets had been conducted, and did not identify plans to do so. The Wage and Hour Division (WHD) indicated that it will add text to each webpage for any rulemaking that invites public comments stating that any personal information included in the comments (including duplicates) will be posted to Regulations.gov without change. However, the preliminary text provided by officials in August 2019 does not explain WHD's policy of posting duplicate comments as a group under a single document ID, and therefore does not clearly communicate the agency's posting practices to the public. Chairman Green, Ranking Member Barr, and Members of the Subcommittee, this concludes my prepared remarks. I would be happy to answer any questions you may have at this time. <6. GAO Contact and Staff Acknowledgments> For further information regarding this testimony, please contact Seto J. Bagdoyan, (202) 512-6722 or bagdoyans@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are David Bruno (Assistant Director), Allison Gunn (Analyst in Charge), Elizabeth Kowalewski, and Roger Gildersleeve. Individuals who contributed to the report on which this testimony is based include Enyinnaya David Aja, Gretel Clarke, Lauren Kirkpatrick, James Murphy, Alexandria Palmer, Carl Ramirez, Shana Wallace, and April Yeaney.
Why GAO Did This Study
Federal agencies publish on average 3,700 proposed rules yearly and are generally required to provide interested persons (commenters) an opportunity to comment on these rules. In recent years, some high-profile rulemakings have received extremely large numbers of comments, raising questions about how agencies manage the identity information associated with comments. While the APA does not require the disclosure of identifying information from a commenter, agencies may choose to collect this information.
This testimony summarizes GAO's June 2019 report on public comment posting practices (GAO-19-483). In that report, GAO examined (1) the identity information collected by comment websites; (2) the guidance agencies have related to the identity of commenters; (3) how 10 selected agencies treat identity information; and (4) the extent to which the selected agencies clearly communicate their practices associated with identity information. The 10 agencies were selected on the basis of the volume of public comments they received on rulemakings. For this testimony, GAO obtained updates on the status of recommendations made to the selected agencies.
What GAO Found
The Administrative Procedure Act (APA) governs the process by which many federal agencies develop and issue regulations, which includes the public comment process (see figure below).
In June 2019, GAO found that Regulations.gov and agency-specific comment websites collect some identity information—such as name, email, or address—from commenters who choose to provide it during the public comment process. The APA does not require commenters to disclose identity information when submitting comments. In addition, agencies have no obligation under the APA to verify the identity of such parties during the rulemaking process, and all selected agencies accept anonymous comments in practice.
GAO found in the June 2019 report that seven of 10 selected agencies have some internal guidance associated with the identity of commenters, but the substance of this guidance varies. This reflects the differences in the way that the selected agencies handle commenter identity information internally.
GAO also found that the selected agencies' practices for posting public comments to comment websites vary considerably, particularly for duplicate comments (identical or near-identical comment text but varied identity information). For example, one agency posts a single example of duplicate comments and indicates the total number of comments received, but only the example is available to public users of Regulations.gov. In contrast, other agencies post all comments individually. As a result, identity information submitted with comments is inconsistently presented on public websites.
The APA allows agencies discretion in how they post comments, but GAO found that some of the selected agencies do not clearly communicate their practices for how comments and identity information are posted. GAO's key practices for transparently reporting government data state that federal government websites should disclose data sources and limitations to help public users make informed decisions about how to use the data. If not, public users of the comment websites could reach inaccurate conclusions about who submitted a particular comment, or how many individuals commented on an issue.
What GAO Recommends
In June 2019, GAO made recommendations to eight of the selected agencies regarding implementing and communicating public comment posting policies. The agencies generally agreed with the recommendations and identified actions they planned to take in response. Since the June 2019 report, one agency has implemented GAO's recommendation and seven agencies have identified additional planned actions.
<1. Background> Federal agencies are dependent on information technology (IT) systems and electronic data to carry out operations and to process, maintain, and report essential information. These systems are highly complex and dynamic, technologically diverse, and often geographically dispersed. However, the IT systems supporting federal agencies and our nation's critical infrastructures are at risk. Information and systems are subject to serious threats that can have adverse impacts on organizational operations and assets, individuals, other organizations, and the nation. These threats can include purposeful attacks, environmental disruptions, and human/machine errors, and may result in harm to the national and economic security interests of the United States. In recognition of the growing threat, we have designated information security as a government-wide high-risk area since 1997. In 2003, we expanded the information security high-risk area to include the protection of critical cyber infrastructure. We further expanded the information security high-risk area in 2015 to include protecting the privacy of personally identifiable information. Cybersecurity incidents continue to impact federal agencies, as well as entities across various critical infrastructure sectors. In fiscal year 2017, federal executive branch civilian agencies reported 35,277 incidents to the U.S. Computer Emergency Readiness Team. These incidents included web-based attacks, phishing, and the loss or theft of computing equipment. These incidents and others like them can pose a serious challenge to economic and national security and personal privacy. The following examples highlight the impact of such incidents: In January 2019, the Department of Justice (Justice) announced that it had indicted two Ukrainian men for their roles in a large-scale, international conspiracy to hack into the Securities and Exchange Commission's computer systems and profit by trading on critical information they stole. The indictment alleges that the two hacked into the Commission's Electronic Data Gathering, Analysis, and Retrieval system and stole thousands of files, including annual and quarterly earnings reports containing confidential, non-public financial information, which publicly traded companies are required to disclose to the Commission. In March 2018, a joint alert from DHS and the Federal Bureau of Investigation stated that Russian government actors had been targeting the systems of multiple U.S. government entities and critical infrastructure sectors since at least March 2016. These Russian government actors had affected multiple organizations in various sectors, including energy, nuclear, water, aviation, construction, and critical manufacturing. DHS and the Federal Bureau of Investigation characterized this activity as a multi-stage intrusion campaign by Russian government cyber actors who targeted small commercial facilities' networks, where they staged malware, conducted spear phishing, and gained remote access into energy sector networks. In June 2015, the Office of Personnel Management (OPM) reported that an intrusion into its systems had affected the personnel records of about 4.2 million current and former federal employees. Then, in July 2015, the agency reported that a separate, but related, incident had compromised its systems and the files related to background investigations for 21.5 million individuals.
In total, OPM estimated 22.1 million individuals had some form of personally identifiable information stolen, with 3.6 million being a victim of both breaches. The risks to IT systems supporting the federal government and the nation s critical infrastructure are increasing as security threats continue to evolve and become more sophisticated. These risks include insider threats from witting or unwitting employees, escalating and emerging threats from around the globe, steady advances in the sophistication of attack technology, and the emergence of new and more destructive attacks. Therefore, it is imperative for agency leaders and managers at all levels to manage the risks associated with the operation and use of information systems that support their missions and business functions. Cybersecurity risk management comprises a full range of activities undertaken to protect IT and data from unauthorized access and other cyber threats; maintain awareness of cyber threats; detect anomalies and incidents adversely affecting IT and data; and mitigate the impact of, respond to, and recover from incidents. Information sharing facilitates and supports all of these activities. <1.1. Federal Law and Policy Set Roles and Responsibilities for Protecting Federal Systems and Managing Cybersecurity Risk> Several federal laws, executive orders, and policies establish requirements for protecting federal systems and managing cybersecurity risks. Specifically, FISMA is intended to provide a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets, as well as the effective oversight of information security risks. The act requires each agency to develop, document, and implement an agency- wide information security program to provide risk-based protections for the information and information systems that support the operations and assets of the agency, including those provided or managed by another entity. FISMA also assigns government-wide responsibilities to key agencies: OMB is responsible for developing and overseeing implementation of policies, principles, standards, and guidelines on information security in federal agencies, except with regard to national security systems. DHS is responsible for certain operational aspects of agencies information security policies and practices, including assisting OMB in fulfilling its FISMA authorities, issuing binding operational directives, monitoring agencies security policies and practices, and assisting them with implementation. NIST is responsible for developing standards for categorizing information and information systems, security requirements for information and systems, and guidelines for detection and handling of security incidents. More recently, the administration has re-emphasized the importance of improving agencies cybersecurity risk management capabilities through the issuance of an executive order. Further, OMB has issued minimum requirements, standards, and guidance to ensure federal managers are effectively managing cybersecurity risks. OMB has also issued policies for enterprise risk management (ERM), which considers all key risks that agencies face and their potential impacts on the agency s mission. Cybersecurity risk is just one type of risk that agencies consider in their enterprise approach to risk management. 
Table 1 identifies the administration s May 2017 executive order and relevant OMB publications and guidance on cybersecurity risk management. In its responsibility for certain operational aspects of agencies implementation of cybersecurity practices, DHS is spearheading several initiatives to assist federal agencies in protecting their computer networks and electronic information. Examples of DHS s initiatives are described in table 2. <1.2. NIST Has Established a Framework for Federal Cybersecurity Risk Management Activities> Implementing effective cybersecurity requires any organization whether a private sector company; a non-profit entity; or an agency at the state, local, or federal level to identify, prioritize, and manage cyber risks across its enterprise. Risk management is a comprehensive process that requires organizations to (1) frame risk (i.e., establish the context for risk- based decisions), (2) assess risk, (3) respond to risk once determined, and (4) monitor risk on an ongoing basis using effective organizational communications and a feedback loop for continuous improvement in the risk-related activities of organizations. In accordance with its responsibilities under FISMA, as well as other laws and executive orders, NIST has developed a framework for managing risk to federal information and information assets. This framework calls for a multi-tiered approach to risk management, with activities at the information system (system), business/mission, and organization (e.g., agency) level. Cybersecurity risk management activities at the organization level provide the foundation for activities at the mission/business process and system levels, such as the selection and implementation of security controls and decisions about the operation of systems based on a determination of risk. Figure 1 illustrates an organization-wide approach to cybersecurity risk management. Guidance for federal agencies cybersecurity risk management processes is found in a suite of NIST special publications. Table 3 highlights key NIST cybersecurity risk management publications. <1.3. Federal Guidance Includes Key Steps for Establishing Cybersecurity Risk Management Programs> OMB and NIST guidance identify practices for establishing agency-wide cybersecurity risk management programs. Among other things, these activities are intended to facilitate better communication between senior leaders and executives and system owners and operators; align agency priorities with resource allocation and prioritization at the system level; and convey acceptable limits regarding the selection and implementation of controls within the established organizational risk tolerance. Practices that provide a foundation for an agency s cybersecurity risk management program are summarized in table 4. Establish the role of a cybersecurity risk executive: In order to ensure that cybersecurity risks are being addressed across the agency, NIST Special Publication 800-39 states that agencies should establish a cybersecurity risk executive. This can take the form of an individual or group that provides agency-wide oversight of cybersecurity risk activities and facilitates collaboration among stakeholders and consistent application of the cybersecurity risk management strategy. The cybersecurity risk executive should ensure that risk-related considerations for information systems are viewed from an agency-wide perspective regarding the strategic goals and objectives. 
The cybersecurity risk executive also should ensure that cybersecurity risk is managed consistently across the agency, reflects organizational risk tolerance, and is considered along with other types of risk to ensure mission/business success. Develop a cybersecurity risk management strategy: According to NIST Special Publication 800-39 and other guidance, agencies should develop a cybersecurity risk management strategy to provide a foundation for managing risk and delineate the boundaries for risk-based decisions. The strategy should describe the strategic-level decisions and considerations that senior leaders and executives are to use to manage security and privacy risks to agency operations, assets, individuals, other organizations, and the nation. The strategy should also guide and inform how security and privacy risks are framed, assessed, responded to, and monitored. The strategy should include (1) a statement of the agency s risk tolerance, (2) how it intends to assess risk (e.g., acceptable risk assessment methodologies), (3) acceptable risk response strategies (e.g., acceptance, mitigation, avoidance), and (4) how the agency intends to monitor risk over time. Document risk-based policies: NIST Special Publication 800-37 identifies foundational activities at the agency and information system levels that should be included in policies to help prepare agencies to manage security and privacy risks. These activities should be guided by risk-based decisions. Specific elements of such risk-based policies include (1) identifying and assigning individuals with key roles for executing the risk management framework; (2) requiring an agency-wide assessment of cyber risks; (3) identifying and documenting common security controls that can be inherited by multiple information systems; (4) developing an agency-wide strategy for monitoring control effectiveness; (5) requiring system-level risk assessments to be performed and regularly updated; (6) tailoring system security controls based on risk; (7) prioritizing remedial actions to correct vulnerabilities identified in plans of action and milestones (POA&M) based on risk; and (8) using a determination of risk to make decisions about system operation and use. Conduct an agency-wide cybersecurity risk assessment: According to NIST Special Publications 800-39 and 800-37, agencies should assess cybersecurity and privacy risks and update the results on an ongoing basis. Risk assessment at the agency level is based primarily on aggregated information from system-level risk assessment results, continuous monitoring, and any relevant strategic risk considerations. The assessment is intended to help the agency consider the totality of risk derived from the operation and use of its information systems and from information exchanges and connections with other internally and externally owned systems. Such assessments may identify systemic weaknesses or deficiencies discovered in multiple information systems and assess the overall risks that these present to operations, assets, and individuals. Establish coordination between cybersecurity and enterprise risk management: ERM, as a discipline, deals with identifying, assessing, and managing risks. OMB has stated that an effective enterprise risk management program should promote a common understanding for recognizing and describing potential risks that can impact an agency s mission and the delivery of services to the public. 
Such risks include strategic, market, cyber, legal, reputational, political, and a broad range of operational risks. Toward this end, OMB Circular A-123 directs agencies to implement a capability for enterprise risk management. Specifically, it encourages agencies to establish a risk management governance structure, such as a risk management council, which may be integrated with existing management structures; develop risk profiles that identify risks arising from mission and mission-support operations; and consider those risks as part of the annual strategic review process. Because cybersecurity is a key risk facing virtually every federal agency, it is important for coordination to exist between agencies ERM functions and their cybersecurity risk management programs, particularly the cybersecurity risk executive. NIST SP 800-39 states that effective risk management requires an agency s mission/business processes to explicitly account for information security risk when making operational decisions and that cybersecurity risk information should be shared with key stakeholders throughout the organization. According to NIST, the risk executive should serve as a common risk management resource for senior leaders, mission/business owners, and other organization officials and as a focal point for communicating and sharing information security risk-related information among key stakeholders. OMB has also raised concerns that agencies ERM programs do not effectively identify, assess, and prioritize actions to mitigate cybersecurity risks in the context of other enterprise risks. GAO has also emphasized the importance of sharing risk information with stakeholders as part of an effective risk management program. <2. Agencies Have Not Fully Established Elements of Their Cybersecurity Risk Management Programs> The 23 civilian CFO Act agencies varied in the extent to which they had established key elements of their cybersecurity risk management programs. Specifically, 22 of the 23 agencies established the role of cybersecurity risk executive, and most of the 23 agencies had established policies that include elements to ensure their activities are guided by risk- based decisions. However, fewer than half of the agencies developed an agency-wide cybersecurity risk management strategy or fully established coordination with their enterprise risk management function. Figure 2 summarizes the extent to which the agencies had established these elements as of April 2019. <2.1. Most Agencies Established the Role of Cybersecurity Risk Executive> Twenty-two of the 23 civilian CFO Act agencies established a cybersecurity risk executive to provide agency-wide oversight of cybersecurity risk activities. Agencies varied in assigning this responsibility to the chief information officer (CIO), chief information security officer (CISO), or another official or entity. For example: At the Department of Health and Human Services (HHS), the CIO serves as the risk executive for the department, and is responsible for executing the Risk Management Framework tasks outlined in NIST SP 800-37. The United States Agency for International Development (USAID) designated the CISO with responsibility for carrying out the risk executive functions for the agency. Among other things, the CISO is responsible for developing, implementing, and managing an agency- wide security authorization process and a threat awareness program. The Department of the Treasury (Treasury) assigned the function of risk executive to its department CIO Council. 
The council s responsibilities include ensuring the cybersecurity program is consistent with the provisions of NIST SP 800-39; providing guidance to and oversight of the organization s risk management program and developing the cybersecurity risk management strategy; communicating organization-wide threat, vulnerability, and risk-related information; and providing a strategic view for managing cyber risk throughout the organization. One agency, the General Services Administration (GSA), had not defined the role of its cybersecurity risk executive in its policy. Officials in GSA s Office of the CIO stated that they had not formally designated this role because the agency s risk executive responsibilities were shared among the CIO, CISO, authorizing officials, and other GSA officials for risk management. However, without clearly defining and documenting the responsibility for the risk executive function, the agency may lack consistent implementation and oversight of cybersecurity risk management activities and an effective agency-wide view for managing risk. Additional details on the 23 agencies cyber risk executive positions are provided in appendix II. <2.2. Most Agencies Did Not Develop an Agency-Wide Cybersecurity Risk Management Strategy to Guide Their Risk Decisions> Among the 23 civilian CFO Act agencies, seven had developed a cybersecurity risk management strategy that fully addressed the four elements called for in the NIST guidance. Specifically, each of the seven agencies (the Department of Commerce (Commerce), the Department of Labor (Labor), the Department of State (State), USAID, GSA, OPM, and the Social Security Administration (SSA)) had developed a strategy to guide how cybersecurity risk is to be framed, assessed, responded to, and monitored. For example, some of the strategies discussed risk tolerance in terms of thresholds based on essential mission functions and the processing of personally identifiable information or system impact levels, types of data processed, and accessibility of systems, among other factors. The strategies also included breakdowns of appropriate risk response strategies and how the agencies intended to assess and monitor risk. In addition, five of the 23 agencies (the Department of Education (Education), Environmental Protection Agency (EPA), National Science Foundation (NSF), the Department of Transportation (Transportation), and the Small Business Administration (SBA)) had partially developed cybersecurity risk management strategies, but their strategies did not address certain required elements. Specifically, while these agencies developed strategic documents, these documents did not include all of the required elements, such as a statement of risk tolerance or acceptable risk mitigation strategies. EPA officials stated that they intended to update their strategy documents to address how the agency intends to assess risk, while Education and NSF officials did not state whether they intended to update their strategy to include a statement of risk tolerance, among other missing elements. Transportation and SBA officials stated that they believed their existing strategy documents addressed all the elements; however, neither agency s strategy included an expression of departmental risk tolerance and risk mitigation strategies. Further, Transportation s strategy did not include a description of acceptable risk assessment methodologies. The remaining 11 agencies had not developed an agency-wide cybersecurity risk management strategy. 
These agencies offered a variety of reasons for not doing so. Seven agencies the Department of Agriculture (Agriculture), Department of Energy (Energy), HHS, Department of the Interior (Interior), Treasury, the National Aeronautics and Space Administration (NASA), and the Nuclear Regulatory Commission (NRC) acknowledged that they had not developed a cybersecurity risk management strategy that includes the key elements. According to agency officials, this was due to the federated nature of the agency or difficulty in establishing an agency-wide understanding of risk tolerance, among other factors. Further, these agencies stated that they intended to develop such a strategy or were considering doing so. The other four agencies DHS, the Department of Housing and Urban Development (HUD), Department of Justice (Justice), and Department of Veterans Affairs (VA) stated that they believed their existing documents and policies constituted a risk management strategy. However, we determined that these documents did not constitute an integrated strategy that addressed key elements such as risk tolerance and risk mitigation strategies. Without a comprehensive risk management strategy, the agencies may lack an organization-wide understanding of acceptable risk levels and appropriate risk response strategies to protect their systems and data. Additional details regarding the 23 agencies establishment of cybersecurity risk management strategies are discussed in appendix III. <2.3. Agencies Established Policies for Implementing Risk Management Activities, but Gaps Remain in Some Areas> Most of the 23 agencies had established policies that include elements to ensure their activities are guided by risk-based decisions. However, many agencies had gaps in one or more of these areas. Specifically, six agencies (DHS, Education, Justice, Treasury, NSF, and SSA) addressed all of these areas in their policies and procedures, while the remaining 17 agencies had not addressed at least one area. Table 5 discusses, for each of these elements, which of the 23 agencies had addressed it in their policies. Eleven agencies Agriculture, Commerce, Energy, HHS, Interior, Labor, EPA, GSA, NASA, NRC, and OPM generally agreed that their policies lacked identified elements and either stated that they intended to update policies to include them or would consider doing so. The remaining six agencies HUD, State, Transportation, VA, USAID, and SBA stated that they believed their policies addressed these elements or that they carried out these activities in practice, but did not provide documentation of policies that addressed them. Without ensuring that their policies include all key risk management activities, the agencies may not be taking the foundational steps needed to effectively identify and prioritize activities to mitigate cybersecurity risks that could result in the loss of sensitive data or compromise of agency systems. Additional details on the agencies risk management policies are provided in appendix IV. <2.4. About Half of the Agencies Developed an Agency- Wide Cybersecurity Risk Assessment Process> Twelve of the 23 civilian CFO Act agencies had developed a process or mechanism for conducting an agency-wide cybersecurity risk assessment. Specifically, these agencies (Agriculture, Education, Energy, DHS, HUD, Interior, Justice, Labor, State, Transportation, NSF, and SSA) had developed processes for aggregating system-level data and analyzing them to assess overall cybersecurity risk to agency operations and assets. 
For example, these 12 agencies developed scorecards or dashboards that provided agency-wide views of key indicators aggregated from system-level information and risk scores for agency components. Officials from seven of these agencies described how these assessments enable them to make enterprise-wide decisions on prioritizing and remediating risks. The remaining 11 agencies (Commerce, GSA, HHS, NASA, NRC, Treasury, VA, EPA, OPM, SBA, and USAID) offered a variety of reasons for why they did not develop a process for assessing cybersecurity risks at the agency level. Five agencies stated that they were still working to develop or acquire tools that will allow them to aggregate system-level data, and three of these noted that they expected further implementation of DHS s CDM initiative to provide this capability. The other six agencies stated that they did conduct such an assessment in practice, but did not provide sufficient documentation of the process they use. Without a means of aggregating and assessing cybersecurity risks arising from their information systems to the organizational level, these 11 agencies may be missing opportunities to identify trends or prioritize investments in cybersecurity risk mitigation activities in order to target widespread or systemic risks to the systems and organization. Additional details of agencies processes for conducting organization-wide cyber risk assessments are contained in appendix V. <2.5. Most Agencies Did Not Fully Establish Their Approach to Coordinating between Cybersecurity and Enterprise Risk Management> Ten of the 23 civilian CFO Act agencies provided evidence of having a fully established process for coordination between their cybersecurity risk executive and the entity responsible for overall ERM functions. Five agencies provided evidence of a partially established process, and eight could not provide evidence of such a process. The ten agencies with fully established processes included this coordination as part of their defined and documented ERM governance structure and process. The agencies took steps to ensure such coordination in a variety of ways. For example, eight agencies, including Education and USAID, established a specific body, such as a risk management council, with responsibility for ERM. These agencies included their cybersecurity risk executive in the council s membership in order to facilitate coordination. Other agencies, such as the National Science Foundation, ensured coordination through regular reporting or briefings between their cybersecurity risk executive and their ERM governance structure. In addition, five agencies partially established an approach to coordination in this area. These agencies provided some evidence of coordination activities, but had not formally defined or documented this coordination as part of their ERM structure or process. Specifically, four of these agencies (Justice, the Department of Transportation (Transportation), the Environmental Protection Agency (EPA), and the Social Security Administration (SSA)), provided evidence of occasional coordination between their cybersecurity risk executive and officials responsible for ERM. However, they did not fully define and document their ERM governance structures and processes, including how coordination with the cybersecurity risk executive was to take place. One agency GSA had not formally documented the position or responsibilities of the cybersecurity risk executive in its policy. 
Thus, the agency could not show that the risk executive was involved in ERM activities, although the agency board responsible for ERM does include the agency CIO as a co-chair. Although they did not provide evidence of a fully documented process, officials from these five agencies stated that they perform this coordination in practice. However, documenting these processes would help ensure a consistent, rather than ad-hoc, approach to communication and coordination. Lastly, eight agencies had not established an approach to coordination in this area. In particular, these agencies (Agriculture, HHS, Interior, VA, DHS, State, Treasury, and NRC) either did not have an ERM governance structure and/or did not provide evidence of a process for coordination between their ERM governance structure and their cybersecurity risk executive. Officials from two of these agencies stated that they were still in the process of formalizing their approach to ERM, while the other six stated that such coordination occurs, even if processes may not be fully documented. However, as noted previously, documenting these processes would help ensure a consistent, rather than ad-hoc, approach to communication and coordination. Without regular coordination between the cybersecurity risk executive and broader ERM entity, senior leadership responsible for ERM may not be fully aware of significant cybersecurity risks and, thus, may not be positioned to address them in the context of other risks and their potential impacts on the mission of the agency. Additional details on agencies coordination processes are provided in appendix VI. <3. Agencies Identified a Variety of Challenges in Developing and Implementing Cybersecurity Risk Management Programs> Officials responsible for cybersecurity risk management at a majority of the 23 civilian CFO Act agencies reported eight challenges in establishing and implementing cybersecurity risk management programs. Most commonly cited were challenges related to hiring and retaining qualified personnel, competing priorities between cybersecurity and agency mission or operations, and establishing and implementing consistent cybersecurity risk management policies and procedures. Figure 3 shows the challenges identified and the number of agencies reporting each challenge. <3.1. Hiring and Retaining Key Cybersecurity Risk Management Personnel> All of the 23 civilian CFO Act agencies reported hiring and retaining personnel to fill key cybersecurity risk management positions as a challenge in establishing a cybersecurity risk management program. In particular, six agencies cited the lengthy federal hiring process, and 14 noted the difficulty in competing with private-sector companies in salary and other benefits. Further, 11 agencies noted that there is a shortfall in candidates with the skills needed for cybersecurity risk management. For example: NASA s Chief Cyber Risk Officer noted that cybersecurity risk management is a multi-disciplinary field that blends technical cyber expertise with project management principles and a business-focused management background. This official stated that it is difficult to find talent that possesses this multi-disciplinary experience, in part, because current government marketing for cybersecurity skill sets advertise for purely technical skills. The official added that, currently, the government lacks clearly defined roles for cyber risk management as a dedicated job function. 
HUD s CIO saw this challenge as part of a larger shortfall of this highly in-demand resource and noted that HUD must compete with tech giants and Silicon Valley startups for qualified personnel. The official stated that the executive order providing direct hiring authorities for cybersecurity positions provides assistance, though the department still needs to be creative in enhancing retention and recruitment efforts through bonuses and other incentives. A key to having a successful cybersecurity program is having a well- trained, highly qualified workforce that is versed in identifying cyber threats and recognizes steps to take once confronted with them. Our work has identified difficulties in recruiting and retaining qualified cybersecurity professionals as a continuing challenge. If agencies are unable to hire and retain qualified cybersecurity risk management personnel, they will be hindered in establishing effective programs for cybersecurity risk management. <3.2. Managing Competing Priorities between Operations and Cybersecurity> Nineteen of the 23 civilian CFO Act agencies reported competing priorities between agency mission operations and cybersecurity as a challenge. In particular, 12 agencies noted that cybersecurity requirements are sometimes perceived as impeding mission activities, such as deploying systems, sharing information, or providing public services. In addition, four agencies highlighted the competition for limited resources between cybersecurity risk management activities and operational or mission needs. For example: HHS s Acting Deputy CISO stated that, due to the federated nature of the agency and the broad spectrum of its missions and business functions, there is often a disconnect between security and operational personnel. As an example, the official stated that Operating Divisions that are research or academics focused will require increased information sharing and flexibility, but this often conflicts with cybersecurity concepts and processes. Interior s Deputy CIO stated that the need to balance mission priorities with those related to cybersecurity risk management leads to fiscal and operational challenges when making investment, architectural, and operational decisions. NIST emphasizes determining the relative importance of the mission/business functions in order to make the appropriate level of risk management investment. If agencies are unable to establish priorities among cybersecurity and operational needs, they may be challenged in allocating resources appropriately to ensure their systems and information are appropriately secured. <3.3. Establishing and Implementing Consistent Cybersecurity Risk Management Policies and Procedures> Eighteen of the 23 civilian CFO Act agencies reported challenges in establishing and implementing consistent cybersecurity risk management policies and procedures across the organization. Eight agencies cited challenges in this area arising from the difficulty in ensuring consistency across a federated or decentralized organization, while other factors included training staff and making them aware of policies, and the need to integrate cybersecurity policies with missions and operations. For example: EPA s CISO related that challenges in consistent implementation of policies and procedures include the need to train individuals involved in the risk management process, address different views of risk appetite within the agency, and deal with varying perspectives on the importance of cybersecurity, among other things. 
OPM s Deputy CISO highlighted that frequent changes in the agency s leadership (e.g., having eight CIOs since 2012) had led to challenges with the agency s ability to implement consistent policies in an ongoing, streamlined manner. As we have previously reported, CIOs and former agency IT executives believed it was necessary for a CIO to stay in office for 3 to 5 years to be effective and 5 to 7 years to fully implement major change initiatives in large public sector organizations. In addition, the Deputy CISO stated that the establishment and implementation of cybersecurity risk management policies and procedures has been viewed as a secondary responsibility, to be accomplished when more pressing and immediate operational concerns do not need attention. NIST has emphasized the importance of a consistent approach in order for cybersecurity risk management to succeed at all levels of an agency. If agencies are unable to establish consistent cybersecurity risk management policies and procedures, they may not be able to effectively prioritize and implement security and privacy activities to protect their most critical assets and systems. <3.4. Establishing and Implementing Standardized IT Capabilities> Eighteen of the 23 civilian CFO Act agencies reported challenges in establishing and implementing standardized IT capabilities across the organization. Eleven of these agencies noted that decentralized or federated organizations create difficulty in implementing standardized, agency-wide tools and solutions to manage cybersecurity risks. In addition, four agencies cited issues with legacy systems, which may not always be compatible with capabilities intended to be used agency wide. For example: The Department of Commerce s (Commerce) Deputy CISO stated that, because Commerce is a largely federated agency, with each bureau operating and maintaining its own environment, managing a truly enterprise solution is challenging in numerous areas. For example, the official stated that the department cannot control access at bureaus due to disconnected networks, different security offices and policies, and even different logical access policies. The official added that a change in governance and thinking toward common enterprise tools and solutions requires a shift in management and thinking across the department and its bureaus. Energy s Acting Deputy CIO for Cybersecurity stated that the department is working, to the degree possible, to implement enterprise solutions for cybersecurity and continuous monitoring; however, because the enterprise is comprised of laboratories and sites with very diverse mission sets, doing so is always challenging. This official added that the department has embraced the DHS CDM initiative, which will be leveraged to standardize some IT cybersecurity capabilities, but it does not have a single standardized solution across the enterprise. OMB recently noted that an agency s ability to mitigate security vulnerabilities becomes more complex in federated agencies, where there are not standardized procedures or technology across the organization. The challenges in implementing standardized IT capabilities may hinder these agencies in applying a consistent level of protection to their systems and data. <3.5. Receiving Quality Data to Provide Visibility into Risks> Eighteen of the 23 civilian CFO Act agencies reported that they had experienced challenges in receiving quality data (e.g., accurate, timely information on threats and vulnerabilities). 
Twelve of these agencies expressed challenges in receiving data from all parts of their agencies or stated that they relied on manual reporting from their components, which did not provide real-time visibility into risks. In addition, six agencies cited difficulties in combining data from disparate sources into an agency-wide view of risk. For example: DHS s Acting Director of Governance and Executive Management noted that the department s management currently depends on its components to submit timely and accurate information on cybersecurity vulnerabilities instead of having real-time, centralized reporting of data. The official added that DHS expects to address this challenge through implementation of CDM centralized reporting to the DHS Dashboard on a near real-time basis and other tools and processes for enterprise data collection. State s Enterprise Risk Officer for Cybersecurity reported that threat information is difficult to gather with the specificity needed to make strategic decisions. The official added that, with regard to vulnerability data, sufficient data exist and are gathered on a regular basis; however, it is difficult in a large global enterprise to prioritize actions without credible information on the likelihood of a threat or its impact on the agency s mission. NIST emphasizes that risk monitoring tools, techniques, and procedures can increase risk awareness and help senior leaders develop a better understanding of the ongoing risk to organizational operations and assets. If the agencies are unable to consistently receive quality, timely data from their entire organizations, they will continue to be challenged in making effective decisions to address organization-wide cybersecurity risks. <3.6. Using NIST and OMB Guidance> Sixteen of 23 civilian CFO Act agencies reported the lack of sufficiency, clarity, or usefulness of NIST and/or OMB guidance for cybersecurity risk management as a challenge. Six agencies stated that there was a lack of practical instruction to assist agencies in implementing guidance. Six agencies also stated that various guidance documents are not always consistent or easy to understand. Six agencies also expressed a need for guidance to address new technologies or emerging areas such as the use of cloud providers or establishing cybersecurity risk management programs at all levels of an organization. For example: HHS s Acting Deputy CISO stated that, for all the positive aspects of the NIST guidance, there is a lack of a centralized document or road map that ties all the documents together from a cybersecurity standpoint. Also, the official stated that the guidance from NIST provides limited direction for producing specific metrics and checklists in support of laws, policies, directives, instructions, and standards. Transportation s CISO stated that current guidance does not always provide agencies with practical ways to implement requirements. For example, the official noted that current OMB guidance on cyber and privacy risk management does not tell agencies how to practically integrate these disciplines, and that frequent updates to NIST guidance that agencies have to respond to might be better applied to identifying practical implementations. The official added that a lack of practical implementation guidance may lead to duplication of effort and inconsistency of outcomes. OMB and NIST play important roles in issuing policies, standards, and guidelines for agencies cybersecurity risk management programs. 
However, if agencies find guidance unclear or insufficient, they will be challenged in implementing key cybersecurity risk management requirements. <3.7. Developing a Strategy to Manage Cybersecurity Risks> Fifteen of the 23 CFO Act agencies reported challenges in developing an agency-wide cybersecurity risk management strategy that includes a statement of risk tolerance and how the agency will assess, respond to, and monitor risks. Ten agencies stated that they faced challenges in establishing an agency-wide risk tolerance statement, while five noted that they faced challenges in implementing a strategy across the agency. For example: Education s Audit Liaison Officer from its Office of the CIO noted that it was a challenge to develop an enterprise-level statement of risk tolerance and that currently risk tolerance decisions were made at the system level by the authorizing official. EPA s CISO reported that it was challenge to establish an agency- wide statement of risk tolerance. This is because it was difficult to determine such factors as how much the mission s operation is worth, how much information resources are worth, and how much negative public perception of the agency costs in terms of money or resources. NIST notes that framing risk through the creation of a cybersecurity risk management strategy establishes a foundation for managing risk and delineates the boundaries for risk-based decisions within an agency. If agencies are challenged in developing cybersecurity risk management strategies, they may be hindered in making consistent decisions for identifying, assessing, and responding to cybersecurity risks. <3.8. Incorporating Cyber Risks into Enterprise Risk Management> Fourteen of the 23 civilian CFO Act agencies reported that incorporating cyber risks into the enterprise risk management process was a challenge. Nine of these agencies noted challenges related to coordination between cybersecurity and ERM, such as establishing effective channels of communication or developing vocabularies for discussing risk that were understandable by all stakeholders. In addition, five agencies noted that their ERM process was still maturing. For example: GSA s Associate Chief Information Officer for Enterprise Planning & Governance stated that a process was implemented to assess cyber risks as part of the formalized ERM process; however, this official noted that additional work is still needed to align and incorporate other regular cybersecurity risk management reporting processes and communication channels into the broader ERM framework. Treasury s Enterprise Cybersecurity Risk Management Officer stated that incorporating cyber risks into ERM is a challenge because cybersecurity risk is not currently quantified in the same way as other risks. The official expressed the need for a standard vocabulary for discussing cyber alongside other risks, adding that this makes it very challenging to integrate cybersecurity risk management into ERM. OMB has stated that an effective enterprise risk management program promotes a common understanding for recognizing and describing potential risks that can impact an agency s mission and the delivery of services to the public. Such risks include strategic, market, cyber, legal, reputational, political, and a broad range of operational risks. 
If agencies do not successfully integrate cyber risks into their ERM processes, they may be hindered in making effective decisions about addressing cybersecurity risks in the context of other risks and their potential impact on agency missions. <4. OMB and DHS Took Steps to Improve Cybersecurity Risk Management; Current Initiatives Address Some but Not All Identified Challenges> In accordance with a recent executive order, OMB and DHS took steps to assess agencies cybersecurity management capabilities. They also identified core actions to be taken, in coordination with agencies, to address cybersecurity risks across the executive branch. Accordingly, OMB and DHS have several initiatives under way to address these risks, and several of these initiatives should help address some of the challenges in establishing cybersecurity risk management programs that the agencies in our review identified. However, these initiatives do not address other challenges identified by a majority of the agencies. <4.1. OMB and DHS Assessed Government-Wide Cybersecurity Risks and Identified Findings Related to Federal Cybersecurity> EO 13800 on Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure emphasizes the importance of reducing cybersecurity risks while also providing exceptional service to the public. The EO aligns with FISMA by holding agency heads accountable for managing cybersecurity risks. Toward this end, it directed agency heads to provide a risk management report to OMB and DHS that documented the agency s risk mitigation and acceptance choices as of May 2017 and describe the agency s action plan to implement the NIST cybersecurity framework. The EO required OMB and DHS to assess each agency s risk management report and OMB, in coordination with DHS, to develop and deliver a risk determination report to the President on whether the risk mitigation and acceptance choices set forth in the agencies reports were appropriate and sufficient to manage the cybersecurity risk to the executive branch as a whole. OMB s and DHS s report was also to include an action plan to, among other things, adequately protect the executive branch, should the risk determination identify insufficiencies in agencies risk mitigation and acceptance choices; establish a regular process to reassess and, if appropriate, reissue the determination and address future recurring and unmet budgetary needs necessary to manage risk to the executive branch; and if appropriate, clarify, reconcile, and reissue policies, standards, and guidelines issued in furtherance of FISMA and the EO, and align them with the NIST cybersecurity framework. In May 2017, OMB issued guidance to agencies for implementing the provisions in EO 13800 on managing cybersecurity risks. This guidance required agencies to, among other things, report on their cybersecurity risk management capabilities using the metrics established for monitoring FISMA implementation. OMB and DHS used the results of the agencies risk management reports and responses to the FISMA reporting metrics to assess agencies capabilities and make risk determinations of agencies performance ( high risk, at risk, or managing risk ). OMB and DHS s process included an assessment of 96 agencies across the executive branch, including the 23 civilian CFO Act agencies in the scope of our review. 
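To make the roll-up behind such a determination concrete, the following Python sketch maps notional FISMA-style capability scores for an agency to one of the three determination categories ("managing risk," "at risk," or "high risk") and tallies results across agencies. This is an illustration only: the metric names, thresholds, and data are assumptions, and the report does not describe OMB's actual scoring methodology.

```python
from statistics import mean

# Hypothetical thresholds; OMB's actual scoring methodology is not described here.
THRESHOLDS = [(0.80, "managing risk"), (0.60, "at risk"), (0.0, "high risk")]

def risk_determination(metric_scores):
    """Map an agency's capability scores (0.0-1.0) to a notional determination."""
    overall = mean(metric_scores.values())
    for floor, label in THRESHOLDS:
        if overall >= floor:
            return label
    return "high risk"

def summarize(agencies):
    """Tally determinations across agencies for a government-wide view."""
    summary = {"managing risk": 0, "at risk": 0, "high risk": 0}
    for scores in agencies.values():
        summary[risk_determination(scores)] += 1
    return summary

if __name__ == "__main__":
    # Illustrative data: two notional agencies scored against three notional metrics.
    agencies = {
        "Agency A": {"identify": 0.90, "protect": 0.85, "detect": 0.80},
        "Agency B": {"identify": 0.55, "protect": 0.60, "detect": 0.50},
    }
    for name, scores in agencies.items():
        print(name, "->", risk_determination(scores))
    print("Government-wide:", summarize(agencies))
```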
In May 2018, OMB published the Federal Cybersecurity Risk Determination Report and Action Plan, in which OMB and DHS determined that 74 percent of the federal agencies participating in the risk assessment process had cybersecurity programs that were either at risk or high risk. The report identified four key findings and actions necessary to address cybersecurity risks across the federal enterprise, as summarized in table 6. The report also described OMB s plans to work with DHS and other federal entities to implement these actions and reduce cybersecurity risks across the government. OMB and DHS also established a process for reassessing and, if necessary, reissuing the agency risk determinations. Specifically, OMB and DHS use the metrics collected during the FISMA reporting process to update each agency s risk management assessment on an ongoing basis. At a minimum, CFO Act agencies must update their metrics quarterly. The quarterly risk management assessment process allows for the monitoring of agency-level risks, and OMB issues guidance yearly codifying this process. In addition, OMB staff stated that they plan to incorporate the overall risk determination into the office s annual FISMA report to Congress, although they noted that this is subject to change. Further, OMB and DHS took steps to align government-wide cybersecurity guidance with the NIST cybersecurity framework. For example, OMB and DHS, in coordination with the federal cybersecurity community, updated the reporting guidance on CIO and Inspector General FISMA metrics to align with the framework. The FISMA metrics leverage the framework as a standard for managing and reducing cybersecurity risks, and the metrics are aligned with the five main functions of the framework to provide agencies with a comprehensive structure for making more informed, risk-based decisions, managing cybersecurity risks across their enterprise, and providing a view of agencies capabilities and potential gaps. <4.2. OMB and DHS Have Several Initiatives Under Way That Can Help Address Some, but Not All, Agency-Identified Challenges> OMB and DHS have several initiatives under way some of them also outlined in OMB s federal cybersecurity report that can assist agencies in meeting challenges related to hiring and retaining cybersecurity risk management personnel, establishing standardized IT capabilities, receiving quality data, and using NIST and OMB guidance. Workforce education initiatives: In November 2018, OMB announced the launch of the Federal Cyber Reskilling Academy pilot program, which is being sponsored by the CIO Council. This program offers current federal employees who do not work in the IT field the opportunity for hands-on training in cybersecurity for 3 months to help them build foundational skills in cyber defense analysis. In addition, the National Initiative for Cybersecurity Careers and Studies is an online resource for cybersecurity training managed by DHS that connects government employees, students, educators, and industry with cybersecurity training providers throughout the nation. The initiative s Federal Virtual Training Environment, for example, is an on-demand cybersecurity training system that contains more than 800 hours of training on a variety of topics, including risk management. These initiatives, if effectively implemented, could help address challenges agencies identified in hiring and retaining cybersecurity risk management personnel. 
Specifically, the Cyber Reskilling Academy has the potential to increase the pool of federal employees with skills that agencies need for cyber risk management. In addition, the Federal Virtual Training Environment can enhance federal employees knowledge of and skills in cybersecurity risk management. Continuous Diagnostics and Monitoring (CDM): DHS s CDM initiative is to provide federal agencies with tools and services that have the intended capability to automate network monitoring, correlate and analyze security-related information, and enhance risk- based decision making at agency and government-wide levels. These tools include sensors that perform automated scans or searches for known cyber vulnerabilities, the results of which can feed into a dashboard that, at an agency level, is intended to alert network managers and enable the agency to allocate resources based on the risk. Summary data from each participating agency s dashboard is expected to be transmitted to the Federal Dashboard, where the data can be used to inform decisions about cybersecurity risks across the federal government. A DHS CDM program official stated that the department plans to continue to deploy capabilities in fiscal year 2019 for asset management, identity and access management, and monitoring network controls and activity. The CDM initiative, if effectively implemented, has the potential to assist in addressing challenges agencies identified in establishing standardized IT capabilities for cybersecurity risk management and improving the quality of data to provide visibility into cyber risks. In particular, the tools and services offered through the program can provide agencies with standardized capabilities for collecting and analyzing cyber risk information. In addition, automated network monitoring and analysis can help agencies that currently must manually collect data from components based on self-reporting. Such data may be less timely and accurate than those collected through the tools available through CDM. Security operations center (SOC) consolidation and maturation: A SOC defends an organization against unauthorized activity within computer networks, including, at a minimum, detecting, monitoring, and analyzing suspicious activity. According to OMB, CISOs report that these centers do not communicate with each other and that they hoard, rather than share, threat information and intelligence. SOC consolidation focuses on centralizing information sharing across the agency, which is intended to improve the data agencies receive to provide visibility into cybersecurity risks. OMB and DHS are working with agencies to assess and enhance the maturity of their SOCs and streamline security operations across their enterprise. Specifically, agencies are required to develop and submit a Cybersecurity operations maturation plan to OMB and DHS by April 2019. Following submission of the plan, agencies are then required to complete SOC maturation, consolidation, or migration to a SOC-as-a-Service provider by September 2020. Similar to CDM, SOC consolidation and maturation initiatives may help address challenges related to standardizing capabilities and collecting quality data, while enhancing enterprise-wide visibility. Consolidation can provide agencies with a standardized set of SOC services, while maturation can increase the quality of data on risks by establishing a baseline set of expected SOC capabilities for executive branch agencies. 
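As a rough illustration of the centralized roll-up that CDM dashboards and consolidated SOCs are intended to provide, the sketch below aggregates notional component-level vulnerability findings into an agency-wide summary by severity and component. The data model, field names, and severity labels are assumptions for illustration and are not CDM's actual schema or data feeds.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Finding:
    """One vulnerability finding reported by a component-level sensor (illustrative)."""
    component: str
    asset: str
    cve: str
    severity: str  # e.g., "critical", "high", "medium", "low"

def agency_summary(findings):
    """Roll component-level findings up into an agency-level view,
    the kind of aggregation an agency dashboard would present."""
    return {
        "total": len(findings),
        "by_severity": dict(Counter(f.severity for f in findings)),
        "by_component": dict(Counter(f.component for f in findings)),
    }

if __name__ == "__main__":
    # Illustrative findings only.
    findings = [
        Finding("Component A", "web-01", "CVE-2019-0001", "critical"),
        Finding("Component A", "db-02", "CVE-2019-0002", "high"),
        Finding("Component B", "mail-01", "CVE-2019-0003", "medium"),
    ]
    print(agency_summary(findings))
```

In practice, the value of this kind of roll-up depends on components feeding data automatically and on a common schema, which is why the agencies discussed earlier cited manual, self-reported data as a barrier to timely, agency-wide visibility.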
Cyber threat framework: OMB and DHS are developing and disseminating a framework, working with the Department of Defense, Office of the Director of National Intelligence, and the National Security Agency, to enable consistent characterization and categorization of cyber threat events. Specifically, the Cyber Threat Framework provides a hierarchical, structured, transparent, and repeatable methodology for characterizing adversarial activities in a standardized way across the federal government. The framework and the related methodology provide for a cybersecurity architecture review that allows an agency to assess its cyber capabilities against its actual threat environment. This includes a gap analysis to determine where agencies may need to enhance their capabilities to defend against key threats. To foster the adoption of the Cyber Threat Framework across the government, DHS in coordination with OMB and the Department of Defense intends to develop and implement a solution that will be available for agencies to use by the end of December 2019. The Cyber Threat Framework, if effectively implemented by civilian federal agencies, can also help address agency challenges related to the quality of data about cyber risks. By providing a standardized framework for understanding cyber threats, it is intended to assist agencies to better identify and prioritize risks, as well as the gaps in their capabilities for protecting against such threats. Inter-agency cyber-focused working groups: In coordination with DHS, OMB established CyberStat review sessions to assist agencies in protecting their systems, networks, and data. Specifically, agency cyber professionals, from the working level to the CIO, meet with DHS subject matter experts to participate in working sessions throughout a 4- to 6-week period to overcome barriers to success in specific cybersecurity programs. During a CyberStat review, DHS provides agencies with guidance on best practices and connects them with other subject matter experts who can provide advice on implementing the NIST framework and cybersecurity risk management practices. In addition, the federal CIO Council has recently issued the CISO Handbook, which was created to educate and inform new and existing CISOs about their role in federal cybersecurity. The council is the principal interagency forum for improving agency practices related to the use, sharing, and performance of federal information resources and part of its governing principles are to adopt and share IT management best practices and to manage risk and ensure privacy and security. Within the CIO Council, the CISO Council is specifically tasked with developing IT security policy and sharing best practices to improve the cybersecurity posture of the United States. Among other things, the CISO Handbook includes information on NIST s cybersecurity framework and how it can be leveraged in conjunction with other NIST risk management publications. CyberStat reviews and the federal CIO Council can provide channels to help agencies in better understanding and implementing guidance from NIST and OMB on cybersecurity risk management. By connecting agencies with best practices and subject matter experts, CyberStat sessions are intended to help agencies, for example, apply the NIST framework and cyber risk management practices. In addition, the CIO Council, through sharing of best practices and issuing publications, can provide guidance on how to more effectively implement federal cybersecurity risk management guidance. 
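Returning to the gap analysis described above for the Cyber Threat Framework, the comparison at its core can be thought of as checking, for each threat category, whether the agency has the defensive capabilities that category calls for. The sketch below shows that comparison using a hypothetical threat-to-capability mapping; the categories and capability names are illustrative assumptions and are not drawn from the actual framework's taxonomy.

```python
# Hypothetical mapping of threat categories to the capabilities needed to counter them;
# the real Cyber Threat Framework taxonomy and any agency's capability list will differ.
THREAT_CAPABILITY_MAP = {
    "phishing": {"email filtering", "user awareness training", "multifactor authentication"},
    "credential theft": {"multifactor authentication", "privileged access management"},
    "data exfiltration": {"data loss prevention", "egress monitoring"},
}

def gap_analysis(implemented):
    """For each threat category, list required capabilities the agency lacks."""
    return {
        threat: sorted(required - implemented)
        for threat, required in THREAT_CAPABILITY_MAP.items()
        if required - implemented
    }

if __name__ == "__main__":
    # Illustrative capability inventory for a notional agency.
    implemented = {"email filtering", "multifactor authentication"}
    for threat, missing in gap_analysis(implemented).items():
        print(f"{threat}: missing {', '.join(missing)}")
```

In a real architecture review, both sides of this comparison would come from the framework's threat characterizations and the agency's assessed capabilities rather than hard-coded sets.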
Although the initiatives under way could address challenges related to hiring and retaining cybersecurity risk management personnel, developing standardized capabilities, acquiring quality data about cyber risks, and using NIST and OMB guidance, the existing initiatives do not address challenges related to managing competing priorities, establishing consistent policies and procedures, incorporating cyber risks into enterprise risk management, and developing an agency-wide strategy for managing cybersecurity risks. Managing competing priorities between cybersecurity and operations: OMB staff stated that its newly developed risk-based budgeting model could help agencies prioritize their cybersecurity investments. This model is intended to tie agencies cybersecurity spending to the FISMA metrics process in order to identify capability and process gaps that pose risks to an agency. OMB plans to disseminate the risk-based budgeting process to enable agency CIOs, CISOs, and Chief Financial Officers to communicate cyber risks effectively across their agencies and to budget strategically for cyber capabilities that address the agency s most critical cybersecurity needs. OMB anticipates being able to provide agencies with additional details surrounding this model in the cybersecurity section of its upcoming fiscal year 2020 guidance to the President s budget. However, while this risk-based approach to cybersecurity budgeting should help agencies prioritize their cybersecurity investments, it does not address issues related to prioritizing between cybersecurity and mission or operational needs. The agencies in our review highlighted that mission or operational priorities can conflict with cybersecurity requirements when, for example, components within an agency have differing views about the relative importance of mission and cybersecurity activities. These issues do not relate to prioritizing investments in cybersecurity but to managing conflicts, or potential conflicts, between cybersecurity and mission needs. Implementing consistent cybersecurity risk management policies and procedures: OMB staff stated that several of OMB s and DHS s initiatives emphasize driving performance through centralized visibility, authority, and reporting. For example, OMB staff stated CDM is intended to establish agencies visibility across the enterprise, as well as government-wide visibility. OMB staff stated the implementation of provisions commonly referred to as the Federal Information Technology Acquisition Reform Act is intended to enhance the role and authority of agency CIOs, particularly with respect to relationships with agency components and accountability for IT costs, performance, and security. Additionally, OMB staff stated the risk management assessment process established in response to EO 13800 emphasizes centralized visibility, authority, and reporting. While these efforts could provide increased visibility and CIO authority, they do not address factors identified by agencies that affected their ability to implement consistent cybersecurity risk management policies and procedures. These include differing views among staff regarding the importance of risks, and frequent changes in leadership, all of which, according to agencies, make consistency difficult to achieve. 
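The risk-based budgeting model described above is intended to tie spending to the capability and process gaps that pose the greatest risk. As a loose illustration of that idea only (the report does not detail OMB's model), the sketch below ranks notional gaps by risk addressed per dollar and allocates a fixed budget accordingly; the gap names, risk weights, costs, and ranking rule are all hypothetical.

```python
# Illustrative capability gaps with notional risk weights and remediation costs;
# OMB's actual risk-based budgeting model is not described in this report.
GAPS = [
    {"gap": "no continuous monitoring of high-value assets", "risk": 9, "cost": 400_000},
    {"gap": "incomplete multifactor authentication coverage", "risk": 8, "cost": 250_000},
    {"gap": "outdated incident response playbooks", "risk": 5, "cost": 50_000},
]

def prioritize(gaps, budget):
    """Rank gaps by risk addressed per dollar and fund them until the budget runs out."""
    ranked = sorted(gaps, key=lambda g: g["risk"] / g["cost"], reverse=True)
    funded, remaining = [], budget
    for gap in ranked:
        if gap["cost"] <= remaining:
            funded.append(gap["gap"])
            remaining -= gap["cost"]
    return funded, remaining

if __name__ == "__main__":
    funded, left = prioritize(GAPS, budget=500_000)
    print("Funded:", funded)
    print("Unallocated budget:", left)
```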
Incorporating cyber risks into ERM: While existing OMB guidance requires agencies to establish ERM programs and NIST guidance requires agencies to establish cybersecurity risk management programs, this guidance does not address how these efforts should be integrated or coordinated. For example, OMB A-123 outlines agencies responsibilities for establishing an ERM capability but does not specifically address how enterprise risk management should incorporate cyber risks. In addition, NIST guidance on cybersecurity risk management recognizes that cybersecurity can be an important component of an organization s overall risk management and states that its information security risk management guidance should be used as part of a more comprehensive ERM program. However, it does not explicitly discuss how to integrate or coordinate cybersecurity risk management and enterprise risk management. Establishing a cybersecurity risk management strategy: OMB noted that the cyber threat framework will provide a more tangible way for agencies to identify and prioritize cyber risks. However, while this framework will allow agencies to better identify and categorize threats and the capabilities needed to counter them, it does not address key aspects of risk framing such as establishing an agency-wide statement of risk tolerance and acceptable risk mitigation strategies. Several agencies noted that they struggled to define risk tolerance and establish criteria for different risk responses that could provide a consistent, agency-wide approach to risk management. Without additional guidance or other processes to identify successful approaches for addressing these challenges, agencies will continue to be hindered in establishing programs for effectively managing their cybersecurity risks. <5. Conclusions> Given the increasing number and sophistication of cyber threats facing federal agencies, it is critical that agencies are well positioned to make consistent, informed risk-based decisions in protecting their systems and information against these threats. While all the agencies in our review have taken steps to establish cybersecurity risk management programs, they have not fully addressed key practices that are foundational to effectively managing cybersecurity risks. In particular, without developing an agency-wide cybersecurity risk management strategy, agencies may lack a consistent approach to managing cybersecurity risks. In addition, while agencies have documented policies and procedures that include many key practices, gaps remain that may hinder their ability to ensure a consistent implementation of risk-based practices. Further, without a process for an agency-wide cybersecurity risk assessment, agencies may be missing opportunities to identify risks that affect their entire organization, and to implement solutions to address them. Finally, establishing processes for coordinating cybersecurity risk information with the entity responsible for enterprise risk management would help ensure that cyber risks are being considered by senior leadership in the context of other risks facing the agency. This inconsistent establishment of cybersecurity risk management practices can be partially attributed to challenges agencies identified in establishing and implementing their cybersecurity risk management programs. 
Specifically, agencies noted a variety of challenges such as hiring qualified staff, competing priorities between cybersecurity and mission needs, implementing consistent policies and procedures, incorporating cyber risks into enterprise risk management processes, and developing a cybersecurity risk management strategy. Addressing these challenges will be an important step toward establishing more effective cybersecurity risk management programs across the 23 agencies. OMB and DHS have taken steps to carry out their responsibilities to identify and address weaknesses across the executive branch, including actions that would address many of the challenges identified by agencies. However, without fully addressing challenges related to prioritization between cybersecurity needs and mission priorities, implementing consistent risk management policies and procedures, incorporating cyber risks into enterprise risk management, and establishing a cybersecurity risk management strategy, OMB and DHS are likely to be missing opportunities to assist agencies in these key areas. Clarified or updated guidance, along with sharing successful practices or lessons learned, could help agencies more fully establish their cybersecurity risk management capacity. <6. Recommendations for Executive Action> We are making the following recommendation to OMB: The Director of OMB should, in coordination with the Secretary of Homeland Security, establish guidance or other means to facilitate the sharing of successful approaches for agencies to address challenges in the areas of (1) managing competing priorities between cybersecurity and operations, such as when operational needs appear to conflict with cybersecurity requirements; (2) implementing consistent cybersecurity risk management policies and procedures across an agency; (3) incorporating cyber risks into enterprise risk management, and (4) establishing agencies cybersecurity risk management strategies. (Recommendation 1) We are also making a total of 57 recommendations to the 23 civilian CFO Act agencies in our review to fully address key practices in their cybersecurity risk management policies and procedures. Appendix VII contains these recommendations. <7. Agency Comments and Our Evaluation> We requested comments on a draft of this report from OMB and the 23 civilian CFO Act agencies included in our review. All the agencies provided responses, as further discussed. In an email from the office s GAO audit liaison on July 8, 2019, OMB did not state whether it agreed or disagreed with our recommendations. However, the office provided technical comments, which we incorporated as appropriate. Of the 23 civilian CFO Act agencies, 17 agencies (Education, Energy, DHS, HUD, Interior, Labor, State, Transportation, VA, USAID, GSA, NASA, NSF, NRC, OPM, SBA, and SSA) concurred with our recommendations; one agency (HHS) partially concurred with our recommendations; three agencies (Commerce, Justice, and Treasury) provided comments but did not state whether they agreed or disagreed with our recommendations; and two agencies (Agriculture and EPA) stated that they had no comments on the report. Multiple agencies also provided technical comments, which we incorporated as appropriate. 
The following 17 agencies concurred with our recommendations and, in most cases, described steps planned or under way to address them: The Department of Education provided written comments in which it concurred with our recommendation and stated that the department will continue its efforts to fully develop a cybersecurity risk management strategy that includes the definition of risk tolerance and acceptable risk response strategies. Education s comments are reprinted in appendix VIII. The Department of Energy provided written comments in which it concurred with our two recommendations and described steps and time frames for addressing them. In one case, regarding our recommendation to update the department s policies to address missing elements, Energy stated that, as of May 2019, it had already completed an update of its policies to implement this recommendation. We intend to follow up with the department and obtain and assess evidence to determine its implementation of this recommendation. Energy s comments are reprinted in appendix IX. In written comments, the Department of Homeland Security stated that it was pleased that our report noted steps that DHS and OMB have taken to improve agencies capabilities for managing cyber risks. DHS also concurred with our two recommendations and described steps it intends to take to address them, along with estimated completion dates. DHS s comments are reprinted in appendix XI. The department also provided technical comments, which we have incorporated as appropriate. The Department of Housing and Urban Development provided written comments in which it thanked GAO for the opportunity to review the report and stated that it concurred with the recommendations. HUD s comments are reprinted in appendix XII. The Department of the Interior provided written comments in which it concurred with our three recommendations. Interior also described planned steps to address the recommendations, such as developing a cybersecurity risk management strategy that includes the key elements and updating its policies. The department s comments are reprinted in appendix XIII. In written comments, the Department of Labor concurred with our recommendation. Labor stated that it intends to take necessary steps to update the department s policies. The department s comments are reprinted in appendix XIV. The Department of State provided written comments in which it concurred with our two recommendations. State also described steps planned or under way to address the recommendations. For example, State described ongoing policy updates to address control monitoring, system-level risk assessments, and the use of risk assessments to inform control tailoring. It also described ongoing steps to align its cybersecurity risk management activities with its ERM governance structure. State s comments are reprinted in appendix XV. The Department of Transportation s Director of Audit Relations & Program Improvement provided comments via email on June 25, 2019, which stated that the department concurs with the findings and recommendations in the draft report. The Department of Veterans Affairs provided written comments in which it concurred with our four recommendations. VA also described actions planned or under way to address the recommendations. 
Regarding our recommendation to establish and document a process for coordination between its cybersecurity and enterprise risk management functions, the department stated that it had already established such a process and requested closure of the recommendation. We intend to follow up with the department and obtain and assess evidence to determine if its actions fully address our recommendation. VA s comments are reprinted in appendix XVI. The U.S. Agency for International Development provided written comments in which it agreed with our two recommendations. USAID also described steps it has planned or under way to address the recommendations, such as amending its guidance to address an organization-wide cybersecurity risk assessment. The agency s comments are reprinted in appendix XVII. In written comments, the General Services Administration stated that it appreciated the opportunity to review the report and concurred with its findings. The agency added that it is implementing an action plan to address the four recommendations. GSA s comments are reprinted in appendix XVIII. The National Aeronautics and Space Administration provided written comments in which it concurred with our two recommendations. NASA also described planned steps to address the recommendations, such as updating its policies and establishing a process for an organization-wide cybersecurity risk assessment, along with estimated completion dates. The agency s comments are reprinted in appendix XIX. The National Science Foundation s GAO liaison provided comments via email on July 3, 2019, which stated that the agency concurred with our recommendation and intends to update its cybersecurity risk management strategy to address the missing elements. The Nuclear Regulatory Commission provided written comments in which it stated that the agency was in general agreement with the findings and recommendations in our draft report. NRC s comments are reprinted in appendix XX. The Office of Personnel Management provided written comments in which it stated that it concurred with our two recommendations. OPM also described planned steps to address the recommendations, such as updating its policies and establishing a process for an organization- wide cybersecurity risk assessment. The agency s comments are reprinted in appendix XXI. In written comments, the Small Business Administration concurred with our three recommendations. SBA described steps planned or under way to address the recommendations, such as updating its cybersecurity risk management strategy and policies and establishing a process for an organization-wide cybersecurity risk assessment, along with estimated completion dates. The agency s comments are reprinted in appendix XXII. In written comments, the Social Security Administration agreed with our recommendation and described planned efforts to further integrate its cybersecurity and enterprise risk management functions. SSA s comments are reprinted in appendix XXIII. One agency the Department of Health and Human Services concurred with three of our recommendations and partially concurred with one recommendation. Specifically, HHS concurred with our recommendations to develop a risk management strategy that includes key elements, establish a process for conducting an agency-wide cybersecurity risk assessment, and establish and document a process for coordination between cybersecurity risk management and enterprise risk management functions. 
Further, HHS described steps planned or under way to address these recommendations. Regarding our recommendation to update department policies to require an organization-wide cybersecurity risk assessment and the use of risk assessments to inform control tailoring, HHS stated that it concurred with the first part of the recommendation, but did not concur with the second part of the recommendation. Specifically, the department described steps it has planned or under way to update its policies to require an organization-wide risk assessment, in accordance with the first part of the recommendation. With respect to the second part of the recommendation, the department pointed to portions of its information security and privacy policy that address the selection of security and privacy controls. However, while these policy statements require adherence to NIST and OMB standards for selecting security controls and require a rationale for tailoring decisions, they do not specifically require the use of risk assessments to inform the tailoring of security controls. As NIST states, organizations apply the tailoring process to align the controls more closely with the specific conditions within the organization and should use risk assessments to inform and guide the tailoring process for organizational information systems and environments of operation. Making this requirement explicit in policy would help HHS ensure that it is applying the appropriate set of controls to its systems; thus, we maintain that our recommendation is still warranted. HHS s comments are reprinted in appendix X. The department also provided technical comments, which we incorporated as appropriate. We received technical comments via email from the GAO audit liaisons at three agencies the Department of Commerce (on June 21, 2019), the Department of Justice (on July 8, 2019), and the Department of the Treasury (on July 3, 2019). The agencies did not state whether they agreed or disagreed with our recommendations. We incorporated their technical comments as appropriate. We received emails from Agriculture s Director of Strategic Planning, Egovernment and Audits on June 19, 2019, and from a Division Director in the Environmental Protection Agency s Office of Information Security and Privacy on July 8, 2019, which stated that their agencies had no comments on the draft report. We are sending copies of this report to the appropriate congressional committees, the heads of the agencies in our review, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9342 or marinosn@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix XXIV. Appendix I: Objectives, Scope, and Methodology The objectives of our review were to examine (1) the extent to which agencies established key elements of a cybersecurity risk management program; (2) what challenges, if any, agencies identified in developing and implementing cybersecurity risk management programs; and (3) what steps the Office of Management and Budget (OMB) and Department of Homeland Security (DHS) have taken to meet their risk management responsibilities under Executive Order (EO) 13800 and to address any challenges agencies face in implementing cybersecurity risk management practices. 
In conducting this engagement, we focused on 23 of the 24 agencies covered by the Chief Financial Officers Act of 1990. To address our first objective, we collected agency policies, procedures, and other documentation and compared them to selected key practices from OMB and National Institute of Standards and Technology (NIST) guidance for cybersecurity risk management. To identify the key practices, we reviewed OMB guidance pertaining to cybersecurity risk management, including OMB Circular A-130: Managing Information as a Strategic Resource, as well as Circular A-123: Management s Responsibility for Enterprise Risk Management and Internal Control, which outlines agency responsibilities for enterprise risk management. We also reviewed NIST guidance, including the Framework for Improving Critical Infrastructure Cybersecurity; Special Publication 800-30: Guide for Conducting Risk Assessments; Special Publication 800-37: Guide for Applying the Risk Management Framework to Federal Information Systems, and Special Publication 800-39: Managing Information Security Risk: Organization, Mission, and Information System View. In selecting the key practices for our assessment, we focused on those practices identified by OMB and NIST as foundational for providing an organization-wide approach to cybersecurity risk management. We collected and analyzed documentation and other information from each agency related to cybersecurity risk management and compared it to the identified key practices. We supplemented our analyses with interviews with relevant agency officials to discuss the development of their policies. We discussed the results of our initial analysis of agency documentation with agency officials to validate our findings, collect additional evidence, and identify causes for any gaps. We then determined whether the evidence provided by the agency addressed each identified criteria element. Specifically, for each criteria element, we determined if the evidence fully addressed the element ( met ), addressed some, but not all, aspects of the element ( partially met ), or did not address any aspects of the element ( not met ). To address the second objective, we administered structured interview questions to the agencies to determine what challenges, if any, they face in developing and implementing policies and procedures for managing cybersecurity risk. We developed a list of potential challenges based on our assessment of agencies policies and procedures, a review of OMB s risk report on agencies cybersecurity risk management capabilities, and reviews of prior GAO reports in areas related to cybersecurity risk management. We worked with GAO methodologists to develop a set of structured interview questions that were sent to the agencies and asked them to indicate if they faced each of these, as well as any additional, challenges, and to provide specific examples. We received responses from all 23 agencies in our review and analyzed them to identify those challenges that were indicated by a majority of the agencies. We excluded from our counts agencies that stated they did not have challenges in a particular area. We also identified common themes within the challenge areas. To address the third objective, we reviewed EO 13800 and implementation guidance issued by OMB, as well as relevant reports and other documents, including OMB s Federal Cybersecurity Risk Determination Report and Action Plan, OMB memos, and supporting documentation for DHS initiatives. 
We also interviewed OMB and DHS officials with government-wide cybersecurity responsibilities to gain an understanding of initiatives under way to address their responsibilities under the order, and that could help address challenges identified by the agencies. We then compared these initiatives to the responses we received from agencies to determine if there were any gaps between the challenges and the ongoing initiatives. Specifically, for each challenge identified by a majority of the agencies in our review, we determined if any of the initiatives under way would address them based on a review of documentation associated with the initiatives as well as discussions with OMB and DHS officials. We conducted this performance audit from February 2018 to July 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Details on the Extent to Which Agencies Established a Cybersecurity Risk Executive Function Twenty-two of the 23 civilian Chief Financial Officers Act agencies in our review established and documented the role of the cybersecurity risk executive. Agencies varied in assigning this responsibility to the chief information officer (CIO), chief information security officer (CISO), or another official or entity. Table 7 provides details on our assessment. Appendix III: Details on the Extent to Which Agencies Developed a Cybersecurity Risk Management Strategy Of the 23 civilian Chief Financial Officers Act agencies, seven fully established a cybersecurity risk management strategy that included key elements recommended by National Institute of Standards and Technology (NIST) guidance. Specifically, these seven agencies developed strategies to guide how cybersecurity risk is to be framed, assessed, responded to, and monitored. In addition, five of the 23 agencies partially developed a cybersecurity risk management strategy, but their strategies did not address certain required elements. The remaining 11 agencies did not develop an agency-wide cybersecurity risk management strategy. Table 8 provides details on our assessment. Appendix IV: Details on the Extent to Which Agencies Developed Risk-Based Policies and Procedures The following elements, identified in NIST guidance, should be addressed in policies and procedures to facilitate risk-based decision making in securing information systems and data. Most of the 23 civilian Chief Financial Officers Act agencies addressed the majority of the key practices for incorporating risk-based decision- making in their policies and procedures. However, most of the agencies also had gaps in one or more of these areas. Specifically, six agencies addressed all the elements in their policies and procedures, and the remaining 17 were missing at least one. Table 10 provides details on our assessment of the agencies policies. Appendix V: Details on the Extent to Which Agencies Developed an Organization-Wide Cybersecurity Risk Assessment Of the 23 civilian Chief Financial Officers Act agencies, 12 developed a process for an agency-wide cybersecurity risk assessment. 
Specifically, these agencies developed processes for aggregating system-level data and analyzing them to assess overall cybersecurity risk to agency operations and assets. The remaining 11 agencies did not establish such a process. Table 11 provides details on our assessment. Appendix VI: Details on Agencies Processes for Coordination between Cybersecurity and Enterprise Risk Management Of the 23 civilian Chief Financial Officers Act agencies, 10 fully established a process or mechanism for coordination between their cybersecurity risk executive and their enterprise risk management (ERM) governance structure, five agencies partially established such a process, and the remaining eight agencies did not provide evidence of coordination. Table 12 provides details on our assessment. Appendix VII: Recommendations to Departments and Agencies We are making a total of 57 recommendations to the 23 civilian Chief Financial Officers Act agencies in our review to fully address key practices in their cybersecurity risk management policies and procedures. The Secretary of Agriculture should take the following three actions: Develop a cybersecurity risk management strategy that includes the key elements identified in this report. (Recommendation 2) Update the department s policies to require (1) the use of risk assessments to inform security control tailoring and (2) the use of risk assessments to inform plan of actions and milestones (POA&M) prioritization. (Recommendation 3) Establish and document a process for coordination between cybersecurity risk management and enterprise risk management functions. (Recommendation 4) The Secretary of Commerce should take the following two actions: Update the department s policies to require (1) an organization-wide cybersecurity risk assessment and (2) the use of risk assessments to inform POA&M prioritization. (Recommendation 5) Establish a process for conducting an organization-wide cybersecurity risk assessment. (Recommendation 6) The Secretary of Education should take the following action: Fully develop a cybersecurity risk management strategy that includes the key elements identified in this report. (Recommendation 7) The Secretary of Energy should take the following two actions: Develop a cybersecurity risk management strategy that includes the key elements identified in this report. (Recommendation 8) Update the department s policies to require (1) an organization-wide cybersecurity risk assessment and (2) the identification of common controls. (Recommendation 9) The Secretary of Health and Human Services should take the following four actions: Develop a cybersecurity risk management strategy that includes the key elements identified in this report. (Recommendation 10) Update the department s policies to require (1) an organization-wide cybersecurity risk assessment and (2) the use of risk assessments to inform security control tailoring. (Recommendation 11) Establish a process for conducting an organization-wide cybersecurity risk assessment. (Recommendation 12) Establish and document a process for coordination between cybersecurity risk management and enterprise risk management functions. (Recommendation 13) The Secretary of Homeland Security should take the following two actions: Develop a cybersecurity risk management strategy that includes the key elements identified in this report. (Recommendation 14) Establish and document a process for coordination between cybersecurity risk management and enterprise risk management functions. 
(Recommendation 15) The Secretary of Housing and Urban Development should take the following two actions: Develop a cybersecurity risk management strategy that includes the key elements identified in this report. (Recommendation 16) Update the department's policies to require the use of risk assessments to inform POA&M prioritization. (Recommendation 17) The Secretary of the Interior should take the following three actions: Develop a cybersecurity risk management strategy that includes the key elements identified in this report. (Recommendation 18) Update the department's policies to require an organization-wide cybersecurity risk assessment. (Recommendation 19) Establish and document a process for coordination between cybersecurity risk management and enterprise risk management functions. (Recommendation 20) The Attorney General should take the following two actions: Develop a cybersecurity risk management strategy that includes the key elements identified in this report. (Recommendation 21) Fully establish and document a process for coordination between cybersecurity risk management and enterprise risk management functions. (Recommendation 22) The Secretary of Labor should take the following action: Update the department's policies to require (1) the use of risk assessments to inform control tailoring and (2) the use of risk assessments to inform POA&M prioritization. (Recommendation 23) The Secretary of State should take the following two actions: Update the department's policies to require (1) an organization-wide risk assessment, (2) an organization-wide strategy for monitoring control effectiveness, (3) system-level risk assessments, (4) the use of risk assessments to inform security control tailoring, and (5) the use of risk assessments to inform POA&M prioritization. (Recommendation 24) Establish and document a process for coordination between cybersecurity risk management and enterprise risk management functions. (Recommendation 25) The Secretary of Transportation should take the following three actions: Fully develop a cybersecurity risk management strategy that includes the key elements identified in this report. (Recommendation 26) Update the department's policies to require an organization-wide risk assessment. (Recommendation 27) Fully establish and document a process for coordination between cybersecurity risk management and enterprise risk management functions. (Recommendation 28) The Secretary of the Treasury should take the following three actions: Develop a cybersecurity risk management strategy that includes the key elements identified in this report. (Recommendation 29) Establish a process for conducting an organization-wide cybersecurity risk assessment. (Recommendation 30) Establish and document a process for coordination between cybersecurity risk management and enterprise risk management functions. (Recommendation 31) The Secretary of Veterans Affairs should take the following four actions: Develop a cybersecurity risk management strategy that includes the key elements identified in this report. (Recommendation 32) Update the department's policies to require an organization-wide cybersecurity risk assessment. (Recommendation 33) Establish a process for conducting an organization-wide cybersecurity risk assessment. (Recommendation 34) Establish and document a process for coordination between cybersecurity risk management and enterprise risk management functions.
(Recommendation 35) The Administrator of USAID should take the following two actions: Update the agency s policies to require (1) an organization-wide cybersecurity risk assessment and (2) the use of risk assessments to inform control tailoring. (Recommendation 36) Establish a process for conducting an organization-wide cybersecurity risk assessment. (Recommendation 37) The Administrator of EPA should take the following four actions: Fully develop a cybersecurity risk management strategy that includes the key elements identified in this report. (Recommendation 38) Update the agency s policies to require an organization-wide cybersecurity risk assessment. (Recommendation 39) Establish a process for conducting an organization-wide cybersecurity risk assessment. (Recommendation 40) Fully establish and document a process for coordination between cybersecurity risk management and enterprise risk management functions. (Recommendation 41) The Administrator of General Services should take the following four actions: Designate and document a risk executive function with responsibilities for organization-wide cybersecurity risk management. (Recommendation 42) Update the agency s policies to require an organization-wide cybersecurity risk assessment. (Recommendation 43) Establish a process for conducting an organization-wide cybersecurity risk assessment. (Recommendation 44) Fully establish and document a process for coordination between cybersecurity risk management and enterprise risk management functions. (Recommendation 45) The Administrator of NASA should take the following two actions: Update the agency s policies to require (1) an organization-wide risk assessment and (2) the use of risk assessments to inform POA&M prioritization. (Recommendation 46) Establish a process for conducting an organization-wide cybersecurity risk assessment. (Recommendation 47) We are not making a recommendation to NASA to establish a cybersecurity risk management strategy because we previously made such a recommendation, which remains open. The Director of NSF should take the following action: Fully develop a cybersecurity risk management strategy that includes the key elements identified in this report. (Recommendation 48) The Chairman of NRC should take the following four actions: Develop a cybersecurity risk management strategy that includes the key elements identified in this report. (Recommendation 49) Update the agency s policies to require (1) an organization-wide cybersecurity risk assessment and (2) the use of risk assessments to inform POA&M prioritization. (Recommendation 50) Establish a process for conducting an organization-wide cybersecurity risk assessment. (Recommendation 51) Establish and document a process for coordination between cybersecurity risk management and enterprise risk management functions. (Recommendation 52) The Director of OPM should take the following two actions: Update the agency s policies to require (1) an organization-wide cybersecurity risk assessment and (2) the use of risk assessments to inform control tailoring. (Recommendation 53) Establish a process for conducting an organization-wide cybersecurity risk assessment. (Recommendation 54) The Administrator of SBA should take the following three actions: Fully develop a cybersecurity risk management strategy that includes the key elements identified in this report. 
(Recommendation 55) Update the agency's policies to require (1) an organization-wide cybersecurity risk assessment and (2) the use of risk assessments to inform POA&M prioritization. (Recommendation 56) Establish a process for conducting an organization-wide cybersecurity risk assessment. (Recommendation 57) The Commissioner of SSA should take the following action: Fully establish and document a process for coordination between cybersecurity risk management and enterprise risk management functions. (Recommendation 58) Appendix VIII: Comments from the Department of Education Appendix IX: Comments from the Department of Energy Appendix X: Comments from the Department of Health and Human Services Appendix XI: Comments from the Department of Homeland Security Appendix XII: Comments from the Department of Housing and Urban Development Appendix XIII: Comments from the Department of the Interior Appendix XIV: Comments from the Department of Labor Appendix XV: Comments from the Department of State Appendix XVI: Comments from the Department of Veterans Affairs Appendix XVII: Comments from the U.S. Agency for International Development Appendix XVIII: Comments from the General Services Administration Appendix XIX: Comments from the National Aeronautics and Space Administration Appendix XX: Comments from the Nuclear Regulatory Commission Appendix XXI: Comments from the Office of Personnel Management Appendix XXII: Comments from the Small Business Administration Appendix XXIII: Comments from the Social Security Administration Appendix XXIV: GAO Contact and Staff Acknowledgments <8. GAO Contact> <9. Staff Acknowledgments> In addition to the individual named above, Marisol Cruz Cain (assistant director), Lee McCracken (analyst in charge), Kiana Beshir, Roger Bracy, Chris Businsky, Alan Daigle, John de Ferrari, Nancy Glover, Franklin Jackson, Vernetta Marquis, Carlton Maynard, Scott Pettis, Tomas Ramirez, Andrew Stavisky, and Shaunyce Wallace made significant contributions to this report. Why GAO Did This Study
Federal agencies face a growing number of cyber threats to their systems and data. To protect against these threats, federal law and policies emphasize that agencies take a risk-based approach to cybersecurity by effectively identifying, prioritizing, and managing their cyber risks. In addition, OMB and DHS play important roles in overseeing and supporting agencies' cybersecurity risk management efforts.
GAO was asked to review federal agencies' cybersecurity risk management programs. GAO examined (1) the extent to which agencies established key elements of a cybersecurity risk management program; (2) what challenges, if any, agencies identified in developing and implementing cybersecurity risk management programs; and (3) steps OMB and DHS have taken to meet their risk management responsibilities and address any challenges agencies face. To do this, GAO reviewed policies and procedures from 23 civilian Chief Financial Officers Act of 1990 agencies and compared them to key federal cybersecurity risk management practices, obtained agencies' views on challenges they faced, identified and analyzed actions taken by OMB and DHS to determine whether they address agency challenges, and interviewed responsible agency officials.
What GAO Found
Key practices for establishing an agency-wide cybersecurity risk management program include designating a cybersecurity risk executive, developing a risk management strategy and policies to facilitate risk-based decisions, assessing cyber risks to the agency, and establishing coordination with the agency's enterprise risk management (ERM) program. Although the 23 agencies GAO reviewed almost always designated a risk executive, they often did not fully incorporate other key practices in their programs:
Twenty-two agencies established the role of cybersecurity risk executive, to provide agency-wide management and oversight of risk management.
Sixteen agencies have not fully established a cybersecurity risk management strategy to delineate the boundaries for risk-based decisions.
Seventeen agencies have not fully established agency- and system-level policies for assessing, responding to, and monitoring risk.
Eleven agencies have not fully established a process for assessing agency-wide cybersecurity risks based on an aggregation of system-level risks.
Thirteen agencies have not fully established a process for coordinating between their cybersecurity and ERM programs for managing all major risks.
Until they address these practices, agencies will face an increased risk of cyber-based incidents that threaten national security and personal privacy.
Agencies identified multiple challenges in establishing and implementing cybersecurity risk management programs (see table).
What GAO Recommends
GAO is making 57 recommendations to the 23 agencies and one to OMB, in coordination with DHS, to assist agencies in addressing challenges. Seventeen agencies agreed with the recommendations, one partially agreed, and four, including OMB, did not state whether they agreed or disagreed. GAO continues to believe all its recommendations are warranted.
<1. Background> <1.1. Requirements and Guidance Related to Federal Workforce Diversity> Title VII of the Civil Rights Act of 1964 and Section 501 of the Rehabilitation Act of 1973 mandate that all federal personnel decisions be made without discrimination on the basis of race, color, religion, sex, national origin, or disability and require that agencies establish a program of equal employment opportunity for all federal employees and applicants. EEOC has oversight responsibility for federal agencies' compliance with EEOC regulations, which direct agencies to maintain a continuing affirmative program to promote equal opportunity and to identify and eliminate discriminatory practices and policies. In order to implement the programs described above, each federal agency is required to designate an EEO director. The EEO director's responsibilities include, among others, providing for counseling of aggrieved individuals, providing for the receipt and processing of individual and class complaints of discrimination, and advising agency leadership regarding equal employment opportunity matters. EEOC calls for federal agencies to conduct a continuing campaign to eradicate every form of prejudice or discrimination from the agency's personnel policies, practices, and working conditions. EEOC's Management Directive 715 (MD-715) calls for agencies to take appropriate steps to ensure that all employment decisions are free from discrimination and provides policy guidance and standards for establishing and maintaining effective affirmative programs of equal employment opportunity. The directive also sets forth the standards by which EEOC will review the sufficiency of agencies' Title VII and Rehabilitation Act programs, including periodic agency self-assessments and the removal of barriers to free and open workplace competition. MD-715 guidance further requires agencies to report annually on the status of activities undertaken pursuant to their equal employment opportunity programs and activities. Federal agencies are required to submit an annual MD-715 report to EEOC on the status of their EEO programs. In addition to including employee demographic data, among other things, the MD-715 reports are to include an agency self-assessment checklist, plans to correct any program deficiencies, and a description of any barrier analysis conducted and any plans to eliminate identified barriers. As part of a model EEO program to prevent unlawful discrimination, federal agencies are to regularly evaluate their employment practices to identify barriers to EEO in the workplace, take measures to eliminate identified barriers, and report annually on these efforts to EEOC, according to MD-715. EEOC's MD-715 defines a barrier as an agency policy, procedure, practice, or condition that limits, or tends to limit, employment opportunities for members of a particular gender, race, or ethnic background or for individuals on the basis of disability status. According to EEOC's MD-715 instructions, many employment barriers are built into the organizational and operational structures of an agency and are embedded in the agency's day-to-day procedures and practices. <1.2. USAID's Efforts to Increase Workforce Diversity> USAID's Office of Civil Rights and Diversity (OCRD) administers programs intended to promote equal opportunity, foster diversity at all levels and occupations, and sustain an inclusive workforce. According to USAID, OCRD strives to maintain a model EEO program.
As table 1 shows, OCRD consists of the Complaints and Resolution Division, the Reasonable Accommodations Division, the Diversity and Inclusion Division, and the Program Operations Division. OCRD collaborates with the Office of Human Capital and Talent Management (HCTM) to develop and implement recruitment strategies intended to support a diverse and well-qualified workforce; consults with agency officials such as the Executive Diversity Council; partners with USAID employee resource groups to extend outreach opportunities and develop strategies of inclusion within USAID; and addresses allegations of discrimination, harassment, or retaliation. <1.2.1. Recruitment> According to a June 2019 testimony by USAID s Chief Human Capital Officer, OCRD collaborates with HCTM on the following recruitment programs intended to increase diversity: Donald Payne International Development Fellowship. Launched in 2012, the Donald Payne International Development Fellowship targets underrepresented groups in USAID s Foreign Service. According to USAID officials, the purpose of the Payne Fellowship is to enhance diversity in the Foreign Service through outreach and strategic efforts focused on minority serving institutions. USAID provides support for selected candidates for 2 years of graduate school as well as an internship on Capitol Hill and another at a USAID mission overseas. On completion of the graduate program and internships, the selected candidate is appointed as a Foreign Service officer with a 5-year service agreement. According to USAID, each year the Payne Fellowship supports 10 fellows entering USAID s Foreign Service. Development Diplomats in Residence. Established in 2016, the Development Diplomats in Residence program aims to educate, recruit, and channel talent to USAID by placing senior USAID officials at universities. These officials provide guidance and advice on careers, internships, and fellowships to students, professionals, and faculty members at minority-serving institutions. Two USAID career Senior Foreign Service officers serve in this role at California State University, Long Beach, and at Morehouse College, respectively. Pathways Internship Program. The Pathways Internship Program provides targeted diversity recruitment, salaries, and payments for Pathways Interns, according to the USAID Chief Human Capital Officer s June 2019 testimony. The testimony states that the overall racial or ethnic minority representation rate in fiscal year 2018 for the Pathways Internship Program was 69 percent and that Hispanics, at 31 percent, represented the largest minority demographic. USAID officials said that the agency views its internship programs as a succession-planning tool designed to convert as many internships as possible into full-time positions. According to USAID, the agency had no Pathways Interns in 2019, as a result of funding limitations, but as of April 2020 was planning 21 internships for 2020. <1.2.2. Training and Career Development> USAID provides training as well as a formal mentoring program intended to support diversity and inclusion, according to USAID officials. OCRD is responsible for providing mandatory agency-wide training on diversity awareness and equal opportunity. USAID officials stated that the agency has mandatory and nonmandatory training on diversity and inclusion issues. For example, USAID provides online mandatory training classes on the No FEAR Act and sexual harassment. According to USAID data, 326 people took versions of these courses in 2019. 
USAID also offers nonmandatory in-person classes such as EEO counselor training and unconscious bias training. In 2019, 17 people took EEO counselor training, and 36 people took USAID's in-person unconscious bias training. Additionally, USAID officials said that external partners of USAID have developed training related to diversity and inclusion, to which OCRD refers employees on request. According to USAID, the agency's mentoring programs build on informal mentoring efforts and support strategic human capital initiatives for recruitment and retention, employee development, succession planning, and diversity. USAID officials stated that the mentoring program includes a facilitated process for matching mentors and mentees, formal mentoring training, an established tracking system, and goals used to measure success. According to the officials, the mentoring program is open to all employees. <1.3. USAID Workforce Categories> USAID reported to Congress on its workforce categories in 2018. USAID defines its core workforce as those who have an employer-employee relationship with the agency. This includes the following employment categories: Civil Service employees. USAID's Civil Service employees are U.S. citizens. U.S. personal services contractors. U.S. personal services contractors are non-direct-hire U.S. citizens on contract for the specific services of those individuals. As we reported in 2017, USAID uses personal services contracts for a broad range of functions, such as program management, security analysis, and logistics. According to its staffing report to Congress, USAID had 1,015 U.S. personal services contractors at the end of fiscal year 2018. U.S. personal services contractors also represent a significant and growing proportion of USAID's workforce whose demographic composition is not included in USAID's Management Directive 715 reports. As we reported in 2017, USAID uses personal services contractors for a broader range of functions than other agencies, as its regulations permit (see GAO-17-610). Those regulations provide that personal services contractors who are U.S. citizens may be delegated or assigned any authority, duty, or responsibility that direct-hire government employees might have, although they generally cannot supervise direct-hire government employees or sign obligating documents except when specifically designated as a contracting officer. Until recently, when looking to fill a vacancy through outside hiring or by promotions and reassignments, USAID bureaus and offices had to submit that action to USAID's Hiring and Reassignment Review Board for review. The board's guidelines exempted personal services contracts from review and approval. In April 2020, USAID officials told us that hiring decisions no longer required the board's approval. From June 2016 to September 2018, U.S. personal services contractors were USAID's fastest growing workforce category, increasing from 759 to 1,015 according to USAID's staffing reports to Congress. During this period, USAID's Civil and Foreign Service employees decreased from 3,548 to 3,002. Foreign nationals. USAID's foreign national employees are non-U.S. citizens who are locally employed at posts abroad. They may be direct hires or personal services contractors. USAID uses foreign nationals to manage mission operations and oversee development activities. According to its staffing report to Congress, USAID had 4,712 foreign national employees at the end of fiscal year 2018. While USAID collects demographic data on U.S.
personal services contractors for its payroll processor, it does not analyze this information. USAID does not report these data, because USAID does not regard personal services contractors as U.S. government employees. USAID officials noted that current reporting requirements call only for demographics of direct-hire employees, which excludes a considerable portion of the agency s workforce. Other categories of staff not directly employed by USAID, including institutional support contractors and staff detailed from other organizations and U.S. government agencies, also perform a wide range of services in support of the agency s programs. According to its staffing report to Congress, USAID had 1,681 institutional support contractors at the end of fiscal year 2018. EEOC has determined that contractors are a vulnerable group because of confusion as to where such personnel should seek redress for EEO matters. However, according to OCRD officials, OCRD is responsible for EEO matters for both direct and non direct hires, including contractors. Figure 1 shows the total number of staff in each of USAID s workforce categories in fiscal year 2018. <1.4. National Finance Center Data on USAID Civil and Foreign Service Promotions, Fiscal Years 2002-2018> In fiscal year 2018, USAID had 2,964 full-time, permanent, career employees (i.e., direct-hire U.S.-citizens) in its Civil and Foreign Services, according to National Finance Center data. This number reflects an increase of more than 54 percent from fiscal year 2002. Figure 2 shows the numbers of full-time, permanent, career employees in USAID s Civil and Foreign Services in fiscal years 2002 through 2018. <1.4.1. Civil Service> USAID s Civil Service made up 44 percent of the agency s full-time, permanent, career workforce in fiscal year 2018. Civil Service employees are ranked in the GS classification system from GS-1 (lowest) to GS-15 (highest), followed by the executive rank. Civil Service promotions are filled through competitive procedures and noncompetitive career-ladder positions. To be eligible for a promotion, Civil Service candidates must meet minimum qualification standards such as fulfilling time-in-grade requirements and receiving sufficiently positive ratings on their most recent performance appraisals. For competitive promotion positions, USAID uses an automated system to evaluate and rate all eligible candidates and develop referral lists of employees eligible for the promotions. Officials interview all direct-hire USAID employees from the promotion referral lists and select employees for promotion on the basis of the announcement. Career-ladder positions are intended to prepare employees for successive, noncompetitive promotions up to the full performance of the positions. For career-ladder positions, USAID officials select employees for noncompetitive promotions and are responsible for developing individual learning and training plans, offering developmental work, and providing feedback regarding employees performance. Each year, USAID promotes varying numbers of Civil Service employees. Promotion generally becomes more competitive for higher ranks. For example, in fiscal year 2018, 45.3 percent of employees ranked GS-11 in fiscal year 2017 were promoted to GS-12, while 1.0 percent of employees ranked GS-15 in fiscal year 2017 were promoted to the executive rank. 
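The year-to-year promotion rates cited above can be derived directly from end-of-year personnel records. The sketch below shows one way such a rate could be computed in Python, consistent with the calculation described in appendix I; it is an illustration only, not USAID's or GAO's actual code, and the sample data and column names (employee_id, fiscal_year, grade) are hypothetical.

```python
# Minimal illustration (hypothetical data and column names): share of employees
# at a given grade in one fiscal year who hold the next-higher grade at the end
# of the following fiscal year.
import pandas as pd

# One row per employee per fiscal year, as of September 30.
records = pd.DataFrame({
    "employee_id": [1, 1, 2, 2, 3, 3, 4, 4],
    "fiscal_year": [2017, 2018, 2017, 2018, 2017, 2018, 2017, 2018],
    "grade":       [11, 12, 11, 11, 11, 12, 12, 13],
})

GRADE_ORDER = list(range(1, 16)) + ["EX"]  # GS-1 through GS-15, then executive rank

def promotion_rate(df, grade, year):
    """Share of employees at `grade` in `year` observed one grade higher in `year` + 1."""
    start = df[(df.fiscal_year == year) & (df.grade == grade)]
    next_year = df[df.fiscal_year == year + 1].set_index("employee_id")["grade"]
    next_grade = GRADE_ORDER[GRADE_ORDER.index(grade) + 1]
    promoted = start["employee_id"].map(next_year).eq(next_grade).sum()
    return promoted / len(start) if len(start) else float("nan")

print(promotion_rate(records, grade=11, year=2017))  # 2 of 3 GS-11s promoted -> ~0.67
```

In this toy example, two of the three employees observed at GS-11 in 2017 hold GS-12 at the end of 2018, so the computed rate is about 67 percent; employees who separate before the following year simply count as not promoted.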
Table 2 shows the number and percentage of employees in each Civil Service rank as well as the rate of promotion from each GS level for promotions effective in fiscal year 2018. <1.4.2. Foreign Service> Foreign Service employees made up 56 percent of USAID s full-time, permanent, career workforce in fiscal year 2018. Foreign Service officers enter at Class 4, 5, or 6, depending on their education and experience. Officers can be promoted from each level up to Class 1, after which they can apply for the executive rank. Foreign Service promotions are based on employee eligibility, a rank- ordered list prepared by a performance board, and the number of promotions authorized by USAID management. To be promoted to the next class, Foreign Service employees must meet eligibility requirements, such as time in their current class and overseas experience. Each year, performance boards evaluate the performance of eligible employees in Class 4 and higher, develop a rank-ordered list of employees recommended for promotion, and submit the list to HCTM. According to USAID policy, performance boards primarily consist of Foreign Service employees and, to the extent possible, include members of groups that are underrepresented in the service. The Chief Human Capital Officer, the Director of OCRD, and a representative of the American Foreign Service Association review the list before finalizing promotion decisions. USAID promotes varying numbers of its Foreign Service employees each year. Promotion generally becomes more competitive for higher ranks. For example, in fiscal year 2018, 33.2 percent of employees ranked Class 4 in fiscal year 2017 were promoted to Class 3, while 3.9 percent of employees ranked Class 1 in fiscal year 2017 were promoted to the executive rank. Table 3 shows the number and percentage of employees in each Foreign Service rank in fiscal year 2018 as well as the rate of promotion from each rank for promotions effective in that fiscal year. <1.5. USAID s Hiring Reassignment and Review Board> According to USAID s Chief Human Capital Officer, USAID established the Hiring and Reassignment Review Board (HRRB) in July 2017 as a mechanism to allow USAID to prioritize positions during the government- wide hiring freeze and a subsequent period when all USAID external hires required approval from the Secretary of State. In fiscal years 2017 through 2019, the HRRB met regularly and was responsible for prioritizing U.S. direct-hire positions, monitoring attrition levels, and identifying gaps in national security and other key positions. According to June 2019 guidelines, the HRRB was required to review certain hiring and reassignment actions. Such actions included filling vacancies externally by hiring individuals from outside the agency, using operating expense funding, and filling vacancies internally by reassigning operating expense funded Civil Service staff between the bureaus and independent offices. Hiring and reassignment actions exempted from HRRB review included, among others, hiring to compensate for attrition in certain defined high-risk mission-critical occupations, hiring into program- funded positions, Foreign Service limited appointments, personal services contracts, and institutional support contracts. 
According to USAID's strategic workforce plan for fiscal years 2019 through 2021, USAID planned to have the HRRB, the Office of the Administrator, HCTM, and the Bureau for Management set broader staffing levels for the agency's bureaus and independent offices beginning by the first quarter of fiscal year 2020. The workforce plan also states that a renamed HRRB would shift to serving as a strategic human capital governance board rather than performing position-by-position reviews. In April 2020, USAID officials told us that hiring decisions no longer required HRRB approval. <2. Diversity of USAID Workforce Has Generally Increased> <2.1. Overall Proportion of Racial or Ethnic Minorities Increased, although Proportion of African Americans Declined> <2.1.1. Overall Proportion of Racial or Ethnic Minorities at USAID Increased> From fiscal year 2002 to fiscal year 2018, the proportion of racial or ethnic minorities among USAID's full-time, permanent, career employees increased from 33 percent to 37 percent, as figure 3 shows. This increase in the proportion of racial or ethnic minorities at USAID overall was driven by an increase in the proportion of racial or ethnic minorities in the Foreign Service. During this period, the proportion of racial or ethnic minorities in the Civil Service decreased slightly, from 49 to 48 percent, and the proportion of racial or ethnic minorities in the Foreign Service increased from 18 to 27 percent. <2.1.2. Proportion of Racial or Ethnic Minorities Was Nearly the Same as in Federal Workforce and Higher Than in Relevant Civilian Labor Force> We compared the proportions of racial or ethnic minorities in USAID's workforce with those in the federal workforce and relevant civilian labor force. Our comparison of USAID workforce data for fiscal year 2018 with federal workforce data for fiscal year 2017 (the most recent available) found that the proportion of racial or ethnic minorities was 37 percent both at USAID and in the federal workforce. For more details, see appendix III. The proportion of racial or ethnic minorities at USAID increased from 33 percent in fiscal year 2002 to 37 percent in fiscal year 2018. In comparison, the proportion of racial or ethnic minorities in the federal workforce increased from 31 percent in fiscal year 2002 to 37 percent in fiscal year 2017. Our comparison of USAID workforce data from fiscal year 2018 with data for the relevant civilian labor force from 2006 through 2010 (the most recent available data) found larger proportions of racial or ethnic minorities at USAID than in the relevant civilian labor force for three occupational groups: (1) officials and managers, (2) professional workers, and (3) technical workers and technologists. For more details, see appendix III. <2.1.3. Proportions of Hispanics, Asians, and Other Racial or Ethnic Minorities Increased, while Proportion of African Americans Decreased> Although the overall proportion of racial or ethnic minorities at USAID increased from fiscal year 2002 to fiscal year 2018, the direction of change for specific racial or ethnic minority groups varied: the proportions of Hispanics, Asians, and other racial or ethnic minorities rose, while the proportion of African Americans fell. As figure 3 shows, from fiscal year 2002 to fiscal year 2018, the proportion of Hispanics at USAID rose from 3 to 6 percent; Asians, from 4 to 7 percent; and other racial or ethnic minorities, from 1 to 2 percent of USAID employees.
In contrast, during the same period the proportion of African Americans fell from 26 to 21 percent of the agency s employees. Our analysis found that the overall decline in the proportion of African Americans at USAID reflected a substantial decline in the proportion of African Americans in USAID s Civil Service. The proportion of African Americans in USAID s Civil Service decreased from 42 percent in fiscal year 2002 to 32 percent in fiscal year 2018. The proportion of African Americans in USAID s Foreign Service increased from 11 percent to 12 percent over the same period. In contrast to the proportion of African Americans, the proportions of Hispanics, Asians, and other racial or ethnic minorities at USAID increased in both the Civil and Foreign Services from fiscal year 2002 to fiscal year 2018. <2.1.4. Proportions of Racial or Ethnic Minorities in Civil and Foreign Services Were Generally Smaller in Higher Ranks> Our analysis of USAID data for fiscal year 2018 found that the proportions of racial or ethnic minority employees generally decreased as rank increased. As figure 4 shows, the proportions of racial or ethnic minorities in the Civil Service in fiscal year 2018 were progressively smaller in each rank above GS-12, except at the executive rank, where the proportion of racial or ethnic minorities was larger than in GS-15. Specifically, the proportions of racial or ethnic minorities decreased from 77 percent in GS-12 to 31 percent in GS-15. Our analysis similarly found that, in general, the proportions of racial or ethnic minorities in the Foreign Service in fiscal year 2018 were progressively smaller in all ranks above Class 6. In fiscal year 2002, the proportion of racial or ethnic minorities was also generally smaller at higher ranks in both the Civil and Foreign Services. <2.2. Overall Proportion of Women Increased> <2.2.1. Proportion of Women Increased Overall, Rising in Foreign Service While Declining in Civil Service> From fiscal year 2002 to fiscal year 2018, the proportion of women at USAID increased from 51 to 54 percent, as figure 5 shows. Our analysis found that the overall increase in the proportion of women at USAID reflected a growth in the proportion of women in the Foreign Service. Specifically: The proportion of women in the Civil Service decreased from 66 percent in fiscal year 2002 to 61 percent in fiscal year 2018. The proportion of women in the Foreign Service increased from 38 percent in fiscal year 2002 to 49 percent in fiscal year 2018. <2.2.2. Proportion of Women Was Higher Than in Federal Workforce but Mixed in Comparison with Relevant Civilian Labor Force> We compared the proportion of women at USAID with the proportions of women in the federal workforce and relevant civilian labor force. Our comparison of USAID workforce data for fiscal year 2018 with federal government workforce data for 2017 found the following: The proportion of women at USAID in fiscal year 2018 (54 percent) was higher than the proportion of women in the federal workforce in fiscal year 2017 (43 percent). The proportion of women at USAID increased from 51 percent in fiscal year 2002 to 54 percent in fiscal year 2018. In contrast, the proportion of women in the federal workforce decreased slightly, from 44 percent in fiscal year 2002 to 43 percent in fiscal year 2017. 
Our comparison of USAID workforce data for fiscal year 2018 with data from the relevant civilian labor force for 2006 through 2010 (the most recent available data) found that the proportions of women were higher at USAID than in the relevant civilian labor force for two occupational groups (1) officials and managers and (2) technical workers and technologists. However, the proportion of women was lower at USAID than in the relevant civilian labor force for professional workers. For more details, see appendix III. <2.2.3. Proportions of Women in Civil and Foreign Services Were Generally Smaller in Higher Ranks> As figure 6 shows, our analysis of USAID data for fiscal year 2018 for the Civil Service found progressively smaller proportions of women in each rank above GS-11. The proportions of women ranged from 75 percent in GS-11 or lower ranks to 43 percent in the executive rank. Additionally, data for fiscal year 2018 for the Foreign Service show overall smaller proportions of women in the higher ranks. Specifically, women made up 55 percent of employees in Class 4 or lower ranks but 48 percent of Foreign Service executives. In fiscal year 2002, the proportion of women was also generally smaller in higher ranks in both the Civil and Foreign Services. <3. Promotion Outcomes Were Lower for Racial or Ethnic Minorities Than Whites in Early to Mid Career, but Differences Were Generally Statistically Significant Only in Civil Service> Our analyses of USAID data on promotions in fiscal years 2002 through 2017 found lower promotion outcomes for racial or ethnic minorities than for whites in early to mid career. We found these differences when conducting descriptive analyses, which calculated simple average promotion rates, as well as adjusted analyses, which controlled for certain individual and occupational factors other than racial or ethnic minority status that could influence promotion. Promotion rates were generally lower for racial or ethnic minorities than for whites in both the Civil and Foreign Services, although the differences shown by our adjusted analyses were generally statistically significant only in the Civil Service. However, our analyses do not completely explain the reasons for differences in promotion outcomes, which may result from various unobservable factors. Thus, our analyses do not establish a causal relationship between demographic characteristics and promotion outcomes. <3.1. Civil Service Promotion Outcomes Were Lower for Racial or Ethnic Minorities Than for Whites in Early to Mid Career> Both our descriptive analysis and adjusted analysis of data for USAID s Civil Service found that promotion rates were lower for racial or ethnic minorities than for whites in early to mid career, as table 4 shows. In addition, our adjusted analysis found that racial or ethnic minorities in USAID s Civil Service had lower odds of promotion than their white counterparts. As table 4 shows, our descriptive analysis of the data for USAID s Civil Service found that the average percentage of racial or ethnic minorities promoted from ranks GS-11 through GS-14 was lower than the average percentage of whites promoted from the same ranks. For example, our descriptive analysis found that in fiscal years 2002 through 2017, an average of 38.9 percent of racial or ethnic minorities were promoted from GS-11 to GS-12, compared with an average of 69.9 percent of whites. 
This difference of 31.0 percentage points indicates that the average rate of promotion from GS-11 to GS-12 was 44.4 percent lower for racial or ethnic minorities than for whites. In addition, our analysis of yearly promotion rates in the Civil Service for fiscal years 2013 through 2017 showed that the rate of promotion from GS-11 and higher ranks was greater for whites than for racial or ethnic minorities for every rank and year, except for promotions from GS-15 to the executive class in fiscal years 2013, 2014, and 2016. However, our descriptive analysis does not account for the variety of factors besides racial or ethnic minority status, such as occupation, that may affect promotion rates. Our adjusted analysis of the data for USAID s Civil Service, controlling for certain factors other than racial or ethnic minority status that could influence promotion, found that racial or ethnic minorities had lower adjusted rates and lower odds of promotion from each rank from GS-11 through GS-14 than their white counterparts. Specifically, our adjusted analysis of USAID data on promotions in fiscal years 2002 through 2017 found the following: The average adjusted rate of promotion from GS-11 to GS-12 for racial or ethnic minorities was 46.8 percent, compared with an average of 55.8 percent for whites. This statistically significant difference indicates that the odds of promotion from GS-11 to GS-12 in the Civil Service were 41.4 percent lower for racial or ethnic minorities than for whites. Our estimates of the adjusted rates and odds of promotion from GS- 12 to GS-13 and from GS-13 to GS-14 were also statistically significantly lower for racial or ethnic minorities than for whites. There was no statistically significant difference in the odds of promotion from GS-14 to GS-15 or from GS-15 to the executive rank for racial or ethnic minorities relative to whites in the Civil Service. Compared with our descriptive analysis, our adjusted analysis found smaller percentage differences in promotion outcomes for racial or ethnic minorities relative to whites in the Civil Service. Figure 7 shows key results of our descriptive and adjusted analyses of USAID data for racial or ethnic minorities and whites in USAID s Civil Service. <3.2. Foreign Service Promotion Outcomes Were Lower for Racial or Ethnic Minorities in Early to Mid Career, but Differences Were Generally Not Statistically Significant When We Controlled for Various Factors> As table 5 shows, our descriptive analysis of data for USAID s Foreign Service found that the rate of promotion was generally lower for racial or ethnic minorities than for whites. In addition, our adjusted analysis found differences between the promotion rates for racial or ethnic minorities and those for whites. These differences were not statistically significant for promotions from Class 4 to Class 3, from Class 2 to Class 1, or from Class 1 to the executive rank. However, the differences between promotion rates for racial or ethnic minorities and whites were statistically significant for promotions from Class 3 to Class 2. As table 5 shows, our descriptive analysis of the data for USAID s Foreign Service found that for Class 4 and higher ranks, a lower average percentage of racial or ethnic minorities than of whites was promoted from each rank except Class 1. 
For example, our descriptive analysis found that in fiscal years 2002 through 2017, an average of 31.5 percent of racial or ethnic minorities were promoted from Class 4 to Class 3, compared with an average of 33.7 percent of whites. This difference of 2.2 percentage points indicates that the average rate of promotion from Class 4 to Class 3 was 6.4 percent lower for racial or ethnic minorities than for whites. However, our descriptive analysis does not account for the variety of factors besides racial or ethnic minority status, such as occupation, that may affect promotion rates. Our adjusted analysis of the data for USAID s Foreign Service, controlling for certain factors other than racial or ethnic minority status that could influence promotion, found that racial or ethnic minorities had lower adjusted rates and odds of promotion than their white counterparts but that these differences were generally not statistically significant. Specifically, our adjusted analysis of USAID data on promotions in fiscal years 2002 through 2017 found the following: On average, the adjusted rate of promotion from Class 3 to Class 2 for racial or ethnic minorities was 11.0 percent, compared with 13.1 percent for whites. This statistically significant difference indicates that the odds of promotion from Class 3 to Class 2 in the Foreign Service were 21.5 percent lower for racial or ethnic minorities than for whites. The adjusted rates and odds of promotion for racial or ethnic minorities relative to whites were also lower for promotion from Class 4 to Class 3 and from Class 2 to Class 1 and were higher for promotion from Class 1 to the executive rank, but these differences were not statistically significant at the 95 percent confidence level. That is, we could not conclude that there was a statistical relationship between racial or ethnic minority status and promotion from these ranks. Compared with our descriptive analysis, our adjusted analysis found a larger percentage difference in promotion outcomes at all levels from Class 4 to the executive rank for racial or ethnic minorities relative to whites. Figure 8 shows key results of our descriptive and adjusted analyses of USAID data for racial or ethnic minorities and whites in the Foreign Service. <4. Differences in Promotion Outcomes for Women and Men Were Generally Not Statistically Significant> Our analyses of USAID data on promotions in fiscal years 2002 through 2017 found differences between promotion outcomes for women relative to men, but these differences were generally not statistically significant. We found these differences when conducting descriptive analyses, which calculated simple average promotion rates, as well as adjusted analyses, which controlled for certain individual and occupational factors other than gender that could influence promotion. In particular, we found that average promotion rates for women in the Civil Service varied relative to men, but the differences were not statistically significant. In the Foreign Service, average promotion rates varied for women relative to men, but these differences were statistically significant only for promotion from Class 4 to Class 3. Our analyses do not completely explain the reasons for differences in promotion outcomes, which may result from various unobservable factors. Thus, our analyses do not establish a causal relationship between demographic characteristics and promotion outcomes. <4.1. 
Civil Service Average Promotion Rates Varied for Women Relative to Men, but Differences in Outcomes Were Not Statistically Significant When We Controlled for Various Factors> As table 6 shows, our descriptive analysis of USAID data on promotions in fiscal years 2002 through 2017 found that the rate of promotion in USAID s Civil Service was generally lower for women than for men at GS- 13 and lower ranks. However, our adjusted analysis did not find any statistically significant differences in the rates or odds of promotion for women relative to men in the Civil Service. Our descriptive analysis of the data for USAID s Civil Service found that the average percentage of women promoted from GS-11 through GS-13 was lower than the average percentage of men. For example, our descriptive analysis found that in fiscal years 2002 through 2017, an average of 47.4 percent of women were promoted from GS-11 to GS-12, compared with an average of 58.7 percent of men. This difference of 11.3 percentage points indicates that the average rate of promotion from GS- 11 to GS-12 was 19.3 percent lower for women than for men. However, our descriptive analysis does not account for the variety of factors besides gender (e.g., occupation) that may affect promotion rates. Our adjusted analysis of the USAID data, controlling for certain factors other than gender that could influence promotion, found no statistically significant differences in the rates or odds of promotion for women compared with men in the Civil Service. Specifically, the adjusted analysis for promotions in fiscal years 2002 through 2017 found the following: The adjusted rates and odds of promotion from GS-12 to GS-13, from GS-13 to GS-14, and from GS-14 to GS-15 were lower for women than for men. Our estimates of the odds of promotion from GS-11 to GS-12 and from GS-15 to the executive rank were higher for women than for men. In all cases, we found no statistically significant differences at the 95 percent confidence level in the odds of promotion from any rank for women relative to men in the Civil Service. That is, we could not conclude that there was a statistical relationship between gender and promotion from these ranks. Figure 9 shows key results of our descriptive and adjusted analyses of USAID data for men and women in USAID s Civil Service. <4.2. Foreign Service Average Promotion Rates Were Generally Higher for Women Than Men, but Differences in Outcomes Were Generally Not Statistically Significant When We Controlled for Various Factors> Our descriptive and adjusted analyses of data on promotions in fiscal years 2002 through 2017 for USAID s Foreign Service both found that the rate and odds of promotion were generally higher for women than for men, as table 7 shows. Our descriptive analysis of the data for USAID s Foreign Service found that higher average percentages of women, relative to men, were promoted from Class 4 to Class 3, from Class 2 to Class 1, and from Class 1 to the executive rank. For example, our descriptive analysis found that in fiscal years 2002 through 2017, an average of 33.9 percent of women were promoted from Class 4 to Class 3, compared with an average of 32.2 percent of men. This 1.7 percentage point difference indicates that the average rate of promotion from Class 4 to Class 3 was 5.2 percent higher for women than for men. However, our descriptive analysis does not account for the variety of factors besides gender (e.g., occupation) that may affect promotion rates. 
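The sketch below illustrates the arithmetic conventions used in the descriptive and adjusted comparisons above: converting a percentage-point gap into a relative ("percent lower" or "percent higher") difference, and converting rates into odds. It is a worked illustration of how the reported figures are expressed, not GAO's analysis code; the 30 and 35 percent adjusted rates in the odds example are hypothetical.

```python
# Worked illustration (not GAO's analysis code) of how differences in promotion
# outcomes are expressed in this report.

def relative_difference(rate_a, rate_b):
    """Relative difference of rate_a versus rate_b, used for 'X percent lower/higher'."""
    return (rate_a - rate_b) / rate_b

# Descriptive example cited above: 47.4 percent of women versus 58.7 percent of
# men promoted from GS-11 to GS-12 is an 11.3-percentage-point gap, i.e., a rate
# about 19.3 percent lower for women.
print(round(100 * relative_difference(0.474, 0.587), 1))  # -19.3

def odds(rate):
    """Convert a promotion rate (a probability) into odds."""
    return rate / (1.0 - rate)

# An odds ratio compares two groups' odds. With hypothetical adjusted rates of
# 30 and 35 percent, the odds are about 0.43 and 0.54, giving an odds ratio near
# 0.8, i.e., odds roughly 20 percent lower for the first group.
print(round(odds(0.30) / odds(0.35), 2))  # 0.8
# Note: the odds ratios reported here are estimated from the logit model itself,
# so they need not equal a ratio computed from the published adjusted rates.
```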
Our adjusted analysis of the data for USAID s Foreign Service, controlling for certain factors other than gender that could influence promotion, found that the adjusted rates and odds of promotion varied for women relative to men in the Foreign Service. Specifically, our adjusted analysis of data on promotions in fiscal years 2002 through 2017 found the following: On average, the adjusted rate of promotion from Class 4 to Class 3 for women in the Foreign Service was 34.7 percent, compared with 31.5 percent for men. This statistically significant difference indicates that the odds of promotion from Class 4 to Class 3 were 20.2 percent higher for women than for men. While the adjusted rates of promotion from Class 3 to Class 2 and from Class 1 to the executive rank were lower for women than for men, there were no statistically significant differences in the odds of promotion from these ranks for women relative to men in the Foreign Service. Thus, we could not conclude that there was a statistical relationship between gender and promotion from these ranks. Compared with the descriptive analysis, our adjusted analysis found a smaller percentage difference in promotion outcomes from Class 3 to Class 2 and from Class 2 to Class 1 for women relative to men. Our adjusted analysis also found positive, rather than negative, percentage differences in promotion outcomes from Class 4 to Class 3 and from Class 2 to Class 1 for women relative to men. Figure 10 displays key results of our descriptive and adjusted analyses of USAID data for men and women in USAID s Foreign Service. <5. USAID Has Identified Underrepresentation of Specific Groups in Its Workforce but Has Not Carried Out Required EEO Activities> USAID has determined that specific groups, such as Hispanics and African Americans, are underrepresented in its workforce, and the agency has a strategic plan that identifies goals, activities, and measures to support diversity and inclusion. However, staffing gaps stemming in part from a lack of leadership attention have prevented OCRD from conducting required equal employment opportunity functions. Specifically, staffing gaps have prevented OCRD from responding to EEO complaints within mandated timeframes, analyzing USAID s workforce for trends and potential barriers to equal employment, and completing the annual MD- 715 reports on the agency s diversity efforts. <5.1. USAID Has Identified Underrepresentation of Specific Groups and Developed a Diversity and Inclusion Strategic Plan> <5.1.1. USAID Has Identified Underrepresentation of Specific Groups in Its Workforce> USAID has identified specific groups that are underrepresented in its workforce relative to the national civilian labor force. In each of its MD- 715 reports to EEOC for fiscal years 2013 through 2017, USAID identified the following groups as being underrepresented in its workforce: (1) Hispanic males and females in both the Civil Service and the Foreign Service; (2) individuals with a targeted disability; and (3) Hispanic, African American, and Asian American males and females in certain major occupations in areas such as health, contracting, program or project development, auditing, and management and program analysis. According to USAID officials, these groups remain underrepresented in USAID s workforce. In fiscal years 2010 through 2016, USAID completed analyses intended to identify barriers that could contribute to underrepresentation of specific groups and other diversity issues and described such barriers in its MD- 715 reports. 
For example, in its report for fiscal year 2010, USAID stated that its recruitment and outreach efforts had failed to attract a representative pool of qualified applicants. In its report for fiscal year 2011, USAID stated that it had no executive development program to prepare employees to enter the senior executive service. In its report for fiscal year 2016, USAID reported on three barrier analyses examining the underrepresentation of, respectively, Hispanics; people with targeted disabilities; and African Americans, Asian Americans, and Hispanics in major occupations. Additional diversity issues may exist at USAID. For example, in 2014, EEOC found that black and Asian females may encounter barriers to equal employment when attempting to enter USAID's Senior Foreign Service. Further, representatives from 10 of 11 employee resource groups told us that they believed members of their communities have fewer career prospects at USAID than members of other USAID communities. <5.1.2. USAID Developed a Diversity and Inclusion Strategic Plan> USAID outlined planned efforts to support diversity and inclusion in its June 2016 Human Resource Transformation Strategy and Action Plan, 2016-2021 (HR Transformation Strategy) as well as its 2017 Diversity and Inclusion Strategic Plan. According to the HR Transformation Strategy, USAID envisioned an environment in which diversity recruiting is targeted and strategic, selection bias does not prevent diverse candidates from being hired, all staff and supervisors are trained regularly in diversity and inclusion topics, and agency leaders incorporate diversity into staffing decisions. The HR Transformation Strategy included an objective focused on diversity and inclusion, with planned activities to work toward this goal. The 2017 Diversity and Inclusion Strategic Plan was developed concurrently with, and folded into, the HR Transformation Strategy's diversity and inclusion objective. Shortly into the first year of the HR Transformation Strategy implementation, USAID narrowed its scope and suspended the diversity and inclusion objective. USAID's 2017 Diversity and Inclusion Strategic Plan identifies three goals: (1) diversify the federal workforce through active engagement of leadership, (2) include and engage everyone in the workplace, and (3) optimize inclusive diversity efforts using a data-driven approach. The plan also identifies priorities, activities, and measures intended to meet USAID's diversity goals, several of which cite, and overlap with, the original diversity- and inclusion-related elements of the HR Transformation Strategy. HCTM and OCRD officials indicated that the Diversity and Inclusion Strategic Plan includes some of the areas that would no longer be addressed through the HR Transformation Strategy. In addition, the officials noted that USAID has implemented some aspects of the plan. For example, according to the officials, its employee resource groups have participated in various outreach and recruitment events, as called for by the plan. HCTM and OCRD officials told us that USAID was drafting an update to the Diversity and Inclusion Strategic Plan, which it aimed to finish in June 2020. The officials stated that, although OCRD and HCTM will remain the plan's primary implementers, the new plan will give USAID bureaus and offices more responsibility for diversity and inclusion activities. Additionally, the officials stated that working groups from USAID's employee resource groups had begun reviewing the draft.
The officials stated that OCRD expected to submit the draft to the Executive Diversity Council for comment after these reviews. <5.2. Staffing Gaps Have Prevented USAID from Responding to EEO Complaints in a Timely Manner, Analyzing Its Workforce, and Reporting on Diversity Efforts> OCRD has faced persistent staffing gaps stemming in part from a lack of management attention to the agency s EEO programs. Moreover, the office has experienced turnover among its directors. OCRD officials stated that the staffing gaps and turnover challenges have prevented the office from completing required EEO functions. As figure 11 shows, the number of OCRD s filled positions has consistently been less than its allocation. According to OCRD and EEOC officials, the office needs to fill its allocated positions to effectively perform its duties and responsibilities. These staffing gaps generally correspond to times when USAID reported that OCRD could not perform EEO investigations within mandated timeframes, conduct barrier analyses of the agency s workforce, or complete an MD-715 report. <5.2.1. OCRD Faces Staffing Gaps> OCRD cannot effectively perform its duties and responsibilities without sufficient staff, according to OCRD officials. Federal equal employment regulations require federal agencies to provide sufficient resources to their EEO programs to ensure efficient and successful operation. However, as table 8 shows, OCRD has faced staffing gaps since fiscal year 2010. According to USAID officials, vacancies have a greater effect on smaller offices such as OCRD, where fewer staff are available to take on the resulting extra work. The officials said that this can in turn affect morale, which can increase staff turnover. Such turnover is observable in USAID s employee data showing the number of employees and new hires in OCRD. Specifically, while OCRD added new hires to the office each fiscal year, the number of filled positions generally stayed the same or decreased. For example, the number of filled positions in OCRD decreased from 10 to nine in fiscal year 2016, despite the addition of a new hire. Similarly, in fiscal years 2017 and 2018, OCRD s filled positions remained constant at 10 despite four new hires during that period. As a result, OCRD s vacancy rate remained near or above 30 percent from October 2015 through April 2020. EEOC similarly noted OCRD s insufficient staffing in compliance letters to USAID in 2017 and 2019. In both letters, EEOC outlined its expectation that USAID establish a plan to allocate sufficient resources to its EEO program and demonstrate meaningful progress toward correcting this deficiency. USAID officials stated that these staffing gaps have limited OCRD s capacity to carry out required EEO functions. For example, in November 2019, most of OCRD s divisions had vacancy rates of 50 percent or more. At that time, all three allocated positions in the Reasonable Accommodation Division and five of six positions in the Diversity and Inclusion Division were vacant. In February 2020, OCRD officials reported that the division s Affirmative Employment Program had no staff to implement the MD-715 report for fiscal year 2019. Additionally, OCRD reported that the Complaints and Resolution Division s Anti-Harassment Program continued to receive cases while working through backlogs. Without sufficient staff, OCRD is unable to effectively perform its duties and responsibilities, according to OCRD officials. 
As part of its response to EEOC s October 2019 compliance letter, USAID increased the number of positions approved for OCRD to 24. However, the office has struggled to fill those positions. HCTM and OCRD officials stated that, although they are working to resolve the staffing gaps in OCRD, high demand for staff with the specialized skills OCRD requires, as well as unexpected recent turnover in OCRD due to illness and retirement, have hindered this effort. According to USAID officials, long security clearance processes also caused several candidates to withdraw from the hiring process when they found other employment. As table 9 shows, OCRD continued to have staffing gaps of 30 to 50 percent in April 2020. Representatives from nine of the 13 USAID employee groups we spoke with echoed the concern that OCRD lacked sufficient staffing resources to do its job effectively. For example, one group attributed OCRD s lack of responsiveness to information requests to a lack of sufficient staffing resources. Another group said that there was an implicit understanding in USAID that OCRD had to prioritize reacting to negative events rather than undertaking proactive efforts to increase diversity. Without sufficient staffing resources, USAID will lack the capacity to perform required functions such as responding to EEO complaints, analyzing demographic data, or completing annual MD-715 reports. <5.2.2. USAID Has Not Responded to EEO Complaints in a Timely Manner> According to EEOC MD-715 instructions to federal agencies, model EEO programs must have sufficient budget and staffing to support the success of the EEO program, including sufficient staffing to ensure thorough and fair processing of EEO complaints in a timely manner. According to USAID, a lack of staffing resources has prevented the agency from meeting required time frames for EEO investigations. In four of its six MD- 715 submissions for fiscal years 2010 through 2018, USAID reported that it did not have sufficient staffing to implement a successful complaint process. In recent years, USAID has consistently reported being unable to complete EEO counseling, EEO investigations, or final agency decisions on EEO complaints in a timely manner, as required by federal equal employment regulations. For example, in fiscal year 2013 and fiscal years 2015 through 2019, USAID reported being unable to complete EEO investigations within prescribed time frames. Further, in an October 2019 compliance letter, EEOC stated that in fiscal year 2018, USAID completed 67 percent of EEO counseling, 14 percent of EEO investigations, and none of the final agency decisions in a timely manner. As table 10 shows, USAID reported that it did not complete any stages of the EEO complaints response process in a timely manner for fiscal years 2016 through 2019, with the exception of EEO counseling in fiscal year 2016. In fiscal year 2019, the agency continued to lack sufficient funding and qualified staffing to process EEO complaints in a timely, thorough, and fair manner, according to USAID documentation. Representatives of three USAID employee groups also stated that OCRD lacked the capacity to address EEO issues in a timely manner and attributed this to understaffing. Representatives of the first group said that, at a certain point, USAID had a single EEO investigator for the entire agency and that investigations took more than a year. Representatives of the second group stated that because OCRD was short-staffed, it had a backlog of complaints of harassment and bullying. 
Representatives of the third group said that they had observed the reasonable-accommodation process taking longer than a year. They speculated that this had resulted from USAID s assigning the handling of reasonable-accommodation requests across the worldwide portfolio to a single person. According to USAID, OCRD has made progress in reducing complaint backlogs. In February 2020, OCRD officials said that the timeliness requirement had been met for the EEO complaint process and that the office no longer had a backlog of complaints. However, OCRD officials said that backlogs remained in processing anti-harassment cases. Further, the officials said that the Reasonable Accommodation Program continued to be affected by a lack of staff. In an April 2020 compliance letter to EEOC, USAID reported that OCRD had developed metrics and new internal procedures for complaint processing. The letter further stated that thus far in fiscal year 2020, OCRD had been 100 percent timely with EEO counseling, EEO investigations, and final agency decisions. While USAID has noted recent improvement in its ability to conduct timely EEO counseling and investigations, without the capacity to consistently perform these functions, USAID cannot meet mandated timeframes for responding to EEO complaints and risks being unable to achieve its goal of a diverse and inclusive workforce environment. <5.2.3. USAID Is Unable to Perform Analyses of Its Demographic Data> According to EEOC MD-715 instructions to federal agencies, model EEO programs must have sufficient budget and staffing to, among other things, conduct self-assessments of possible program deficiencies and conduct thorough barrier analyses of their agency s workforce. Although USAID has previously completed barrier analyses of its workforce, the agency reported insufficient personnel resources to conduct annual agency self- assessments and self-analyses for its MD-715 submissions for fiscal years 2010, 2013, 2016, and 2017. For example, the fiscal year 2017 MD-715 report stated that USAID did not conduct trend analyses of the effects of management or personnel policies, procedures, and practices and that the agency lacked sufficient resources to enable it to conduct a thorough barrier analysis of its workforce. According to USAID officials, OCRD lost its staff member assigned to manage barrier analyses and was unable to fill that position during the hiring freeze. Further, OCRD continues to lack sufficient personnel to conduct barrier analyses. In November 2019, OCRD s Diversity and Inclusion Division consisted of one supervisor and five vacant positions. Despite subsequent efforts to hire more staff, OCRD reported in February 2020 that it still lacked staff to perform its data analysis responsibilities. EEOC officials expressed concern regarding OCRD s lack of capacity to analyze and address diversity issues. For example, EEOC officials said that USAID had not adequately used applicant flow data to identify potential barriers in fiscal year 2017. Despite having collected applicant data, USAID did not submit applicant flow data as part of its MD-715 submission for fiscal year 2017, the most recent year for which it submitted this report. According to the EEOC officials, OCRD told them that it lacked staff with sufficient technical expertise to conduct a barrier analysis of these data. Without the capacity to perform self-analysis, USAID is unable to proactively identify and address barriers to diversity in its workforce. <5.2.4. 
USAID Is Unable to Consistently Submit the Annual MD-715 Report> EEOC MD-715 requires federal agencies to submit their MD-715 reports to the EEOC annually. The report is due by February 28 following the end of the fiscal year that is being reported, although EEOC has the discretion to grant extensions. However, OCRD did not complete the MD-715 report in fiscal years 2011 or 2012 and has not submitted an MD-715 report for fiscal year 2018. Despite being granted submission extensions, USAID had not submitted its MD-715 report for fiscal year 2018 by the certification deadline of September 30, 2019, according to EEOC s October 2019 compliance letter. The letter stated that EEOC expected USAID to submit the MD-715 report for fiscal year 2018 and to ensure that the MD-715 report for fiscal year 2019 would be submitted by the deadline of February 28, 2020. In November 2019, USAID officials told us that OCRD lacked the staff needed to complete the fiscal year 2018 MD- 715 report by this deadline and therefore intended to concentrate on submitting a report for fiscal year 2019. However, in February 2020, USAID officials told us that OCRD s Affirmative Employment Program continued to lack any staff to monitor and implement the MD-715 effort. In April 2020, USAID officials reported that they were using a contractor to complete the fiscal year 2019 MD-715 report. Without OCRD capacity to submit required reports on the agency s diversity and inclusion efforts, USAID leadership will lack sufficient insight into the EEO program to ensure that its activities meet agency goals. Furthermore, inconsistent reporting could hamper EEOC s oversight of USAID s EEO programs. <5.3. Lack of USAID Leadership Attention Has Contributed to OCRD s Staffing Gaps> OCRD s staffing gaps stem in part from a lack of leadership attention to USAID s equal employment opportunity programs at both the office and agency levels. We have previously identified top leadership commitment as a leading practice for diversity management. Leaders and managers within organizations are primarily responsible for the success of diversity management, because they must provide the visibility and commit the time and necessary resources. Both USAID and EEOC officials attributed OCRD s staffing problems to frequent management turnover within OCRD. According to information provided by USAID officials, OCRD has had five directors (permanent and acting) since 2013. USAID officials stated that this turnover made it difficult for any director to provide sufficient office-level leadership attention to sustain efforts to improve OCRD s capacity. EEOC officials also expressed concern regarding this level of director turnover and asserted that without consistent office leadership that could effectively advocate for scarce personnel resources within USAID, OCRD would continue to face staffing shortages. EEOC officials said that OCRD could not draw sufficient attention from senior USAID leadership without a permanent director. According to EEOC MD-715 instructions to federal agencies, model EEO programs have a reporting structure for the EEO program that provides the principal EEO official with appropriate authority and resources to effectively carry out a successful EEO program. This includes, but is not limited to, an annual State of the Agency briefing given by the EEO Director (in USAID s case, the Director of OCRD) to the agency head and other senior management officials after the submission of a MD-715 report. 
According to MD-715 instructions to federal agencies, the briefing must thoroughly cover all components of the agency s MD-715 report, including an assessment of the agency s performance in each of the six elements of a model EEO program, as well as a report on the agency s progress in completing its barrier analysis. However, OCRD has not presented a State of the Agency briefing to the head of USAID and other senior leadership for 3 consecutive fiscal years. In April 2020, OCRD officials told us that the office planned to provide the briefing to USAID s Executive Diversity Council once the MD-715 for fiscal year 2019 was completed, which they anticipated would occur in May 2020. HCTM and OCRD officials also told us that since receiving the EEOC s October 2019 compliance letter, senior USAID leadership had been more engaged than previously. Without senior USAID leadership attention to diversity, OCRD will continue to lack the staffing resources necessary to build its capacity to support USAID s diversity and inclusion efforts as well as operate an effective and efficient EEO program. <6. Conclusions> Although USAID has made some progress in increasing representation of diverse groups in its Civil and Foreign Service workforces, continued underrepresentation and generally lower promotion outcomes for racial or ethnic minorities suggest that additional efforts are needed. Addressing these issues requires an effective and efficient EEO program. However, OCRD, which operates the agency s EEO program, is currently unable to perform its key functions because of significant staffing gaps and turnover. USAID s recent efforts to fill staff vacancies within various OCRD divisions could help increase OCRD s capacity to perform its required EEO functions. However, such capacity will not be fully demonstrated until OCRD can consistently ensure timely processing of EEO complaints and investigations, regular analysis of workforce demographics for trends, and regular submission of required MD-715 reports. Further, sustained attention to diversity efforts from USAID s senior leadership would help ensure that OCRD has the capacity to perform its required EEO functions. Without capacity to perform these functions, USAID cannot consistently respond to allegations of discrimination in a timely manner, identify potential barriers to equal employment opportunity, or maintain accountability for the progress of its diversity and inclusion efforts. <7. Recommendations for Executive Action> We are making the following four recommendations to USAID: 1. The USAID Administrator should ensure that OCRD consistently responds to EEO complaints in a timely manner. (Recommendation 1) 2. The USAID Administrator should ensure that OCRD consistently analyzes USAID workforce demographic data for trends and potential barriers to equal employment opportunity. (Recommendation 2) 3. The USAID Administrator should ensure that OCRD submits required MD-715 reports to EEOC in a timely manner. (Recommendation 3) 4. The USAID Administrator should demonstrate senior leadership attention to diversity by ensuring that OCRD has the capacity to perform required EEO functions. (Recommendation 4) <8. Agency Comments> We provided a draft of this report to USAID, EEOC, and OPM for comment. USAID provided comments, which we have reproduced in appendix XV. EEOC and OPM stated they did not have comments. In its comments, USAID concurred with our four recommendations and described actions planned or underway to address them. 
For example, in response to recommendations 2 and 3, USAID stated that it is in the process of establishing an Affirmative Employment Program in OCRD to, among other things, analyze and report on workforce data and prepare and submit the agency s annual MD-715 Report. USAID indicated that it expects to finish implementing actions addressing our EEO-related recommendations in 2020. We believe that, to demonstrate consistent capacity to perform its EEO functions, USAID will need to successfully complete these functions for at least two consecutive cycles. We are sending copies of this report to the appropriate congressional committees, the Administrator of USAID, the Chair of EEOC, and the Director of OPM. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6881 or at bairj@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix XVI. Appendix I: Objectives, Scope, and Methodology This report examines (1) the demographic composition of USAID s workforce in fiscal years 2002 through 2018, (2) differences in promotion outcomes for racial or ethnic groups in USAID s workforce, (3) differences in promotion outcomes for men and women in USAID s workforce, and (4) the extent to which USAID has identified workforce diversity issues and worked to address them. For this report, we analyzed National Finance Center data on USAID s full-time, permanent, career workforce (direct-hire U.S. citizen Civil and Foreign Service employees) for fiscal years 2002 through 2018. For each fiscal year, we analyzed record-level status data for USAID s employees as of September 30 (the end of the fiscal year). This included demographic and administrative data for each employee, such as race, ethnicity, gender, grade or class, age, date of entry to USAID, years of service, veteran s status, occupation, location or duty station, and the employee s unique identifier. We also analyzed record-level dynamic data that included personnel actions, such as promotions or separations. In addition, we obtained Post (Hardship) Differential Percentage of Basic Compensation data from the Department of State s website for fiscal years 2002 through 2018. Following guidance from the U.S. Equal Employment Opportunity Commission, we used data for nine federal job categories and their correspondence to specific occupation codes to match federal job categories to the occupations of USAID s employees. We assessed the reliability of these data sets and of other data critical to our analyses through documentation review, electronic testing, and interviews with knowledgeable agency officials. We determined that these data were sufficiently reliable for our purposes. To examine the demographic composition of USAID s workforce over time, we analyzed National Finance Center data for USAID s full-time, permanent, career workforce for fiscal years 2002 through 2018. For each year, we calculated the demographic composition of the workforce by racial or ethnic group and by gender for USAID overall and for USAID s Civil and Foreign Services. 
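To make the composition calculation described above concrete, the following is a minimal sketch in Python of how end-of-fiscal-year personnel records could be summarized into group shares. The column names (fiscal_year, service, employee_id, and the demographic grouping column) are illustrative assumptions for this sketch, not the actual National Finance Center data layout.

import pandas as pd

# Minimal sketch: share of employees in each demographic group, by fiscal year
# and service, from end-of-fiscal-year status records. Column names are
# illustrative assumptions, not the actual National Finance Center schema.
def composition_by_group(status_records: pd.DataFrame, group_col: str) -> pd.DataFrame:
    counts = (status_records
              .groupby(["fiscal_year", "service", group_col])["employee_id"]
              .nunique()
              .rename("employees")
              .reset_index())
    totals = counts.groupby(["fiscal_year", "service"])["employees"].transform("sum")
    counts["share"] = counts["employees"] / totals
    return counts

# Example usage (hypothetical data frame of September 30 snapshot records):
# racial_ethnic_mix = composition_by_group(status_records, "race_ethnicity")
# gender_mix = composition_by_group(status_records, "gender")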
We also analyzed these numbers and percentages by occupation and rank, including General Service (GS) grade for the Civil Service, salary class for the Foreign Service, and executive rank (i.e., Senior Executive Service or Senior Foreign Service). We excluded political appointees and Office of Inspector General employees from our overall analysis because, according to agency officials, USAID s Office of Human Capital and Talent Management does not have authority over these hires. We also compared the demographics of USAID s workforce in fiscal year 2018 with the most recent available data on demographics of (1) the federal workforce, as reported by the Office of Personnel Management (OPM), and (2) the relevant civilian labor force, from the Census Bureau s equal employment opportunity (EEO) tabulation. Because of USAID s involvement in disability-related litigation during the course of this engagement, we did not analyze the numbers and percentages of employees with disabilities. Additionally, because the National Finance Center data we used did not include information about employees sexual orientation, we were unable to analyze the data on that basis. For the purposes of our report, racial or ethnic minorities exclude non- Hispanic whites; Hispanics include Hispanics of all races; and the remaining non-Hispanic racial or ethnic groups include white, African American, Asian, and other. Our analysis for the category we report as other includes non-Hispanics identified as American Indian or Alaskan Native, Native Hawaiian or other Pacific Islander, and individuals identifying as two or more races. For instances where an employee s reported racial, ethnic, or gender category changed, we assigned the most recently recorded category to all available years. To examine promotion outcomes for racial or ethnic minorities and women in USAID s workforce, we conducted two types of analyses descriptive and adjusted using USAID s National Finance Center data for its full-time, permanent, career workforce in fiscal years 2002 through 2018. For both analyses, we considered promotion to be an increase in rank between 2 consecutive fiscal years. We included in these analyses all individuals in the original rank and did not distinguish between individuals who did or did not apply for promotion or who were eligible or ineligible. We conducted a descriptive analysis of USAID data, comparing annual promotion rates for racial or ethnic minorities and whites and for women and men. For each rank and fiscal year, we calculated these rates as the number of newly elevated employees in the next-higher rank in the following fiscal year divided by the number of employees in the given rank in the current year. We conducted adjusted analysis using a multivariate statistical method (i.e., duration analysis), which accounted for certain individual and occupational factors other than racial or ethnic minority status and gender that could influence promotion. Specifically, we used a discrete-time multivariate statistical logit model to analyze the number of yearly cycles it took to be promoted up to the executive level from GS-11 in the Civil Service and from Class 4 in the Foreign Service. We examined the statistical relationship between promotion and racial or ethnic minority status and gender, including adjusted promotion rates, odds ratios, and percentage differences in relative odds of promotion. 
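The two analyses described above, the descriptive promotion rates and the discrete-time duration (logit) model, can be illustrated with the following Python sketch. The column names, formula, and set of controls shown here are simplified assumptions for illustration only; GAO's actual models include the fuller set of controls described in this appendix.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Descriptive rate: newly elevated employees in the next-higher rank in the
# following fiscal year, divided by employees in the given rank in the current year.
def promotion_rates(person_years: pd.DataFrame) -> pd.DataFrame:
    rates = (person_years
             .groupby(["fiscal_year", "rank"])
             .agg(at_risk=("employee_id", "nunique"),
                  promoted=("promoted_next_year", "sum"))
             .reset_index())
    rates["promotion_rate"] = rates["promoted"] / rates["at_risk"]
    return rates

# Discrete-time duration analysis: a logit model on person-year records in which
# the outcome is promotion in that yearly cycle, controlling for time already
# spent in rank and other covariates (a simplified subset is shown here).
def fit_duration_logit(person_years: pd.DataFrame):
    formula = ("promoted_next_year ~ minority + woman + years_in_rank "
               "+ age_at_entry + prior_govt_service + veterans_preference "
               "+ C(occupation) + C(fiscal_year)")
    return smf.logit(formula, data=person_years).fit(disp=False)

# Odds ratios are the exponentiated coefficients:
# result = fit_duration_logit(person_years)
# odds_ratios = np.exp(result.params)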
Because a variety of factors besides racial or ethnic minority status and gender may influence promotion outcomes, we incorporated various individual and position-specific characteristics in our regression models to control for other potential factors. These included an employee s (1) time in each rank before promotion; (2) years of prior federal government experience; (3) age when entering USAID; (4) receipt of veterans preference points; (5) having transferred between the Civil and Foreign Services; (6) having worked overseas in the previous year (for the Foreign Service); (7) having worked in at a location where the hardship differential was 20 percent or more in the previous year (Foreign Service only); (8) proficiency in two or more languages other than English (Foreign Service only); and (9) occupation as well as (10) fiscal years. We identified these attributes as being relevant to promotion by reviewing relevant literature and interviewing agency officials. Our primary model was a pooled model that included all employees whose records we used to determine summary statistics for USAID s full-time, permanent, career workforce in fiscal years 2002 through 2018. Additionally, we conducted a number of sensitivity analyses, such as examining the robustness of our models to the inclusion of various sets of control variables (see app. XIII) and applying the multivariate statistical method for various permutations of racial or ethnic minority status (see app. XIV). <9. USAID s Identification of Diversity Issues> To examine the extent to which USAID has identified workforce diversity issues and worked to address them, we reviewed all annual Management Directive 715 reports that it submitted to EEOC from fiscal year 2011 through fiscal year 2019. We also reviewed policies, guidance, and other USAID documentation related to diversity. Additionally, we met with relevant USAID officials from the Office of Civil Rights and Diversity and the Office of Human Capital and Talent Management as well as officials from EEOC. We also conducted interviews with representatives of 13 employee groups representing current employees in USAID s Civil and Foreign Services to obtain their perspectives on diversity efforts at USAID. These groups included two unions: the Association of Federal Government Employees and the American Foreign Service Association. The 13 groups also included 11 employee resource groups: Arab- Americans in Foreign Affairs Agencies, the Asian Pacific American Employees Committee, Blacks in Government, Employees with Disabilities, Gender and Sexual Minorities, the Hispanic Employees Council of Foreign Affairs Agencies, the Jewish Affinity Group, the Native Americans in Foreign Affairs Council, the Personal Services Contractor Association, the USAID Muslims Employee Resource Group, and Women@AID. We conducted this performance audit from October 2018 to June 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: USAID Workforce Data, Fiscal Years 2002-2018 The following figures and tables present numbers and proportions of employees in racial, ethnic, and gender groups in the U.S. 
Agency for International Development (USAID) overall and in USAID s Civil and Foreign Services in fiscal years 2002 through 2018. Appendix III: Comparison of USAID Workforce with Federal Government and Relevant Civilian Labor Force We compared summary statistics for the U.S. Agency for International Development s (USAID) workforce overall with summary statistics for the federal government and relevant civilian labor force. <10. Comparison of USAID and Federal Workforce> We compared summary statistics calculated from USAID personnel data for fiscal year 2018 with summary statistics for the federal government for fiscal year 2017, published in the Federal Equal Opportunity Recruitment Program (FEORP) report. Our comparison of USAID personnel data with data from the Office of Personnel Management s FEORP report for the federal government found differences between the proportions of racial or ethnic minorities at USAID and those in the federal workforce. In particular, the proportions of African Americans and Asians were higher at USAID in fiscal year 2018 than in the federal workforce in fiscal year 2017, but the proportion of Hispanics was lower at USAID than in the federal workforce for those years. The proportion of women at USAID was higher than in the federal workforce (see table 17). <11. Comparison of USAID s Workforce with Relevant Civilian Labor Force across Equal Employment Opportunity Commission Groupings> We compared summary statistics for USAID s workforce with summary statistics for the relevant civilian labor force from the Census Bureau s equal employment opportunity tabulation for three of the Equal Employment Opportunity Commission (EEOC) occupational classification system s nine categories. Using an EEOC table that cross-classifies Office of Personnel Management occupation codes and federal sector occupational categories, we classified each USAID employee into one of the nine categories. We compared USAID and relevant civilian labor force statistics for the following three categories, corresponding to 99 percent of USAID s full-time, permanent employees in fiscal year 2018: officials and managers, professional workers, and technical workers and technologists. Our comparison of USAID workforce data with relevant civilian labor force data found generally larger proportions of racial or ethnic minorities at USAID than in the relevant civilian labor force for officials and managers, professional workers, and technical workers and technologists (see tables 18 through 20). The proportions of women were lower at USAID than in the relevant civilian labor force for professional workers but were higher for officials and managers and for technical workers and technologists. Appendix IV: Demographic Data on Executives at USAID, Fiscal Years 2002- 2018 To compare U.S. Agency for International Development (USAID) and federal government workforce data, we contrasted summary statistics on executive employees calculated from USAID personnel data for fiscal year 2018 with summary statistics on executives from federal government workforce data for fiscal year 2017 that were published in the Federal Equal Opportunity Recruitment Program (FEORP) report. As table 21 shows, our comparison of USAID workforce data with the FEORP data found a slightly higher proportion of white executives and a slightly lower proportion of racial or ethnic minority executives at USAID than in the federal workforce overall. Appendix V: Workforce Data on Veterans at USAID We analyzed U.S. 
Agency for International Development (USAID) data on employees hired with veterans preference in fiscal years 2002 through 2018. The following tables present the numbers and percentages of employees hired with or without veterans preference in USAID s workforce overall and in USAID s Civil and Foreign Services during that period. Appendix VI: Workforce Data on Individuals with Disabilities at USAID Table 25 shows the proportions of permanent employees with a disability in the U.S. Agency for International Development s (USAID) Civil and Foreign Services in fiscal years 2009 through 2017. The data shown are summary statistics from USAID s Management Directive 715 (MD-715) reports to the Equal Employment Opportunity Commission. As the table shows, the proportion of permanent employees with disabilities increased in the Civil Service and remained constant in the Foreign Service in the years for which USAID reported these data. Appendix VII: USAID Data on Political Appointees and Office of Inspector General Employees, Fiscal Years 2002-2018 In addition to analyzing the demographic composition of the U.S. Agency for International Development s (USAID) workforce, we analyzed USAID personnel data to determine summary statistics on political appointees in fiscal years 2002 through 2018. We considered employees to be political appointees if they were on the executive pay plan or the administratively determined pay plan. This includes Senate-confirmed political appointees as well as political appointees that did not require Senate confirmation. The following figures and tables present the numbers and proportions of political appointees in racial or ethnic and gender groups in USAID overall and USAID s Civil Service and Foreign Service in fiscal years 2002 through 2018. We also analyzed USAID personnel data to determine summary statistics on employees of the agency s Office of Inspector General in fiscal years 2002 through 2018. The following tables present the numbers and percentages of the office s employees in racial or ethnic and gender groups in fiscal years 2002 through 2018. Appendix VIII: Data on Applicants to USAID, Fiscal Years 2012-2018 We analyzed data for applicants to the U.S. Agency for International Development s (USAID) Civil Service in fiscal years 2012 and 2018 and applicants to USAID s Foreign Service in fiscal years 2012 and 2016. According to USAID s guidance on personnel recruitment, an applicant is considered eligible when USAID s online application evaluation system, using the applicant s online responses to standardized questions, determines that the applicant meets eligibility requirements and the minimum qualifications defined in the vacancy announcement. USAID s Civil Service staffing guidance provides that officials may interview and make selections on the basis of referral lists of eligible applicants. USAID s personnel recruitment guidance for the Foreign Service also notes that an applicant is considered selected when the applicant s score is above the cut-off total score and the applicant has passed the onsite assessment to advance to the reference-check stage of the hiring process. We considered an applicant to have been rated eligible if the applicant data showed that the applicant had not been rated ineligible. We considered an applicant to have been selected if the applicant data showed that the applicant was either hired or selected. 
Tables 30 through 32 show the percentages of eligible applicants and selected eligible applicants to, respectively, USAID overall in fiscal years 2012 and 2018, USAID s Civil Service in fiscal years 2012 and 2018, and USAID s Foreign Service in fiscal years 2012 and 2016. Appendix IX: USAID Data on Newly Hired Employees, Fiscal Years 2003-2018 In addition to analyzing the demographic composition of the U.S. Agency for International Development (USAID) workforce, we analyzed USAID personnel data to determine summary statistics on employees hired in fiscal years 2003 through 2018. We considered an employee to have been hired in a given fiscal year if the employee first appeared in USAID s personnel data for that year. Because the USAID data we reviewed began in fiscal year 2002, we were unable to identify employees who were hired in that fiscal year; thus, fiscal year 2003 is the first for which we were able to identify newly hired employees. Figure 21 shows the number of newly hired employees at USAID from fiscal year 2003 to fiscal year 2018. The following figures and tables present the numbers and proportions of newly hired employees in racial, ethnic, and gender groups in USAID overall and USAID s Civil Service and Foreign Service in fiscal years 2003 through 2018. Appendix X: U.S. Agency for International Development Workforce Data on Attrition, Fiscal Years 2003-2018 In addition to analyzing the demographic composition of the U.S. Agency for International Development s (USAID) workforce, we analyzed USAID personnel data to determine summary statistics for employees who left USAID in fiscal years 2003 through 2018 for reasons other than retirement or death. Figures 24 and 25 show the percentages of such employees in various racial, ethnic, and gender groups at USAID overall and in USAID s Civil Service and Foreign Service in fiscal years 2003 and 2018. Table 35 presents attrition rates for white and racial or ethnic minority employees who left USAID in fiscal years 2003 through 2018 for reasons other than retirement or death. Table 36 presents attrition rates for men and women who left USAID in fiscal years 2003 through 2018 for reasons other than retirement or death. Appendix XI: USAID Workforce Data on Promotion Rates, Fiscal Years 2013-2017 As table 37 shows, our analysis of yearly promotion rates for fiscal years 2013 through 2017 at the U.S. Agency for International Development (USAID) found that promotion rates for white employees exceeded those for racial or ethnic minority employees for Civil Service promotions from GS-11 and every higher rank in every year, except from GS-15 to executive in 3 years, and Foreign Service promotions from Class 4 and higher ranks for 11 of the 20 possible year-rank combinations. Table 38 shows the promotion rates for white employees and racial or ethnic minority employees in USAID s Civil and Foreign Services in fiscal years 2013 through 2017. <12. Class 4 to Class 3> from Class 4 and higher ranks for 12 of the 20 possible year-rank combinations in the Foreign Service. Table 40 shows the promotion rates for men and women in USAID s Civil and Foreign Services in fiscal years 2013 through 2017. Appendix XII: USAID Workforce Data on Years Employees Spent in Each Rank, Fiscal Years 2002-2018 Our analysis of U.S. Agency for International Development (USAID) workforce data found that racial or ethnic minorities generally spent more years in each rank than whites did in USAID s Civil Service in fiscal years 2002 through 2018. 
Table 41 shows the average years in rank for whites and racial or ethnic minorities in USAID's Civil and Foreign Services. Our analysis also found that in the Civil Service, women generally spent more years than men in early- to mid-career ranks (GS-13 and below) before being promoted. However, women spent fewer years than men in later career ranks (GS-14 and above) before being promoted. In the Foreign Service, women generally spent fewer years than men in early- to mid-career ranks (Class 2 and below) before being promoted. Table 42 shows the average years in rank for men and women in USAID's Civil and Foreign Services in fiscal years 2002 through 2018. Appendix XIII: Full Promotion Regression Results Tables 43, 44, 50, and 51 provide summaries of the multivariate statistical regression results (specifically, duration regression results) for our estimates of the percentage differences in odds of promotion for racial or ethnic minorities compared with whites and for women compared with men in the U.S. Agency for International Development's (USAID) Civil and Foreign Services. Our analyses do not completely explain the reasons for differences in promotion outcomes, which may result from various unobservable factors. Thus, our analyses do not establish a causal relationship between demographic characteristics and promotion outcomes. Promotion slots (and therefore promotion outcomes) may be affected by budget constraints that vary across fiscal years. Model 6 used data for fiscal years 2011 through 2018 only. In addition to controlling for the same variables as model 5, model 6 controlled for use of long-term leave in the prior year. Tables 43 through 55 provide the regression results of these six models for all promotion stages that we analyzed in the Civil and Foreign Services. Tables 43, 44, 50, and 51 present the consolidated regression results for all six models and all promotion stages, presented as estimates of percentage differences. Tables 45 through 49 and tables 52 through 55 provide the full regression results of the first five models, presented as odds ratios. Odds ratios that are statistically significant and lower than 1.00 indicate that individuals with the given characteristic were less likely to be promoted. Odds ratios that are statistically significant and greater than 1.00 indicate that individuals with the given characteristic were more likely to be promoted. To convert the values in tables 45 through 49 and tables 52 through 55 to the values in tables 43, 44, 50, and 51, we linearly transformed the estimates. That is, the values for the estimates in tables 43, 44, 50, and 51 are equal to the values in tables 45 through 49 and in tables 52 through 55 multiplied by 100, minus 100. The values for the standard errors in tables 43, 44, 50, and 51 are equal to the values in tables 45 through 49 and in tables 52 through 55 multiplied by 100. For example, in table 45, the estimate for model 1a is 0.463; we arrived at the percentage difference of negative 54 percent in table 43 by 0.463*100-100. Additionally, in table 45, the estimate for the standard error for model 1a is (0.0624); we arrived at the converted standard error of (6) in table 43 by (0.0624)*100. Table 43 summarizes the regression results for our estimates of the percentage differences in odds of promotion for racial or ethnic minorities compared with whites in the Civil Service.
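Expressed as a formula, the linear transformation just described is the following; the example values are those cited in the text.

\[
  \text{percentage difference} = (\text{odds ratio} \times 100) - 100,
  \qquad
  \text{converted standard error} = \text{standard error} \times 100 .
\]

For example, an odds ratio of 0.463 converts to (0.463 x 100) - 100 = -53.7, reported as about negative 54 percent, and a standard error of 0.0624 converts to 6.24, reported as (6).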
We observed that racial or ethnic minorities' lower odds of promotion from GS-11 through GS-14 were consistently statistically significant across all of our models examining combinations of factors that could influence promotion (i.e., models 1a through 5). In addition, our results were generally statistically significant when we examined the more recent time period, fiscal years 2011 through 2018 (see model 6).

[The detailed Civil Service regression tables, presented as odds ratios with standard errors for each model and control variable (including racial or ethnic minority status, woman, age at entry and its square, years of government service and its square, duration controls, and fiscal year controls), are not reproduced here. In the legend to these tables, GS = General Schedule, and ***, **, and * denote statistical significance at p-values of less than 0.01, 0.05, and 0.1, respectively.]

For example, the estimated odds ratio for racial or ethnic minority employees for promotion from GS-12 to GS-13 is 0.640 (model 5), which means that the odds of promotion for racial or ethnic minority employees are about 64 percent of the odds for white employees. We conducted discrete-time duration analysis using logit models to analyze the time duration (number of years) before promotion from each GS grade shown. In all models, we controlled for the time that employees spent in each grade before promotion. The overall baseline population for the duration analysis represents individuals who possessed none of the characteristics indicated by the list of control variables. These analyses do not completely explain why differences in odds of promotion exist. While various independent variables capture and control for many characteristics across demographic groups, unobservable factors may account for differences in odds of promotion; thus, our regression results do not establish a causal relationship between demographic characteristics and promotion outcomes.

Table 50 summarizes the regression results for our estimates of the percentage differences in odds of promotion for racial or ethnic minorities compared with whites in the Foreign Service. We found that racial or ethnic minorities had lower estimated odds of promotion than whites in early to mid career (Class 4 through Class 1), but these differences were generally not statistically significant. However, we observed statistically significantly lower odds of promotion for racial or ethnic minorities from Class 3 to Class 2. These results were consistently statistically significant across all of our models examining combinations of factors that could influence promotion (i.e., models 1a through 5), including in the more recent period, fiscal years 2011 through 2018 (see models 5 and 6). Tables 52 through 55 present full regression results for models 1a through 5 for each rank in the Foreign Service. The results are presented as odds ratios.

[The detailed Foreign Service regression tables, presented as odds ratios with standard errors for each model and control variable (including racial or ethnic minority status, woman, age at entry and its square, years of government service and its square, proficiency in two or more languages, duration controls, and fiscal year controls), are not reproduced here.]

Tables 56 and 57 summarize the multivariate statistical regression results (specifically, duration regression results) for our estimates of the percentage differences in odds of promotion for two groupings of racial or ethnic minorities in the U.S. Agency for International Development's (USAID) Civil and Foreign Services. For the first grouping, we examined odds of promotion for African Americans and non-African American racial or ethnic minorities compared with whites. For the second grouping, we examined odds of promotion for the individual racial or ethnic groups (African Americans, Hispanics, Asians, and other racial or ethnic minorities) compared with whites. Our analyses do not completely explain the reasons for differences in promotion outcomes, which may result from various unobservable factors. Thus, our analyses do not establish a causal relationship between demographic characteristics and promotion outcomes.

The control variables in these models included veteran's status; transferring between the Foreign and Civil Services; having a hardship assignment in the prior year (Foreign Service only); having an overseas post in the prior year (Foreign Service only); proficiency in two or more languages other than English (Foreign Service only); and fiscal year fixed effects (indicator variables representing the fiscal year). The third model, which was limited to fiscal years 2011 through 2018, controlled for the same variables as the second model and also controlled for use of long-term leave in the previous year.

Table 56 summarizes the regression results for our estimates of the percentage differences in odds of promotion for the two groupings of racial or ethnic minorities compared with whites in the Civil Service. For the first grouping, we found statistically significantly lower odds of promotion from GS-11 through GS-15 for African Americans than for whites in fiscal years 2002 through 2018 (model 2). The odds of promotion from GS-12 to GS-13 were also statistically significantly lower for non-African American racial or ethnic minorities during the same period. For the second grouping, we found statistically significantly lower odds of promotion from GS-12 to GS-13 for Asians than for whites in fiscal years 2002 through 2018.

[The detailed values of table 56 (estimated percentage differences and standard errors for Asians and the other racial or ethnic minority groups) are not reproduced here. In the legend, GS = General Schedule, and ***, **, and * denote statistical significance at p-values of less than 0.01, 0.05, and 0.1, respectively.]
Table 57 presents the summary of the regression results for our estimates of the percentage differences in odds of promotion for the two groupings of racial or ethnic minorities compared with whites in the Foreign Service. For the first grouping, we found statistically significantly lower odds of promotion from Class 4 to Class 3 for African Americans than for whites in fiscal years 2002 through 2018 (model 2). For the second grouping, we found statistically significantly lower odds of promotion from Class 3 to Class 2 for members of the Other racial or ethnic minority group than for whites in fiscal years 2011 through 2018 (model 3). Appendix XV: Comments from the U.S. Agency for International Development Appendix XVI: GAO Contacts and Staff Acknowledgments <21. GAO Contacts> <22. Staff Acknowledgments> In addition to the contacts named above, Mona Sehgal (Assistant Director), David Hancock (Analyst-in-Charge), Cody Knudsen, Moon Parks, Nisha Rai, Deirdre Sutula, and Melinda Cordero made key contributions to this report. Reid Lowe, Justin Fisher, Nicole Willems, and Chris Keblitis provided technical assistance. Why GAO Did This Study
USAID has a stated commitment to fostering an inclusive workforce that reflects the diversity of the United States and has undertaken efforts to increase diversity in its Civil and Foreign Services. However, concerns about the demographic composition of USAID's workforce are longstanding.
GAO was asked to review issues related to the diversity of USAID's workforce. This report examines, among other things, the demographic composition of USAID's workforce in fiscal years 2002 through 2018, differences between promotion outcomes for racial or ethnic minorities, and the extent to which USAID has identified workforce diversity issues and worked to address those issues. GAO analyzed USAID's personnel data for its full-time, permanent, career workforce for fiscal years 2002 through 2018—the most recent available data. GAO's analyses do not completely explain the reasons for differences in promotion outcomes, which may result from various unobservable factors. Thus, GAO's analyses do not establish a causal relationship between demographic characteristics and promotion outcomes. GAO also reviewed USAID documents and interviewed USAID officials and members of 13 employee groups.
What GAO Found
The overall proportion of racial or ethnic minorities in the U.S. Agency for International Development's (USAID) full-time, permanent, career workforce increased from 33 to 37 percent from fiscal year 2002 to fiscal year 2018. The direction of change for specific groups varied. For instance, the proportion of Hispanics rose from 3 to 6 percent, while the proportion of African Americans fell from 26 to 21 percent. The proportions of racial or ethnic minorities were generally smaller in higher ranks. During this period, the overall proportion of women increased from 51 to 54 percent, reflecting their growing proportion in USAID's Foreign Service.
Promotion outcomes at USAID were generally lower for racial or ethnic minorities than for whites in early to mid career. When controlling for factors such as occupation, GAO found that the odds of promotion in the Civil Service were 31 to 41 percent lower for racial or ethnic minorities than for whites in early and mid career, differences that were statistically significant. In the Foreign Service, average promotion rates were lower for racial or ethnic minorities in early to mid career, but the differences were generally not statistically significant when GAO controlled for various factors.
USAID has previously identified underrepresentation of specific groups in its workforce, but staffing gaps, partly due to a lack of senior leadership attention, prevent the agency from consistently performing required Equal Employment Opportunity (EEO) activities. The Office of Civil Rights and Diversity (OCRD), responsible for USAID's EEO program, has been significantly understaffed. Vacancy rates in most OCRD divisions were 50 percent or higher in November 2019 and, despite attempts to hire more staff, remained at 30 to 50 percent as of April 2020. These staffing gaps have limited OCRD's capacity to process EEO complaints and investigations within mandated timeframes and analyze USAID's demographic data. Staffing gaps also prevented OCRD from submitting required reporting on the status of its EEO program in fiscal year 2018. A lack of consistent leadership in OCRD as well as a lack of senior USAID leadership attention to diversity has contributed to OCRD's staffing gaps. As a result, USAID lacks the capacity to respond to allegations of discrimination, identify potential barriers to equal employment opportunity, and submit required annual reports on the progress of its diversity and inclusion efforts in a timely manner—all of which are required EEO functions.
What GAO Recommends
GAO is making four recommendations to USAID, including three to perform required EEO activities and one to demonstrate senior leadership attention to diversity efforts. USAID concurred with the recommendations. |
For example, in the Final Judgment entered by a federal court in the DOJ's case regarding BBA Aviation's (Signature Flight Support) acquisition of Landmark Aviation, BBA was required to divest FBO facilities in six locations where the transaction would have created a monopoly or duopoly for FBO services. In addition, the court order required BBA to provide advance notice of certain future acquisitions for the 10-year duration of the final judgment. <2. Transparency of FBO Fees Varies by Service> Based on our review of FBO and third-party websites, we found that fuel prices at FBOs are readily available to anyone on the internet. Nearly all of the pilots we spoke with told us they use these resources for making flight plans. For example, current prices for fuel are readily available on third-party websites such as AirNav and Sky Vector, among others, and on about a third of the websites for FBOs we visited. See figure 2 for a representation of a website providing FBO information. However, fees for other services, such as parking and aircraft handling, are less transparent. Our review of FBO and third-party websites found that such fees are not always available online, and that fees may vary by type of aircraft, are sometimes waived, and are called by different terms. According to FBO staff we spoke with, lists of fees for services other than fueling can be lengthy and unwieldy to post on their websites for multiple reasons. First, some fees will vary based on the size and approved weight of the aircraft. For example, the price sheet for services other than fuel at one FBO showed fees varying by the aircraft's approved weight, so there were 11 different prices for each of those services. The same pricing sheet also included prices for dozens of incidental services, such as aircraft towing and lavatory service, that are not based on aircraft weight. Additionally, customers may be eligible for discounts on fuel purchases either by volume or through a membership program. FBOs may also waive fees in some cases; for example, with a qualifying fuel purchase an FBO might waive a parking, ramp, or handling fee. Further, a few stakeholders and pilots we spoke to indicated that FBOs do not always use the same terms for a fee. For example, a landing fee or a ramp fee might be a fee for doing essentially the same thing. Consequently, to find out how much an FBO visit will cost, 16 of the 18 pilots we interviewed told us they call the FBO in advance. Based on information such as their type of aircraft, length of stay, and other services they might require, the FBO provides an estimate of their total cost. Recently, some industry stakeholders have called for increased price transparency and consistency among FBOs regarding how they characterize their fees, and have taken some actions to increase the transparency of fees. A campaign called "Know Before You Go," developed through the cooperation of six aviation associations, encourages FBOs to communicate and expeditiously provide available services and a listing of currently applicable posted fuel prices, as well as fees and charges for other available services. Further, the campaign suggests that these fees and charges should be made accessible to aircraft operators online in a user-friendly manner and with sufficient clarity. Additionally, it encourages customers to contact the FBO to ask questions so pilots can make informed decisions.
In response, one large-chain FBO began posting fees online for piston aircraft at its locations, and another FBO company created a trip calculator on its website for pilots to calculate the cost of their visit (see fig. 3). We also found that a third-party company recently launched a website that provides FBO parking ramp fees, similar to the websites that provide fuel prices. In addition, AOPA invited FBOs to include their fees in the association's online airport directory. The association also indicated that it categorizes the variety of fees into basic types, such as fees for landing, using a hangar, or using lavatory service, to help clarify what pilots could expect to pay. In October 2019, AOPA officials indicated that FBOs' posting of fees had not increased as much as they had hoped.
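As an illustration of the kind of arithmetic such a trip calculator performs, the following is a minimal Python sketch. All fee names, amounts, and the fuel-purchase waiver threshold are hypothetical values invented for illustration; they do not reflect any particular FBO's actual pricing, which varies by aircraft weight, services used, and location.

# Hypothetical illustration of an FBO trip-cost estimate. All fee amounts,
# names, and the waiver rule below are invented for illustration only.
def estimate_trip_cost(fuel_gallons: float,
                       fuel_price_per_gallon: float,
                       ramp_fee: float = 75.0,
                       overnight_parking_fee: float = 40.0,
                       nights: int = 1,
                       waiver_min_gallons: float = 50.0) -> float:
    fuel_cost = fuel_gallons * fuel_price_per_gallon
    # Many FBOs waive a ramp or handling fee with a qualifying fuel purchase.
    fees = 0.0 if fuel_gallons >= waiver_min_gallons else ramp_fee
    fees += overnight_parking_fee * nights
    return fuel_cost + fees

# Example: 60 gallons of Jet A at $5.75 per gallon and a two-night stay
# would total 60 * 5.75 + 40 * 2 = 425.00 (ramp fee waived).
# print(estimate_trip_cost(60, 5.75, nights=2))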
Further, there are parts of the United States, specifically on the East Coast, where little or no 100LL is produced and, as a result, transportation costs can significantly affect the cost of fuel to the FBO. Labor costs. FBOs compete in the local labor market for staff. The cost of labor for FBOs may vary across local labor markets around the country. Further, a particular FBO may need specialized skills to provide the services they offer, and this factor can affect the FBO s costs. For example, some FBOs offer maintenance services, so will have trained mechanics on staff to perform such services. State taxes. Aviation fuel excise taxes on 100LL vary considerably from state to state and may also affect the costs to a consumer. For example, both Oregon and Idaho have a lower state aviation fuel tax compared to neighboring Washington State. An FBO manager told us that in some cases, a pilot will fly over to Idaho to obtain less expensive fuel, even though he or she may base the aircraft in Washington. Security. Some airports particularly those with commercial service are responsible for implementing security requirements in accordance with their Transportation Security Administration (TSA)- approved security programs, notably the security of perimeters and access controls protecting restricted areas of the airport, such as ramps and taxiways. We found that some FBOs are responsible for security and access controls on their leased property based on our review of individual lease requirements and the airport security plan. These FBOs might require staff on site 24 hours a day to maintain airfield and perimeter security, a requirement that can increase FBO costs. For example, an FBO operating at an airport with commercial service told us that it is responsible for perimeter security on the land it leases from the airport. In addition, it is subject to unannounced security checks by TSA. In contrast, smaller general aviation airports without commercial service are not required to have as many security requirements. <3.2. Stakeholders Said Demand Factors May Influence Prices> Selected stakeholders told us that the location of an airport may influence demand for FBO services. Economic theory indicates that increased demand for a service will generally result in increased prices, all else equal. In particular, stakeholders cited the following examples of demand factors that may influence prices: Busy and congested airports may have higher prices for FBO services due to greater demand. Prices may be higher during part of the year in locations with significant seasonal traffic, such as beach resorts with a summer high season and ski resorts with a winter high season. The increased demand at FBOs during high seasons results in higher prices than during the off season. An airport s proximity to the central business district may be associated with higher demand and higher prices in such locations. <3.3. Stakeholders Described How the Extent of Competition May Influence FBO Prices> In addition to cost and demand factors, the extent to which the market for FBO services is competitive may also influence prices. According to the stakeholders we interviewed, competition among FBOs may lead to lower prices than would be the case when only one FBO provides the service at that airport. In our analysis of FAA airport and FBO data, however, we found that nearly 90 percent of NPIAS airports that offer FBO services are served by only one FBO (see table 1). 
According to a Transportation Research Board report, a strong indicator of the number of FBOs that can be financially viable at an airport can be the amount of fuel sales. For example, two airport managers we spoke to said that there was an insufficient volume of fuel sold at their airports to support more than one FBO. While the majority of NPIAS airports in the contiguous United States have only one FBO, pilots we spoke to said that competition from FBOs at nearby airports can also affect prices. For example, within 30 miles of Spokane International Airport, there are five other airports, each of which is served by an FBO that may compete with the services provided at Spokane International. (See fig. 4) We asked selected managers of FBOs and airports and selected general aviation pilots to describe how off-airport competition may influence FBO pricing. FBO and airport managers told us that they view nearby airports as competitors and monitor the FBO prices at these locations. For example, an FBO manager in Maine told us he regularly checks the prices at the larger international airport that is nearby. This finding suggests that when an FBO sets its prices, it takes into account the extent to which nearby airports may compete for its services. On the buyer s side of the market, 11 of the 18 general aviation pilots we interviewed told us that they generally price shop for aviation fuel. Further, most general aviation pilots we spoke with told us they use online flight-planning tools to map their route and consider the fuel cost and service fees of the airports along that route. Further, four pilots and an FBO manager indicated that on longer trips that require refueling before reaching a destination, pilots may have options that are hundreds of miles from each other. For example, when flying from California to Texas, a pilot could choose to stop either in New Mexico or Arizona to obtain fuel. In this scenario, a pilot would compare prices of many FBOs in those two states and likely choose one with lower fuel prices. Likewise, an FBO manager in Kansas indicated that for these types of customers, he competes with FBOs at airports more than 100 miles away. However, we interviewed some pilots who said that they do not consider every nearby airport as a substitute. To be a true substitute the airport must meet the pilot s needs to be a viable option. For example, the airport s runway must be of sufficient length for the aircraft, and some runways may be too short for certain aircraft. Also, pilots take into account the type of fuel offered at an FBO. The pilot of a piston-driven aircraft will be unable to refuel at an FBO that offers only jet fuel. Finally, some pilots said the price differential would need to be sufficiently large to compensate them for any inconvenience. Some mentioned that the price of 100LL would have to be 30 to 40 cents per gallon lower to affect their flight plan, while others put that threshold at a lower point, 25 to 30 cents per gallon. Pilots also told us they take travel time into account. For example, some pilots said that they would consider landing at an alternative airport with lower prices if it were no more than 20 to 30 miles out of their way and if the change in destination were to add no more than 10 to 20 minutes to their trip. 
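One simple way to operationalize the idea of nearby competing airports, such as the five FBO-served airports within 30 miles of Spokane International noted above, is a great-circle distance screen. The sketch below is illustrative only; the 30-mile radius and the data structure are assumptions and do not represent the competition measures used in GAO's statistical model, which are described in appendix II.

from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8

# Great-circle (haversine) distance between two points given in decimal degrees.
def haversine_miles(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

# Other FBO-served airports within a given radius of an airport. Each airport is
# assumed to be a dict with "id", "lat", and "lon" keys (an illustrative format).
def nearby_fbo_airports(airport, fbo_airports, radius_miles=30.0):
    return [other for other in fbo_airports
            if other["id"] != airport["id"]
            and haversine_miles(airport["lat"], airport["lon"],
                                other["lat"], other["lon"]) <= radius_miles]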
As a first step in examining the relationship of FBO competition with pricing, we examined differences in the average posted prices for 100LL and Jet A across NPIAS airports in the contiguous United States where only one FBO sold a fuel type compared to airports at which more than one FBO sold that fuel, without controlling for other factors that might also be correlated with prices. We also calculated the average prices at airports with one FBO and airports with multiple FBOs for a subset of airports with an air traffic control tower. We examined this subset of towered-airports, as they generally have more operations, and thus more demand. As shown in table 2 below, the average price per gallon of aviation fuel was lower at airports with only one FBO than at airports with on-airport competition. For example, at airports with only one FBO, the average price posted for full-service 100LL was $5.01 per gallon, while the average price posted at airports with more than one FBO was about 73 cents higher. However, examining average differences in price fails to control for other factors that might be correlated with prices. In particular, airports that have more than one FBO are likely to be those that have higher traffic volumes and that are located in areas with larger populations and higher per-capita incomes all factors likely correlated with higher prices. Therefore, to more fully assess the issue, we developed a statistical model that examines how fuel prices may be correlated with measures of competition when controlling for other factors, such as demand, that also may be correlated with prices. <3.4. Our Statistical Model Indicates Several Factors Are Correlated with FBO Fuel Prices> Our statistical model confirmed a correlation between selected cost and demand factors and FBO-posted pricing of full-service 100LL and Jet A. It also confirmed a correlation between some of the competition factors described by stakeholders and the price of aviation fuel. Our analysis included information on posted prices for both 100LL as well as Jet A. In addition to running the model for NPIAS airports in the contiguous United States for which posted prices were available (all-airports), we also ran the model for a subset of these airports that have air traffic control towers (towered-airports). See appendix II for a more detailed discussion of the model structure and findings. As we have noted, we expected fuel prices to be correlated with a variety of cost, demand, and competition factors that pertain to characteristics of airports and their locations, as well as characteristics of FBOs operating at airports. We found the following correlations: Airport Characteristics. Our model found that the size of an airport measured as the total number of operations was associated with higher prices for both 100LL and Jet A. The operational size of an airport is likely associated with higher demand for airport services and also is likely related to higher costs of providing those services. Specifically, we found that an increase of 10,000 airport operations per year was associated with higher prices of about 2 cents per gallon for both 100LL and Jet A in both the all-airports and towered-airports datasets. The length of the longest runway available at an airport was also correlated with higher fuel prices. The length of the runway is an indicator of the types of aircraft an airport can support. 
In particular, longer runways are able to accommodate larger and heavier aircraft, which generally use more fuel, and thus may indicate higher demand for fuel at the airport. Specifically, we found that a 1,000-foot increase in runway length was associated with a higher price of about 7 to 8 cents per gallon for both fuels. Demographic Characteristics of Airport Location. Our analysis found that FBOs' fuel prices were generally higher at airports in areas with higher incomes, but not always at airports in areas with larger populations. We found that prices for both types of fuel were higher at airports located in counties with higher per-capita incomes. Where incomes are higher, we would expect the demand for travel to be greater. Moreover, where there are higher incomes, the costs of providing FBO services, particularly labor costs, are likely higher. We found income correlated with fuel prices for both 100LL and Jet A in the all-airports datasets and for 100LL in the towered-airports dataset. We also found that 100LL aviation fuel prices were higher at airports located in counties with larger populations. This was expected due to the likely greater demand for air travel in more populous areas. However, county population was not statistically significant in relation to the price of Jet A. Geographic Characteristics of Airport. Our model found that airports located in states on the East Coast tend to have higher 100LL prices. Specifically, the model suggests that 100LL prices are between 22 and 26 cents higher per gallon on average in these states. We expected FBOs operating in East Coast states to have higher 100LL prices due to higher transportation costs, as we found that there is no production of 100LL in these states. As mentioned earlier, 100LL is generally moved by truck, rail, or barge due to the smaller volumes being produced, a less cost-effective means of transport than pipeline. Jet A, on the other hand, is transported by pipeline over long distances. We thus expected higher prices for 100LL at airports in these states. We found this geographic differential in all specifications for the 100LL model. Large-Chain FBOs. Our model found that both types of fuel tend to have higher posted prices at airports that have a large-chain FBO operating on the premises, regardless of whether or not there was another competitor on the premises. Specifically, we found that when a large-chain FBO operates at an airport, fuel at the airport tends to be more expensive on average: on the order of 60 cents more per gallon for 100LL, and an even greater differential, more than $1.20 per gallon, for Jet A. Availability of Self-Serve 100LL Fuel. Our model found that when self-serve 100LL is available at an airport, the prices for full-serve 100LL tend to be lower than at airports with no self-serve 100LL available. Specifically, we found that if a self-serve 100LL option is available at an airport, the price of full-service 100LL will be about 10.5 cents per gallon lower, on average, compared to FBOs at airports without self-service 100LL. We expected that a self-service option might be correlated with somewhat lower prices for full-service 100LL, even if the full-service option is provided by the same FBO, because presenting pilots with a lower-priced option may constrain the prices that FBOs will charge for the full-service option. Competition. Within our statistical model, we examined whether the extent of competition among FBOs had a correlation with fuel pricing in two ways. On-airport competition.
On-airport competition occurs when two or more FBOs at an airport sell the same kind of fuel. We estimated that the price of Jet A is lower, on average, at an airport when two or more FBOs provided that fuel at an airport. Specifically, for the all-airports dataset, we found that, on average, the posted price of Jet A was 35 cents per gallon higher if only one FBO sold the fuel at an airport compared to the case when at least one additional competitor also served the airport. In the towered-airports dataset, the posted price of Jet A was about 50 cents higher on average if there were only one FBO at the airport. For 100LL, we did not find a statistical relationship between on- airport competition and prices in the all-airports dataset; however, we did find a statistical relationship between on-airport competition and prices in the towered-airports dataset. The finding of no correlation between on-airport competition and 100LL prices may be linked to the rarity of airports with more than one FBO selling 100LL in the all-airports dataset. In fact, in the all-airports dataset, only 13 percent of FBOs faced on-airport competition in the sale of 100LL while in the towered-airports dataset about one-third of FBOs faced competition in the sale of 100LL. Specifically, we estimated that the price of 100LL is 11 cents lower, on average, if there are at least two FBOs selling that fuel at a towered airport. Nearby competition. Our model also tested whether the availability of additional FBOs at airports within a 30-mile distance from a given airport had any correlation to prices for 100LL. We included this factor because many of the stakeholders we spoke to noted that general aviation pilots will consider using an airport near their preferred airport if prices were more favorable at the alternative location. However, across all model specifications, we did not find that prices for 100LL were correlated with the presence of FBOs at nearby airports. <4. FAA s Compliance Activities Have Not Identified FBO Pricing as a Widespread Area of Concern, and FAA Is Taking Steps to Consolidate and Review Regional Inquiries> FAA officials told us they primarily rely on airports to self-certify their compliance with federal airport grant assurances when they accept AIP grant funding. This reliance includes the grant assurance that relates to FBO fees an airport must ensure aeronautical services are available to all users on a reasonable and not unjustly discriminatory basis. FAA officials indicated that airport compliance staff conduct outreach to stakeholders and provide training aimed at ensuring that airports comply with these assurances. One recent outreach effort focused on FBO pricing. Additionally, FAA responds to phone and email inquiries and informal and formal complaints, and conducts periodic airport land use inspections, as discussed below, but none of these efforts has identified FBO pricing as a widespread area of concern. Training and Outreach. According to FAA, compliance staff conducts periodic training and outreach to the airport community on a variety of compliance issues. FAA headquarters annually conducts recurrent compliance training which includes overseeing airport grant assurances with regional and other FAA offices. FAA officials told us they use these sessions to address concerns brought up by regional compliance officials and airport compliance staff. One example of FAA s outreach efforts occurred in December 2017 after AOPA raised questions about FBO pricing earlier that year. 
To bring clarity to the issue of FBO pricing and the role of FAA, the agency released questions and answers that emphasized: (1) FAA does not regulate FBO prices; and (2) airports are responsible for ensuring FBO prices are reasonable and applied in a manner that is not unjustly discriminatory. Furthermore, FAA stated that whether an FBO's fees are reasonable (i.e., higher than the average of other FBOs) involves a number of economic, business, and other factors that vary widely from airport to airport and FBO to FBO and may include underlying costs, market conditions, quality of service, and other factors. Inquiries and Complaints. According to FAA, airport compliance staff respond to (1) phone and e-mail inquiries, (2) informal complaints, and (3) formal complaints. FAA officials told us that, while FAA does not regulate FBO prices, if someone contacts them with inquiries or a concern about a potential grant assurance violation, such as one involving FBO prices, they first refer the issue to the local airport to resolve. If the issue is not resolved, the complainant may file an informal complaint with an FAA regional office. According to FAA guidance, each FAA regional office will review the complaint and issue a letter indicating whether or not FAA sees a grant violation that the airport should fix. If the complainant is dissatisfied with the regional office's letter, the complainant may then file a formal complaint about a violation of grant assurances with headquarters. Headquarters will then review the circumstances of the complaint, make a formal determination as to whether a grant violation occurred, and work with the airport to address the violation. Data on informal and formal complaints filed with FAA headquarters and regional offices indicate FAA has not received many complaints on FBO pricing. Specifically, we reviewed informal complaint data from 2013 through 2018 from each FAA region and found a total of 142 informal complaints about potential grant violations. Seven of these complaints related to FBO prices, and FAA found one violation, which the airport later resolved by providing space for aircraft to do routine maintenance. In addition, we obtained and reviewed FAA's responses to formal complaints from 2013 through 2018, and found that none of these formal complaint responses dealt with FBO prices. While FAA received few complaints related to FBO prices, there are limitations in relying on complaint data to understand the magnitude of an issue. For example, some pilots we spoke with stated that if they have an issue with an FBO, they will use an alternative FBO rather than submit a complaint to FAA. We found that, in addition to informal and formal complaints, each FAA regional office independently records inquiries about airport grant assurance issues ranging from inappropriate hangar use to noise complaints to FBO lease arrangements. Further, each region varies in the way it captures airport compliance information, such as airport location, dates, and description of an inquiry or concern. For example, some regions indicate the specific grant assurance that was potentially violated, while others simply describe the nature of the concern. To help see if there is a pattern of concerns across the country, FAA's Office of Airport Compliance in headquarters has an initiative to centralize information on inquiries and concerns about grant assurances, including any that may be related to FBO prices.
As envisioned, this Enhanced Information Sharing Initiative will provide FAA compliance staff with the ability to record and track inquiries and complaints in comparable systems and should facilitate information sharing among regions and between regions and headquarters. According to FAA officials, centralizing this information will help identify issues that may be of concern to airport users. According to FAA officials, this initiative, originally planned to be completed in August 2019, has faced delays due to a government shutdown earlier this year and information technology security difficulties. However, FAA has hired a new contractor and anticipates completion sometime in fiscal year 2020. According to an FAA compliance manager, problems that arise in the regions are brought to the attention of the airport compliance offices and discussed to determine whether additional actions should be taken. Airport Land-Use Inspections. FAA is required to conduct a minimum of two airport land-use inspections per year per region, reviewing whether airports are complying with grant assurances such as airport property use requirements and lease agreements. We reviewed FAA's annual land use inspection reports to Congress from fiscal year 2013 through 2018 and did not identify any FBO pricing concerns. <5. Agency Comments> We provided a draft of this report to DOJ and DOT for review and comment. DOJ provided technical comments, which we incorporated as appropriate. DOJ also suggested that we discuss more directly the implications that airport ownership of FBOs might have for fuel prices. We agree that airport-owned FBOs might price fuel differently than privately owned FBOs. However, we were not able to obtain reliable data on airport ownership of FBOs. DOT did not have any comments. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Transportation, the Attorney General, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or VonahA@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Airports and Fixed Base Operators GAO Interviewed Appendix II: Analysis of Factors Associated with Aviation Fuel Prices This appendix describes a model we developed to assess factors that may correlate with fixed base operator (FBO) aviation fuel prices across airports. The model uses data on posted prices for full-service 100LL aviation fuel (100LL) and Jet A fuel (Jet A) at a sample of airports in the contiguous United States that are part of the National Plan of Integrated Airport Systems (NPIAS), along with data on selected other factors that may be correlated with fuel prices. Specifically, this appendix discusses (1) the structure of the model, data sources, and variable definitions, and (2) base-case and alternative model results. <6. Structure of Model, Data Sources, and Variable Definitions> Based on our audit work as well as economic reasoning, we hypothesized that a variety of factors may be correlated with aviation fuel prices across airports.
Generally, factors that influence the price of any product are the demand for the product, the cost of producing and marketing the product, and the extent of competition among those selling the product. To examine the correlation between these factors and the price of both 100LL and Jet A fuel sold by FBOs, we developed an econometric model. Specifically, our model analyzed the independent correlation of selected key factors with aviation fuel prices. We used several specifications of the model for both full-service 100LL and full-service Jet A. Each specification used airport-level data to analyze variation in the price of a single type of fuel across airports. For each type of fuel, we included only NPIAS airports within the contiguous United States for which our data on aviation fuel prices reported a price for at least one FBO. In addition, we ran the analysis not only on the full dataset of all of the airports for which we were able to obtain fuel-pricing information (which we refer to as the all-airports dataset), but also on a subset of airports limited to those with an air traffic control tower (towered-airports dataset). <6.1. Dependent Variable> For each type of aviation fuel, the dependent variable or the variable to be explained in the model is the average price of that fuel at an airport, net of state taxes. If an airport has only one FBO selling a fuel, the average price of that fuel at that airport is simply the price charged by the FBO that sells it. At an airport where two or more FBOs compete to sell the same type of fuel, the average price of the fuel is calculated as a simple (unweighted) average across all of the FBOs that sell the fuel at the airport. For 100LL, about 87 percent of airports in the all-airports dataset are served by only one FBO, and for Jet A, the share is lower, at about 84 percent. We obtained data on posted aviation fuel prices from a company that publishes such data online for two separate dates a Wednesday in October of 2018 and a Wednesday in May of 2019. All of the fuel price data we received had been updated within 30 days of these dates. <6.2. Independent Variables> Independent variables in our model included a variety of demand, cost, and competitive factors that we hypothesized may explain the variation in fuel prices across airports. In particular, these factors relate to characteristics of (1) airports, (2) the locations where an airport resides, (3) the FBOs operating at a given airport, and (4) the availability of competing FBOs. Characteristics of airports. We expected certain characteristics of each airport to be related to the level of fuel prices. Airport size. We measured airport size based on the number of total operations takeoffs and landings at the airport. Greater activity at an airport reflects higher demand for services, which we expected to correlate with higher prices. Moreover, it is likely more costly to provide services at these busier airports. As such, based on both demand and cost factors, we expected larger airports to have higher fuel prices. We obtained data on airport operations from the Federal Aviation Administration (FAA). Length of the longest runway. Longer runways can accommodate larger and heavier aircraft, which may increase demand of such traffic. Because larger and heavier aircraft require more fuel, a longer runway may be indicative of greater demand for fuel at the airport. At the same time, longer runways are more costly to construct and maintain. 
Thus both demand and supply factors related to having a longer runway would suggest that fuel prices could be higher at such airports. We obtained information on runway length, which we measure in thousands of feet, from FAA. Characteristics of locations. We also expected demographic and geographic characteristics of the location of each airport to be correlated with fuel prices. Demographic characteristics of the population living in the area near an airport. Personal income per capita. Areas where per-capita incomes are higher could signal a greater demand for air travel and airport services. At the same time, areas with higher per-capita incomes also suggest that costs for labor and other resources the FBO will need to procure will be higher. Thus, we hypothesize that airports located in counties with higher per-capita incomes will have higher fuel prices. We obtained data on personal income per capita by county from the Bureau of Economic Analysis, Department of Commerce. Square of per-capita income. We also expected that, as income levels rise, the effect of even higher levels of income on fuel prices will attenuate. To account for the possibility of a nonlinear relationship between income and fuel prices, we included a variable equal to the square of personal income per capita. Population. A larger population in the area surrounding an airport would likely indicate higher demand for airport services. We obtained population data by county from the Bureau of Economic Analysis, Department of Commerce. Distance from Source of 100LL aviation fuel production. Following production, 100LL is typically shipped over longer distances by truck, rail, or barge, while Jet A tends to be transported over longer distances by pipeline. As such, long-haul transport is relatively more costly for 100LL. We found that there is no production of 100LL in East Coast states, while most other states in the contiguous United States have production sources for 100LL in closer proximity. Therefore, we controlled for the greater cost of transporting 100LL to states along the East Coast with a dummy variable in the 100LL pricing model. We obtained information on production sources for 100LL from the Energy Information Administration, Department of Energy. Characteristics of FBOs. We included two variables in the model that relate to the services provided by FBOs at the airport Large-Chain FBOs. Based on our audit work, we hypothesized that large-chain FBOs those with operations at numerous airports are more likely to focus their business model on meeting the demands of pilots looking for a suite of services and amenities. We thus expected that an airport served by a large-chain FBO may have higher average fuel prices due to the costs of providing such services. We used the data on aviation fuel prices to determine the number of operations run by each FBO. For purposes of the model, we defined an FBO as a large-chain if the owner had at least 25 FBO operations across airports reported in our dataset. Availability of self-service fuel at airport. We hypothesized that the price for full-service 100LL might be lower at an airport where a self- service 100LL is also offered for sale. That is, the ready availability of a cheaper fueling option may influence the pricing of full-service 100LL. Therefore, we included a dummy variable in the 100LL pricing model if self-service 100LL was also available at the airport. 
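To illustrate how airport-level variables like these might be assembled from an FBO-level price file, the sketch below computes the unweighted average full-service 100LL price per airport and constructs the large-chain and self-service dummies described above. The file and column names are hypothetical, and the 25-operation chain threshold is taken from the definition above; this is our illustration of the data construction, not the actual analysis files or code.

```python
import pandas as pd

# Hypothetical FBO-level records, one row per FBO; column names are illustrative.
# Assumed columns: airport_id, fbo_owner, price_100ll_full, price_100ll_self (NaN if not offered)
fbos = pd.read_csv("fbo_prices.csv")

# Flag large-chain owners: those with FBO operations at 25 or more airports in the dataset.
airports_per_owner = fbos.groupby("fbo_owner")["airport_id"].nunique()
large_chains = set(airports_per_owner[airports_per_owner >= 25].index)
fbos["is_large_chain"] = fbos["fbo_owner"].isin(large_chains)

# Collapse to one row per airport.
airports = fbos.groupby("airport_id").agg(
    avg_price_100ll_full=("price_100ll_full", "mean"),  # unweighted average across FBOs
    n_fbos_100ll=("price_100ll_full", "count"),          # FBOs posting a full-service price
    has_large_chain=("is_large_chain", "max"),           # 1 if any large-chain FBO on the field
    has_self_serve=("price_100ll_self", lambda s: s.notna().any()),  # self-service 100LL dummy
).reset_index()

# On-airport competition dummy: more than one FBO selling full-service 100LL.
airports["multi_fbo_100ll"] = (airports["n_fbos_100ll"] > 1).astype(int)
```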
In many cases, only one FBO is available at an airport and provides both self-service and full-service 100LL. The variable is derived from the data on aviation fuel prices. Degree of competition among FBOs. Economic theory suggests that market prices for a product will be lower when more firms are selling a product, all else equal. We examined on-airport competition among FBOs for both fuels, and for 100LL, we also developed a variable to account for competition at nearby airports. The number of on-airport FBOs selling a given fuel. The most immediate and likely relevant competition among FBOs would occur at a given airport. To examine the correlation between on-airport competition and aviation fuel prices, we used two alternative measures: (1) the number of FBOs selling the fuel at each airport and (2) a dummy variable that equals 1 for airports where more than one FBO sells the fuel and 0 otherwise. These competition measures were derived from the data on aviation fuel prices. Availability of alternative FBOs at airports in the vicinity. Because stakeholders we interviewed said that pilots using 100LL may consider using nearby airports where that fuel is less expensive rather than their intended destination airport, we examined whether the availability of FBO services at nearby airports correlated with 100LL prices. Specifically, we counted the number of different FBOs selling 100LL at airports within a 30-mile radius of each airport where 100LL is sold. We derived this measure of competition from nearby airports by combining geospatial data for each airport with information from our data on aviation fuel prices. <7. Base-Case and Alternative Model Results> As noted, we ran the fuel-pricing model for both 100LL and Jet A aviation fuels. Table 4 provides descriptive statistics for all of the variables included in the models. We report regression results for several specifications in tables 5 through 9. Specifically, these tables provide the extent and direction (plus or minus) of the estimated correlation of each of the independent variables with aviation fuel prices. We also indicate whether each estimated correlation is statistically different from zero. The per-gallon price of aviation fuel (100LL and Jet A), the dependent variable in our model, is measured in dollars and cents. Some of the independent variables are measured in levels; for example, annual airport operations are measured in tens of thousands, and the length of the longest runway in thousands of feet. For these variables, the regression model results indicate the estimated correlation of a one-unit increase in the level of the independent variable with the price of aviation fuel. For example, as shown in table 5, an increase in runway length of 1,000 feet is associated with an increase in the price of both 100LL and Jet A of about 8 cents, and this estimated correlation is statistically different from zero at the 1 percent level. The model also includes some dummy variables, that is, variables that take a value of either 1 or 0, depending on whether a specific attribute does or does not apply. For a dummy variable, the estimated correlation is interpreted as the effect of the attribute on the per-gallon fuel price. Based on the findings in table 5, being located on the East Coast is associated with an increase in the price per gallon of 100LL fuel of about 22 cents. This correlation was also found to be statistically different from zero.
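For reference, one way to write the base airport-level specification for 100LL implied by the variable definitions above is shown below. This is our simplified rendering, intended only to make the coefficient interpretation concrete; it is not the exact estimating equation used in the analysis, and some specifications add the count of FBOs at airports within 30 miles.

\[
\begin{aligned}
Price_{a} = \; & \beta_{0} + \beta_{1}\,Operations_{a} + \beta_{2}\,Runway_{a} + \beta_{3}\,Income_{c(a)} + \beta_{4}\,Income_{c(a)}^{2} + \beta_{5}\,Population_{c(a)} \\
& + \gamma_{1}\,EastCoast_{a} + \gamma_{2}\,LargeChain_{a} + \gamma_{3}\,SelfServe_{a} + \gamma_{4}\,MultiFBO_{a} + \varepsilon_{a},
\end{aligned}
\]

where a indexes airports and c(a) is the county in which airport a is located. Each coefficient on a dummy variable is read as the estimated per-gallon price difference associated with the attribute being present; the East Coast differential of about 22 cents discussed above is an example of such a coefficient.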
As another example, the model specification shown in table 5 uses a dummy variable to indicate the presence of competition at an airport: the variable equals 1 for airports that are served by more than one FBO and 0 for airports that are served by only one FBO. Results in table 5 indicate that the price per gallon of Jet A fuel is about 35 cents lower at airports that are served by more than one FBO than at airports with only one FBO. Appendix III: GAO Contact and Staff Acknowledgments <8. GAO Contact> <9. Staff Acknowledgments> In addition to the individual named above, Cathy Cowell (Assistant Director); Nick Nadarski (Analyst-in-Charge); Amy Abramowitz; Dave Hooper; Christopher Jones; Ned Malone; Malika Rice; Ardith Spence; and Michelle Weathers made key contributions to this report.
Since 2007, the FAA has provided more than $37 billion in grants to airports to fund capital development and is responsible for ensuring compliance with requirements airports assume when they accept these grants. One such requirement is that the airports provide users equal access to airport services such as fueling and parking. Recently, an industry group and pilots raised concerns about the transparency and reasonableness of prices charged for these and other services at airports.
GAO was asked to examine FBOs' pricing and FAA's oversight of related airport grant assurances. This report examines: (1) the transparency of FBO prices, (2) the factors that influence prices, and (3) the extent to which FAA ensures compliance with federal airport grant assurances related to FBO activities.
GAO analyzed FAA data related to complaints from 2013 through 2018 and reviewed relevant literature, key laws and regulations, and program documentation. GAO developed a statistical model to analyze variation in fuel prices across airports in the contiguous United States. GAO interviewed FAA compliance staff at headquarters and all regional offices, as well as a non-probability selection of stakeholders.
What GAO Found
Fixed base operators (FBO) at airports (see figure) offer a variety of services to pilots and passengers. While anyone can view fuel prices offered by FBOs online, other service fees, such as for aircraft parking, can vary by type of aircraft and are not always available online, although they can be obtained by calling the FBO. Recently, industry groups developed the "Know Before You Go" campaign that calls for greater transparency of FBO prices. Some of the FBOs GAO interviewed list their fees online; however, others do not.
Stakeholders GAO interviewed, including general aviation pilots, airports, FBOs, and industry groups, said FBOs' costs to build and maintain facilities (such as hangars and fueling facilities), as well as operating expenses such as labor and fuel, influence their prices. Stakeholders also said that demand for FBOs' services can influence prices, such as when seasonal demand affects operations at an airport near a ski resort. Finally, they also said that competition affects FBOs' prices. GAO's statistical model confirmed a correlation between many cost and demand factors and aviation fuel prices and found higher prices at airports with higher costs and demand. This model also found that on-airport competition is associated with lower prices at the country's busiest airports: prices for aviation fuels were lower at such airports with more than one FBO. However, not all airports can support more than one FBO due to, for example, the amount of business each gets.
Airports receiving Federal Aviation Administration (FAA) grants must meet "grant assurances," such as charging reasonable and not unjustly discriminatory prices for services, including prices charged by FBOs. FAA officials said FAA oversight relies on (1) airports' consent to adhere to grant assurances; (2) training and outreach; and (3) complaints. Since 2013, in complaints received by FAA, GAO found few complaints about FBOs' prices. GAO found each regional office independently records additional inquiries. FAA is moving to collect regional inquiries centrally, and by 2020 that step may allow FAA to stay abreast of apparent nationwide trends or issues with any grant assurance concerns.
Since 2001, GAO has included strategic human capital management as a government-wide high-risk area. More recently, we found that having the right workforce mix with the right skill sets is critical to achieving DOD s mission, and that it is important for DOD, as part of its strategic workforce planning, to conduct gap analyses of its critical skills and competencies. Strategic workforce planning an integral part of human capital management is an iterative, systematic process that helps organizations determine if they have staff with the necessary skills and competencies to accomplish their strategic goals. As shown in table 2, many DOD offices play key roles in strategic workforce planning activities. <2. DOD Lacks Comprehensive Data on Its Space Acquisition Workforce, but Information Indicates That It Includes at Least 8,000 Personnel> DOD does not have comprehensive information about its space acquisition workforce including the size, mix, and location of this workforce. DOD does not have this information because, among other things, DOD has not clearly identified its space programs, and its workforce data systems are not configured to identify space acquisition personnel. In the absence of comprehensive DOD data, we sought to obtain an understanding of the extent of this workforce. We aggregated data from individual DOD organizations and estimate that at least 8,000 military, civilian, contractor, and FFRDC personnel were working on space acquisitions in multiple locations across the United States at the end of 2017. While this information represents only a snapshot in time, it provides insight into the extent of the space acquisition workforce. Given DOD s recent decision to stand up a United States Space Command and to establish a consolidated Space Development Agency in 2019, along with the President s directive for DOD to submit a legislative proposal for a United States Space Force, having knowledge about which personnel are involved with military space acquisitions and where these personnel are located will be important to DOD s planning efforts. <2.1. DOD Does Not Collect and Maintain Comprehensive Information on the Space Acquisition Workforce> DOD collects data on its acquisition workforce, but does not collect and maintain comprehensive and complete data on the size, mix, and location of the military, civilian, contractor, and FFRDC personnel working on space acquisitions. According to the military services Directors of Acquisition Career Management, DOD manages its acquisition workforce by career field, such as program management and engineering, and not by the type of product being acquired, such as space systems. They told us that, in their view, the acquisition skills needed for an acquisition program such as those for program management, engineering, and contracting are largely the same regardless of the product type. However, officials acknowledged that it takes some time for personnel to learn the nuances of acquiring a specific type of product. We identified three factors that hinder DOD s ability to collect comprehensive data on its space acquisition workforce. Together, they impede DOD from maintaining a complete and accurate understanding of the size, mix, and location of its space acquisition workforce. DOD does not maintain a complete list of its space acquisition programs. 
Officials from the office of the Assistant Secretary of the Air Force for Acquisition and the service-level acquisition career managers told us that DOD does not maintain a list of the universe of space acquisition programs. In addition, the budget document that DOD submits to Congress specific to space programs, which could possibly serve as an alternative source of such information, identifies programs that have large amounts of funding by name, but aggregates information for smaller programs without identifying them individually. While DOD does not maintain a complete list of space acquisition programs, during the course of our review we found that the military services were generally able to identify space acquisition programs. DOD does have a definition of space systems. Specifically, according to a DOD Directive, space systems include all systems related to making a space capability operational that is programs acquiring satellites, satellite ground systems (including satellite control and data processing), receivers/user segments (including terminals and radios), and launch systems but specifies that terminals that are embedded as part of a platform (i.e. aircraft, ship, or tank) are excluded. However, DOD officials had difficulty identifying some programs, particularly those in the user segment. For example, the Air Force s Space Fence program, which is developing ground radar as a part of the space surveillance network that detects and tracks space objects, is included as a space program in DOD s budget documents. Officials from the Program Executive Office that staffs personnel to the program initially told us they did not consider it a space program since it is a series of ground-based radars. They subsequently determined that it is a space program since the radar will track space objects and provide data for space situational awareness. DOD data systems are not currently configured to identify space acquisition personnel. We examined three data sources that have information on the different personnel groups in the acquisition workforce, and found that none of them can identify space acquisition personnel. The Office of Human Capital Initiatives within the Office of the Under Secretary of Defense for Acquisition and Sustainment uses the Data Mart system to track the education, experience, and training of military and civilian acquisition-coded personnel working in the 15 acquisition functional career fields shown in table 1. DOD has taken periodic steps to enhance the data captured in this system. For example, in 2009 DOD began tracking whether acquisition personnel in the business career field were working on financial management or cost estimating. In 2014, DOD started to track personnel with expertise in contracting with small businesses, and expanded its efforts to track personnel with expertise in international acquisitions. However, this system does not currently identify personnel staffed to or supporting space acquisitions or any other type of product acquisition. The Office of the Under Secretary of Defense for Personnel and Readiness tracks contractor data using the Enterprise-wide Contractor Manpower Reporting Application system to provide DOD management information on contracted services obtained by each military service and defense agency. The system includes data on the number of hours of service each contractor provides to the government, which could be used to approximate the number of contractor personnel used to perform the work. 
However, the system does not track the type of acquisition programs being supported, such as space acquisition programs. In addition, the data are self-reported by service contractors, and concerns exist regarding potential underreporting. For example, we reported in March 2018 that the military services estimated that the Enterprise-wide Contractor Manpower Reporting Application included fiscal year 2016 contractor data for 80 percent of Army contracts and 75 percent of Navy contracts; the percentage of Air Force contracts was unknown. The Director of Laboratories and Personnel within the Office of the Under Secretary of Defense for Research and Engineering tracks information on FFRDCs, such as the staff years of technical effort provided each year, to ensure that DOD stays within its congressionally mandated limit. Each FFRDC sponsor organization provides an annual report of its staff years of technical effort and funding to DOD. However, DOD officials told us that sponsoring organizations do not identify what type of acquisition program their respective FFRDC personnel support, such as space acquisition programs. Personnel supporting space acquisitions are dispersed across a variety of organizations and may also support non-space programs. Each of the military services we reviewed has program executive offices, research labs, or other organizations that support both space and non-space acquisitions. DOD officials told us that functional career field leaders in each of the organizations, such as the engineering or the contracting directorates, assign personnel to space or non-space programs on an as-needed basis, which could make it difficult for DOD to determine which and how many personnel should be included in the space acquisition workforce. Five of the 10 space acquisition programs we reviewed (1 Air Force, 1 Navy, and 3 Army) were managed by organizations that were primarily responsible for developing and acquiring non-space programs. Air Force. The Space Fence program is staffed by the Air Force Life Cycle Management Center's Program Executive Office for Battle Management. The Center primarily supports non-space programs, such as fighters, bombers, tankers, and presidential aircraft. Navy. The Mobile User Objective System is managed by the Space and Naval Warfare Systems Command, which primarily manages non-space programs that provide enterprise information system and command, control, communications, computers, and intelligence capabilities. Army. The Joint Tactical Ground Station program is managed by the Army's Program Executive Office for Missiles and Space. The office primarily manages a variety of missile programs, such as close combat, cruise, and integrated air and missile defense programs, that are non-space programs. Similarly, the Secure, Mobile, Anti-Jam, Reliable, Tactical-Terminal and the Transportable Tactical Command Communications programs are managed by the Army's Program Executive Office for Command, Control, Communications-Tactical. This office primarily manages a variety of information systems to provide tactical communication for the service, which may or may not be space programs. Officials told us that the three Army programs we reviewed were also supported by other, separate Army organizations, such as the Army Contracting Command for contracting support; the Army's Aviation and Missile Research, Development, and Engineering Center for engineering support; and the Army Materiel Command for logistics support.
These organizations provide support to space and non-space programs on an as-needed basis. The Administration, Congress, and DOD are discussing a variety of approaches for strengthening the government s space operations, including the establishment of one or more new organizations. In June 2018 the President directed DOD to begin the process of establishing a new military branch focused on space that is separate from and equal to the other military departments, Army, Navy, and Air Force. In an August 2018 report to the Congress on the organizational and management structure needed for the national space components, DOD described the immediate steps that it plans to take to implement the President s direction while waiting for Congressional authorization to create the new military branch. These steps include establishing a new United States Space Command to further its space warfighting capabilities, as well as developing plans to establish a consolidated Space Development Agency to rapidly develop and field next generation space capabilities. DOD has described the general areas of focus planned for these new organizations; however, many specifics are still to be determined. DOD has announced that a committee of senior DOD leaders is expected to identify which of the current space activities will be consolidated into these new space organizations. In addition, the President s February 2019 Space Policy Directive now requires DOD to submit a legislative proposal to establish a United States Space Force as a new armed service within the Air Force. DOD announced it delivered a legislative proposal to Congress on March 1, 2019. The lack of comprehensive information about DOD s space programs and the acquisition personnel supporting those programs affects DOD s ability to assess gaps in the overall capabilities of its space acquisition workforce and determine whether it has sufficient internal capability and critical knowledge or skills for its space acquisitions. Moreover, it hampers DOD s ability to make decisions related to establishing the United States Space Command, a new Space Development Agency, or potentially the United States Space Force. This includes determining the appropriate number and mix of acquisition personnel that are needed for the new organizations, as well as which military and civilian personnel should be assigned to them. According to federal internal control standards, an agency, such as DOD, should have relevant, reliable, and timely information in order to run and control operations, including managing the workforce. Air Force Director of Acquisition Career Management officials stated that having a process for identifying space acquisitions personnel would be beneficial. As we reported in July 2003, the success of merging personnel during organizational transformations is more likely when the best individuals are selected to meet the skills and competencies needed for the new organization s goals. <2.2. GAO Identified at Least 8,000 Personnel in Over 20 Locations As Part of DOD s Space Acquisition Workforce> In the absence of readily available comprehensive data from DOD, we collected and aggregated data from multiple DOD space organizations and found that at least 8,000 personnel were in the space acquisition workforce at the end of 2017. However, our data set is not complete. For example, the National Reconnaissance Office, which DOD officials told us has a significant number of personnel working on space acquisitions, is not included in our analysis. 
In addition, our count only includes personnel that spent 50 percent or more of their time working on space acquisitions; therefore any personnel who spent less than 50 percent of their time on space acquisitions was not included. Furthermore, it is important to note that our data provide a snapshot of the workforce as of December 31, 2017. According to DOD officials, the size and mix of the workforce can change based on the number of programs and where programs are in the acquisition process. The military and civilian personnel data we collected are expressed as number of people. The contractor and FFRDC personnel data are expressed as full-time equivalents and staff-years of technical effort equivalents, respectively. Size of Workforce: Based on data we collected from multiple DOD space acquisition organizations, at least 8,000 military, civilian, contractor, and FFRDC personnel supported DOD s space acquisitions as of December 31, 2017 (see figure 2). Military and civilian personnel comprised about 64 percent of the total space acquisition workforce, the vast majority of which support Air Force acquisitions. The remaining 36 percent of the workforce is contractor and FFRDC personnel that support DOD s space acquisition activities. The Air Force has the largest number of military and civilian personnel because the Air Force has primarily been responsible for DOD s space acquisitions and develops programs for all four segments of space capability, including launch services for the most critical national security space satellites. The Navy is responsible for systems that provide satellite communications across DOD as well as its user segments, while the Army and other DOD components primarily focus their efforts on developing their user segment systems or other space-related projects. Workforce Mix: Based on data we collected from multiple DOD space acquisition organizations, the mix of military, civilian, contractor, and FFRDC personnel that each military service and agency had supporting their respective space acquisition programs varied considerably (see figure 3). Military and civilian personnel comprised between 54 and 63 percent of the Air Force s, Army s, and Navy s space acquisition workforce and 94 percent of the other DOD components workforces. Contractors and FFRDC personnel made up the remainder of the workforce. The Air Force relies more heavily on FFRDC personnel as a percentage of its workforce than the Army, Navy, and other DOD components. According to Air Force officials, the Space and Missile Systems Center the Air Force s major space acquisition organization has relied heavily on FFRDC support for space engineering and technical expertise since its founding in the 1950s. The Army and Navy primarily rely on contractors for their remaining support. These contractors mainly provide technical expertise, such as engineering services, to support military and civilian personnel. Some contractors also support program management and business and administration activities, such as cost estimating. Figure 4 provides detailed examples of how personnel support two space acquisition programs included in our review. Locations of Workforce: Based on data we collected from multiple DOD space acquisition organizations, space acquisition personnel work at over 20 organizations located across the United States. Figure 5 shows the primary locations of DOD s space acquisition organizations. 
About 45 percent of the overall space acquisition workforce is located at the Air Force Space and Missile Systems Center in Los Angeles, California. The Army space acquisition workforce is located primarily at Redstone Arsenal in Huntsville, Alabama, and Aberdeen Proving Ground, Maryland. The Navy space acquisition workforce is located at the Space and Naval Warfare Systems Command in San Diego, California, and a few other locations. <3. DOD Faces Challenges Hiring, Assigning, and Retaining Qualified Personnel to Work on Space Acquisition Programs, but Is Taking Steps to Address These Challenges> DOD faces several challenges related to hiring, assigning, and retaining qualified personnel to work on space acquisition programs, similar to the challenges it faces more generally with the acquisition workforce. However, some of the challenges are magnified because almost half of the military and civilian space acquisition workforce is concentrated in Los Angeles, California, which has a higher cost of living than many other areas in the United States, and where competition with private industry for personnel with space acquisition experience is high. DOD is taking steps to address these challenges where possible. <3.1. DOD Faces Challenges Hiring Qualified Candidates, but Is Taking Steps to Address Them> DOD officials told us that one of the primary workforce challenges DOD faces is its ability to hire qualified people to work on space acquisitions. They said that DOD is competing with private industry and other federal agencies for top talent in several acquisition career fields. Attracting Candidates with Technical Expertise. DOD officials stated that it is particularly difficult to attract people with certain technical expertise, such as cybersecurity and systems engineering, because they are in high demand in both government and private industry. Air Force officials said the government cannot match the salaries offered by industry. For example, the Launch and Test Range System program office told us that a shortage of trained and qualified cybersecurity personnel exists both within the government and industry. Our prior work has described how maintaining cybersecurity personnel is a challenge government-wide, and that, according to DOD officials, even when DOD cybersecurity positions are filled, it may not necessarily be with the right expertise since it is a specialized area. Hiring in Areas with Higher Costs of Living. Air Force officials at the Space and Missile Systems Center said that hiring challenges are further exacerbated for space acquisition organizations that are located in areas with higher costs of living. They said, for example, that prospective employees often visit the center in Los Angeles, California, and, after assessing the local cost of living, decide not to accept a job offer. DOD is taking steps to address its hiring challenges. To address difficulties in obtaining personnel with sufficient technical experience, some officials told us that they typically hire the best candidate available who may lack some of the desired technical skills and provide them with on-the-job and formal training to increase their technical knowledge and skills. To better compete with higher salaries offered by other potential employers, several officials told us they offer tuition reimbursement as a recruiting incentive. 
Air Force officials told us that in areas with higher costs of living they focus their recruiting efforts on the local area because local candidates already understand the higher cost-of-living challenges for the area and are more likely to have support systems in place to manage such costs. <3.2. DOD Faces Challenges Assigning Experienced Personnel to Space Acquisition Programs, but Is Taking Steps to Address Them> Beyond the concerns expressed about hiring personnel, Air Force Space and Missile Systems Center officials expressed concerns that some functional areas within the space acquisition workforce face challenges assigning experienced personnel (personnel with the appropriate knowledge and skill set to perform the work) who are already hired to space acquisition programs. For example, contracting career field officials at the center noted that the space acquisition workforce does not have enough mid-level personnel who understand the detailed steps and documentation required in the acquisition process. In particular, the Air Force Space and Missile Systems Center reported that at the end of January 2018, the number of mid-level civilian and military personnel working in the contracting functional career field was 50 fewer than the number authorized. According to contracting career field officials at the center, a large number of mid-level procurement contracting officer positions were vacant, and senior procurement managers were picking up the corresponding workloads rather than performing their staff development and strategic planning tasks. Furthermore, officials from the Air Force's Space and Missile Systems Center program management functional office also expressed concern that the bulk of the military personnel assigned to the program management positions were more junior in rank than the Center was authorized by the Air Force to obtain. Figure 6 shows the level of the Air Force Space and Missile Systems Center personnel that filled its program management positions as of January 2018. Junior officers typically have less experience managing acquisition programs than more senior officers. The military services are taking steps to manage the effects of military and civilian personnel skills and experience gaps, to some degree, by having contractor personnel perform the work. For example, the Air Force Space and Missile Systems Center's contracting functional office used four contractor personnel to support its pricing work.
Officials also stated that some personnel leave after obtaining security clearances required to perform their work because private companies working on government contracts pay more to qualified individuals with clearances. Officials from the Air Force Space and Missile Systems Center and Army Space and Missile Defense Command also told us that they have difficulty retaining engineers. They said some engineers have left because they were not satisfied with being used as generalists to oversee the work of FFRDC or contractor personnel, rather than being used to perform hands-on engineering work. Officials also stated that this situation is not unique to space acquisitions; government engineers seldom get to design, develop, or build systems, as the hands-on engineering work is primarily performed by prime contractors. Air Force Space and Missile Systems Center officials said they are trying to help the government engineers understand how to influence decisions and be more effective in working as part of the space engineering acquisition team, which would include military, civilian, contractor, and FFRDC personnel. Officials from various functional career fields at these Air Force and Army locations noted that limited promotion opportunities for civilian personnel in space acquisitions also cause retention challenges. For example, the Air Force Space and Missile Systems Center has 53 management (General Schedule 15) positions; however, Center officials told us that the turnover rate for these higher-level positions is low. Officials reported that some mid-level program management personnel seek and accept promotions at other non-space acquisition offices or in other geographical locations that have more promotion opportunities. Some Air Force Space and Missile Systems Center and Army officials noted that retention incentives are used to help retain staff. These include student loan repayments and recognition incentives, such as monetary or time-off awards tied to performance. Air Force Space and Missile Systems Center officials also said that they are working to realign current civilian acquisition personnel at the center under the Civilian Acquisition Workforce Demonstration project, which they believe will help attract, retain, and motivate high-quality civilian personnel for the acquisition workforce. <4. Conclusions> DOD space systems and the personnel who work to acquire them remain critical components of national security and key resources. As DOD takes steps toward establishing the United States Space Command, its Space Development Agency, and potentially the United States Space Force, it will be essential to understand the size, mix, and location of the space acquisition workforce. However, DOD does not collect and maintain this type of comprehensive data on its space acquisition workforce. Although we were able to pull together information on the space acquisition workforce, the data represent a snapshot of the workforce at one point in time, and are not complete since acquisition personnel working on National Reconnaissance Office space programs and those who spent less than 50 percent of their time working on space acquisitions were not included. Taking steps to identify and routinely track accurate information on space acquisition programs and the organizations and personnel that support those programs would provide several benefits to DOD.
In particular, it would better position DOD to assess whether it has the appropriate number and mix of military, civilian, contractor, and FFRDC personnel working on space acquisitions and to make adjustments if necessary. Further, it would better position DOD to make decisions on which acquisition personnel will support or transition into the United States Space Command or the new Space Development Agency, since DOD has not clearly defined what acquisition functions may or may not be handled by these new organizations. Finally, comprehensive data on the space acquisition workforce would also be beneficial to support DOD's development of its legislative proposal regarding the establishment of the United States Space Force. <5. Recommendations for Executive Action> We are making the following two recommendations to DOD: The Secretary of Defense should direct the military services and other DOD components to identify the universe of space acquisition programs, as well as the various organizations that support these programs, and report this information to Congress. In doing so, DOD should implement procedures to maintain and periodically update the list. (Recommendation 1) The Under Secretary of Defense for Acquisition and Sustainment, in conjunction with the Under Secretaries of Defense for Research and Development and for Personnel and Readiness, should collect and maintain data on acquisition-coded military and civilian personnel that support space acquisition programs and related activities (including those that may do so less than full time), as well as track the contractor and FFRDC workforce's general levels of effort supporting space acquisition programs and related activities and the total resources annually committed to perform that work. (Recommendation 2) <6. Agency Comments and Our Evaluation> We provided a draft of this report to DOD for review and comment. DOD provided written comments (reproduced in appendix II) on our draft report. In those comments, DOD concurred with our first recommendation to identify the universe of space acquisition programs, as well as the various organizations that support these programs, and report this information to Congress. DOD did not concur with our draft second recommendation to collect and maintain data on the space acquisition workforce. DOD stated that the manner in which personnel data are captured in its human resource and development systems makes it difficult to identify, collect, and maintain data on the military and civilian personnel working on space acquisition programs. Further, DOD raised concerns over contractual limitations on collecting and maintaining data on contractor and FFRDC personnel supporting space acquisitions. In light of these concerns, we made changes to the draft recommendation. We believe the language of our final recommendation will better facilitate implementation by DOD. With regard to our second recommendation, we continue to believe that taking steps to identify military and civilian personnel supporting space acquisition programs would support DOD's strategic workforce planning, particularly considering DOD's recent legislative proposal for establishing the United States Space Force.
For example, we acknowledge that the current personnel data system used to track military and civilian acquisition personnel has limitations, but we believe taking steps to make minor modifications to the system to facilitate identifying and routinely tracking accurate information on these two segments of the space acquisition workforce would provide several benefits to DOD. Most importantly, it would help DOD make decisions on how many and which military and civilian acquisition personnel should be assigned to the new space organizations, namely the Space Development Agency, the United States Space Command, and the United States Space Force. With regard to DOD's comment that our recommendations do not recognize that DOD personnel have been shifted into and out of space acquisition programs, we recognize that acquisition personnel have been moved across programs and support space and non-space acquisitions. However, we continue to believe that DOD should have better information on military and civilian acquisition personnel. In particular, knowing which personnel have space acquisition backgrounds could enhance the productivity and effectiveness of DOD's space acquisition efforts. As a result, we did not make a change to our second recommendation as it relates to military and civilian space acquisition personnel. However, in consideration of the concerns raised by DOD about tracking data on contractor and FFRDC personnel who are supporting space acquisition activities, we modified our second recommendation. It was not our intention to have DOD undertake significant modifications to the relevant contracts to obtain data on these segments of the space acquisition workforce. Nevertheless, understanding the extent to which space acquisition programs rely on contractor and FFRDC personnel for support could be useful in helping DOD determine the right number and mix of military and civilian personnel needed in the new space organizations. As a result, we modified the language of our second recommendation to focus on tracking the contractor and FFRDC workforce's general levels of effort supporting space acquisition activities and the resources spent to obtain this assistance, rather than, as we stated in our draft recommendation, tracking the individuals who perform such work. However, we continue to believe that collecting and maintaining more robust data on that workforce will support DOD's planning efforts and better inform Congress. DOD also expressed concern that our report may be equating statements of officials at the staff and operational levels to military service- and DOD-level officials. We reviewed statements attributed to DOD officials throughout our report. Where necessary, we clarified attributions to better reflect the appropriate level of the officials with whom we discussed the corresponding information during our review. DOD also provided technical comments on our draft report, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees; the Acting Secretary of Defense; and the Secretaries of the Air Force, Army, and Navy. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or ludwigsonj@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made major contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology A House Report related to the National Defense Authorization Act of Fiscal Year 2017 contained a provision for GAO to review the current state of the Department of Defense s (DOD) space systems acquisition workforce. This report examines (1) what DOD knows about the size, mix, and location of its space acquisition workforce, and (2) the challenges, if any, DOD faces in hiring, staffing, and retaining space acquisition workforce personnel. For the purpose of this report, we defined the space acquisition workforce broadly to include military, civilian, contractor, and Federally Funded Research and Development Center (FFRDC) personnel working on space acquisition programs and related efforts. To determine what DOD knows about the size, mix, and location of the space acquisition workforce, we met with officials from DOD s Office of Human Capital Initiatives, the Air Force, the Army, the Navy, and 4th Estate s Director of Acquisition Career Management to obtain information that is collected on the space acquisition workforce. We were told by each of these officials that DOD does not have a group of personnel officially designated as the space acquisition workforce. They stated that DOD has separate mechanisms for collecting military, civilian, contractor, and FFRDC workforce data and that none of these systems contained the level of granularity we would need to identify all personnel working on space acquisitions. Specifically, the sources we discussed were DOD s Data Mart system, a central repository for military and civilian acquisition workforce data, as well as workforce data systems maintained by DOD components that feed into the Data Mart system; the Enterprise-wide Contractor Manpower Reporting Application system for contractor services data; and FFRDC data maintained by military components. We collected data on the size, mix, and location of the space acquisition workforce from the space organizations performing space acquisition activities. The Directors of Acquisition Career Management for the military services and the 4th Estate defense agencies provided a list of organizations that could be working on space acquisitions based on DOD s 2017 space system definition, which states that a space system includes all areas related to making a space capability operational that is programs acquiring satellites, satellite ground systems (including satellite control and data processing), receivers/user segments (including terminals and radios), and launch systems. It also specifies that terminals are included unless they are embedded as part of a platform (i.e., aircraft, ship, or tank). We contacted each of the identified space organizations to verify that they had personnel working on space acquisitions based on this definition. Three of the organizations we originally contacted stated their organizations did not work on any space acquisition programs based on the definition. We did not include these organizations in our data gathering efforts. We also identified other organizations that worked on space acquisitions through discussions with acquisition management officials from the Army and included these organizations in our data gathering efforts. We asked each space organization to identify the number of military and civilian personnel working on space acquisition activities for 50 percent or more of their work time as of December 31, 2017. 
We used the threshold of 50 percent or more of the time to be consistent with the DOD definition of the acquisition workforce, which requires personnel to work 50 percent or more of their work time on acquisition activities to be counted as part of that workforce. DOD officials could not identify the number of contractor and FFRDC personnel working on space acquisitions. Therefore, for contractor and FFRDC personnel, we asked for the number of full-time equivalencies and staff-years of technical effort equivalencies, respectively, provided as support to space acquisitions. We requested that the personnel data be categorized by acquisition career field. We collected data from each DOD component as follows: The Air Force Director of Acquisition Career Management provided military and civilian workforce data from the Air Force s Acquisition Career Management System that feeds into Data Mart for all Air Force organizations where the entire organization works on space acquisitions. These organizations were the Air Force Space Command and the Networks Family of Advanced Beyond Line of Sight Terminals Division within the Air Force Life Cycle Management Center s Program Executive Office for Command, Control, Communications, Intelligence and Networks. The Deputy Director identified other space programs that are managed by the Air Force Life Cycle Management Center, but could not identify which military and civilian personnel were supporting those programs because the workforce data system is not configured to identify personnel by product types. In addition, the Deputy Director could not provide data on the number of contractor or FFRDC personnel working on any space acquisition program. We contacted these organizations directly to collect additional military, civilian, contractor and FFRDC workforce data: Air Force Space Command; Air Force Space and Missile Systems Center; Program Executive Office Command, Control, Communications, Program Executive Office Battle Management; and Air Force Research Laboratory. These organizations provided personnel data from their respective manpower sources, such as personnel data systems or manning documents. To assess the reliability of the data, we discussed the data and sources used to compile the data with Air Force officials; reviewed the data for logical inconsistencies; compared the data received from the Air Force workforce data system to data from Air Force Space and Missile Systems Center briefing documents; and compared relevant data received from individual space organizations with data from the Air Force Research Laboratory Space Vehicle Directorate. We collected military, civilian, contractor and FFRDC workforce data directly from the following Army organizations performing space acquisition activities: Army Space and Missile Defense Command; Program Executive Office Missiles and Space; Program Executive Office Command, Control and Program Executive Office Intelligence, Electronic Warfare and Sensors; Communications-Electronics Research, Development and U.S. Army Aviation and Missile Research Development and Army Contracting Command. These organizations provided personnel data from their respective manpower sources, such as personnel data systems or manning documents. To assess data reliability, we discussed the data and sources used to compile the data with Army officials, and reviewed the data for logical inconsistencies. 
We collected military, civilian, contractor and FFRDC workforce data directly from the following Navy organizations: Space and Naval Warfare Systems Command; Program Executive Office Space Systems; Space and Naval Warfare Systems Center Pacific; and Space and Naval Warfare Systems Center Atlantic. These organizations provided personnel data from their respective manpower sources, such as personnel data systems or manning documents. The Naval Research Laboratory and the Navy s Program Executive Office for Command, Control, Communications, Computers and Intelligence were originally identified as performing space acquisition activities; however, officials stated they did not have any personnel working on space acquisition activities for at least 50 percent of their time. To assess data reliability, we discussed the data and sources used to compile the data with Navy officials, and reviewed the data for logical inconsistencies. We collected military, civilian, contractor, and FFRDC workforce data directly from: Defense Contract Management Agency; and Missile Defense Agency. To assess data reliability, we obtained information on the data and sources used to compile the data with the agencies officials and reviewed the data for logical inconsistencies. The Defense Advanced Research Projects Agency was originally identified as performing space acquisition activities; however, officials stated they did not have any personnel working on space acquisition activities for at least 50 percent of their time. We determined the workforce data were sufficiently reliable to provide estimates of the general size and mix of the space acquisition workforce. To assess any challenges DOD faces in hiring, staffing, and retaining its space acquisition workforce, we interviewed officials from multiple levels within DOD and the Air Force, Army and Navy. In addition to discussing the challenges with the majority of the military service space organizations listed above, we also met with the following DOD organizations: Office of Cost Assessment and Program Evaluation; and Defense Acquisition University. To gather additional insight into the challenges faced at the program office level, we also interviewed officials from a non-generalizable sample of 10 space acquisition programs from the Air Force, Army, and Navy. The selected programs included different types of space acquisitions such as satellites and launch systems with a range of dollar values and phases of acquisition. During our review, the Air Force and Army had other space acquisition programs in addition to the ones we selected, whereas the Navy had one space acquisition program according to service officials. The selected programs from each military service included: Advanced Extremely High Frequency (space segment) Evolved Expendable Launch Vehicle (launch segment) Launch and Test Range System (launch segment) Protected Tactical Enterprise Service (ground segment) Space Fence (ground segment) United States Nuclear Detonation Detection System (ground segment) Joint Tactical Ground Station (ground system) Secure, Mobile, Anti-Jam, Reliable, Tactical Terminal (user segment) Transportable Tactical Command Communications (user segment) Mobile User Objective System (space segment) We also reviewed prior DOD and other space acquisition studies, including reports from the Defense Science Board, Institute for Defense Analyses, Office of Management and Budget, and the RAND Corporation. 
We conducted this performance audit from November 2017 to March 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of Defense Appendix III: GAO Contact and Staff Acknowledgements <7. GAO Contact> Jon Ludwigson (202) 512-4841 or ludwigsonj@gao.gov. <8. Staff Acknowledgements> In addition to the contact named above, Cheryl K. Andrew (Assistant Director), Peter W. Anderson, R. Eli DeVan, Lorraine R. Ettaro, Lisa L. Fisher, Miranda Riemer, Anne Louise Taylor, and Lauren M. Wright made key contributions to this report.
Why GAO Did This Study
DOD plans to spend about $65 billion from fiscal year 2019 to 2023 on space acquisition programs—including satellites, launch vehicles, ground components, and user equipment. DOD's space acquisition personnel perform a variety of activities, such as preparing and reviewing acquisition documents, to manage or oversee programs that develop or procure space capabilities. DOD recently announced it plans to establish a new Space Development Agency and a United States Space Command.
A House Report accompanying a bill for the 2017 National Defense Authorization Act contained a provision for GAO to review DOD's space acquisition workforce. This report examines, among other things, what is known about the size, mix, and location of that workforce. GAO collected data from DOD's acquisition workforce data systems and multiple space acquisition organizations. GAO interviewed officials from these organizations and from a non-generalizable sample of 10 space acquisition programs, representing a range of dollar values and stages in the acquisition process.
What GAO Found
The Department of Defense (DOD) does not routinely monitor the size, mix, and location of its space acquisition workforce. However, data GAO collected and aggregated from multiple DOD space acquisition organizations show that at least 8,000 personnel in multiple locations nationwide were working on space acquisition activities at the end of 2017 (see figure). Also as shown, military and civilian personnel comprise the majority of the overall workforce, while contractor and Federally Funded Research and Development Center personnel also provide support.
Several factors hinder DOD's ability to collect data needed for a comprehensive view of its space acquisition workforce:
DOD does not maintain a complete list of its space acquisition programs;
DOD's workforce data systems are not configured to identify personnel working on space acquisition activities; and
DOD space acquisition personnel are dispersed across organizations and some personnel support both space and non-space programs.
Without complete and accurate data, DOD cannot assess gaps in the overall capabilities of the space acquisition workforce. Identifying space programs and collecting such data would also better position DOD to ensure that the appropriate space acquisition personnel are assigned to the new Space Development Agency and the United States Space Command. Finally, comprehensive data on the space acquisition workforce would also be beneficial to support DOD's efforts related to its recent legislative proposal regarding the establishment of the United States Space Force.
What GAO Recommends
GAO recommends that DOD (1) identify the universe of its space acquisition programs and the organizations that support them and (2) collect and maintain data on the workforce that supports these programs. DOD agreed with the first recommendation, but not the second. GAO revised the second recommendation to address DOD's concerns.
At airports with dedicated TSA Pre✓® lanes, expedited screening includes walk-through metal detector screening and X-ray screening of the passenger's accessible property, and travelers do not have to remove their belts, shoes, or light outerwear, or remove items such as laptops from carry-on baggage. Checked Baggage Screening. TSA procedures for checked baggage screening establish a process intended to deter, detect, and prevent the transport of any unauthorized explosive, incendiary, or weapon aboard an aircraft. Checked baggage screening generally entails the use of explosives detection systems, which use X-rays and other technology to automatically measure the physical characteristics of objects in baggage and trigger an alarm when objects that exhibit the physical characteristics of explosives are detected. <1.3. Overview of Inspection and Security Operations Testing Processes> Inspection's tests are intended to identify vulnerabilities related to any aspect of TSA's checkpoint and checked baggage screening systems, to include the procedures for screening, the TSOs who implement these procedures, and the technology for screening (e.g., X-ray machines and advanced imaging technology). Security Operations testing focuses entirely on TSO performance of existing standard operating procedures for checkpoint and checked baggage screening and, unlike Inspection's testing, does not test other aspects of screening, such as the performance of screening equipment. To carry out covert testing, both Inspection and Security Operations create test scenarios that describe the overall intent of the test, the threat item, the method of execution (e.g., an explosive device concealed in a shoe carried through the checkpoint), and other pertinent details. Generally, Security Operations scenarios have tested TSOs' performance of procedures pertaining to one of three different paths travelers must follow to have either their persons or property screened (i.e., screening paths): checkpoint on-person, in which the tester travels through the checkpoint with the threat item concealed on his or her person; checkpoint in-property, in which the tester travels through the checkpoint with the threat item concealed in a carry-on bag; and checked baggage, in which the threat item is concealed in checked baggage. For both offices, covert tests begin when program managers notify an airport's FSD and local law enforcement agency that testing is scheduled to begin. Testers typically pose as passengers and attempt to smuggle a threat object, concealed either on their person or in their property, through one or more layers of the checkpoint or checked baggage screening process (see fig. 1). These layers of screening include the travel document checker and the walk-through metal detector or the advanced imaging technology machine, among others. In general, TSA's covert tests conclude with a meeting between either Inspection or Security Operations staff and the TSOs and their supervisors who were tested to discuss the results. These meetings, known as post-test reviews, allow officials to reinforce actions resulting in test successes, review the correct procedures for any failures, and collect additional data relating to factors contributing to success and failure. In addition, documented test results are reported to local TSA airport officials, so that they may schedule and track TSO participation in the remedial training that is required by law when screeners fail a test.
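To make the test flow just described easier to follow, the sketch below models a single covert test outcome as a simple record: the screening path, whether the threat item was detected, the layer at which it was resolved, and whether remedial training is triggered by a failure. The field names and example values are assumptions for illustration only; they do not reflect TSA's actual test records or data systems.

```python
# Illustrative model of a covert test outcome. All field names and values are
# hypothetical; this is not TSA's actual test-record format.
from dataclasses import dataclass
from typing import Optional

SCREENING_PATHS = ("checkpoint_on_person", "checkpoint_in_property", "checked_baggage")

@dataclass
class CovertTestResult:
    scenario_id: str
    screening_path: str               # one of SCREENING_PATHS
    threat_detected: bool             # True if screening identified and stopped the threat item
    detection_layer: Optional[str]    # e.g., "x_ray" or "pat_down"; None if never detected

    def requires_remedial_training(self) -> bool:
        # Screeners who fail a covert test must receive remedial training,
        # so a failed test (threat not detected) flags that requirement.
        return not self.threat_detected

test = CovertTestResult("FY17-EXAMPLE", "checkpoint_in_property", False, None)
assert test.screening_path in SCREENING_PATHS
print(test.requires_remedial_training())  # True -> local officials schedule training
```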
More broadly, Inspection and Security Operations report test results to certain internal and external stakeholders. Historically, Inspection has reported its test results directly to TSA management to inform executive leadership about the aviation screening system s potential vulnerabilities to new and evolving threats. In addition, Security Operations has reported test results for its prior testing program to the Office of Management and Budget quarterly and has also briefed TSA senior leadership on results periodically. <1.4. Using a Risk-Informed Approach for Covert Testing> DHS policy requires that its components, including TSA, use risk information and analysis to inform decision making. A risk-informed approach helps decision makers identify and evaluate potential risks so that actions can be taken to mitigate those risks. DHS defines risk as a calculation of threat, vulnerability, and consequence. These elements are defined as follows: Threat likelihood is estimated based on intent and capability of an adversary. Vulnerability is a physical feature or operational attribute that renders an entity open to exploitation or susceptible to a given hazard. In calculating risk, vulnerability is based on the likelihood that an attack is successful, given that it is attempted. Consequence refers to the negative effect of an event, incident, or occurrence. According to the 2010 DHS Risk Lexicon, which defines key risk- management terms for DHS agencies and components, risk-based decision making uses the assessment of risk as the primary decision driver, while risk-informed decision making may consider other relevant factors in addition to risk assessment information, for decision making. To guide agency efforts to make risk-based decisions, TSA issues annually its Transportation Sector Security Risk Assessment a report on transportation security that assesses risk by establishing risk scores for various attack scenarios within different transportation sectors, including domestic aviation. These scenarios are continuously refined to reflect evolving threats to the various transportation modes and feedback from subject matter experts. In scoring risk scenarios for the Transportation Sector Security Risk Assessment, TSA considers the three elements of risk (threat likelihood, vulnerability, and consequence). <2. TSA Revised Its Covert Test Processes since 2016 but Is Not Fully Using and Documenting a Risk-Informed Approach for Selecting Test Scenarios> <2.1. Inspection Redesigned Its Covert Test Process to Be More Risk-Informed and Quantitative but Has Not Fully Documented Its Rationales for Selecting Test Scenarios> <2.1.1. Inspection s Redesigned Covert Test Process> In 2016, Inspection redesigned its process to conduct covert tests more consistently across airports, and began using quantitative methods to design tests and analyze results so that its findings might be applied more broadly across airports nationwide. Inspection officials explained that, prior to redesigning their process, Inspection s findings could not be applied more broadly because of how tests were designed and executed. In addition, officials noted that some prior test practices risked diminishing the quality of testing. For example, some testers consistently ran tests at the same airports, increasing the likelihood that they might be recognized by TSOs and compromise the covertness of tests. 
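Before turning to the redesigned processes in more detail, the three risk elements defined in section 1.4 can be made concrete with a small numerical sketch. The multiplicative scoring form and the sample values below are illustrative assumptions only; they do not represent TSA's actual Transportation Sector Security Risk Assessment methodology.

```python
# Notional composite risk score combining the three elements defined in section 1.4.
# The multiplicative form and the example values are assumptions for illustration.
def risk_score(threat_likelihood: float, vulnerability: float, consequence: float) -> float:
    """Return a notional risk score.

    threat_likelihood: estimated likelihood an adversary attempts the scenario
    vulnerability:     likelihood the attack succeeds, given that it is attempted
    consequence:       estimated negative effect if the attack succeeds
    """
    return threat_likelihood * vulnerability * consequence

# Two hypothetical attack scenarios; a comparison like this is one way risk
# information could inform which covert test scenarios to prioritize.
scenario_a = risk_score(threat_likelihood=0.30, vulnerability=0.20, consequence=100.0)
scenario_b = risk_score(threat_likelihood=0.05, vulnerability=0.60, consequence=100.0)
print(round(scenario_a, 2), round(scenario_b, 2))  # 6.0 3.0 -> scenario A ranks higher
```

In a comparison like this, a scenario with lower vulnerability can still rank higher if its threat likelihood is greater, which is one reason decision makers weigh all three elements rather than any single one.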
As part of its new testing effort, Inspection recruited a technical team of employees with expertise in statistics and engineering to enhance the design, execution, analysis, and reporting of its covert tests. Inspection also documented its new covert test process and rationales for key program decisions, including its approach to performing quantitative analysis of test results, in overarching guidance issued in October 2016. These documents set forth a framework for conducting tests that includes the creation of detailed scenarios that specify Inspection s covert test objectives and scope of testing. For example, for one Inspection test scenario conducted in fiscal year 2016, Inspection conducted 280 tests at larger airports to assess whether certain types of assembled explosive devices contained in carry-on luggage could evade detection at the checkpoint. Under new guidance, Inspection s testers may not conduct tests at the same airport within a predetermined period, to limit the potential of being recognized by airport staff. In addition, under its new process, Inspection selects airports for testing so that it may apply its findings more broadly across airports nationwide. Once Inspection testers complete all tests for a given scenario, Inspection develops classified reports containing results of its quantitative analysis (including detection rates for specific threat items) and suggested actions aimed at addressing any identified vulnerabilities. <2.1.2. Inspection Has Not Fully Documented a Risk-Informed Approach for Testing> Inspection uses a risk-informed approach to select locations and scenarios for covert tests, but has not fully documented this approach. According to Inspection officials, to select airport locations for tests, they use a tool to randomly select airports from various regions and of various sizes to ensure appropriate representation. According to our review of the locations Inspection tested in fiscal years 2016 and 2017, Inspection predominantly conducted testing at the larger airports. As previously discussed, this is consistent with a risk-informed approach, as TSA s analysis has shown that larger airports face an increased threat of a terrorist attack. In addition, Inspection officials said that they use a risk-informed approach to select scenarios for their covert tests that takes into consideration all three aspects of a comprehensive risk assessment threat, vulnerability, and consequence. According to officials, Inspection s approach to each of the three components of risk is described below. Efforts to Consider Threats. According to Inspection leadership officials, Inspection has developed close working relationships with key intelligence community agencies to obtain current and specific intelligence information about threats to commercial aviation. Inspection uses this information to create test scenarios involving threat items and attack methods that correspond with the most current threat intelligence. Inspection officials explained that they also consult risk assessments such as the Transportation Sector Security Risk Assessment to help determine which scenarios to test, but do not rely solely on this information. Officials said this is because such assessments can lack specificity about the type and placement of threat items along different screening paths. 
For example, the Transportation Sector Security Risk Assessment may not convey the specific type of device or the mechanism by which an explosive device will be presented at the checkpoint (e.g., in a laptop). Inspection s approach, which uses both current intelligence and risk assessments, is consistent with a risk-informed approach, which allows agencies to utilize resources beyond risk assessments to inform decision making. Efforts to Consider Vulnerability. Inspection officials told us they have considered vulnerability as a factor for making risk-informed decisions, and have found that it is not useful when deciding which scenarios to test for two reasons. First, their covert testing is intended to identify the existence of vulnerabilities in the aviation security system. Second, officials explained that vulnerabilities at some airports are well-documented and understood; therefore, they would generally not use their limited resources to test a vulnerability that is well-known. Efforts to Consider Consequence. Inspection officials explained that when selecting among possible scenarios to test, considering the consequences that might result from a scenario is less important than the likelihood of a given threat. However, Inspection officials explained that they require that any scenario tested is one that would result in the loss of life if the attack were actually to occur. Although Inspection program officials could articulate the risk-informed approach used to select scenarios for testing, they had not sufficiently documented this approach. Specifically, we found that Inspection documents its process for making risk-informed selections of scenarios in formal work plans. This documentation includes general criteria that Inspection leadership is to consider when developing threat scenarios, one of which is threat likelihood. However, the work plans we reviewed did not identify selection criteria that address the vulnerability or consequence components of risk. DHS s Risk Management Fundamentals (2011) requires that agency documentation include transparent assumptions about the rationale behind risk management decisions. In addition, according to Standards for Internal Control in the Federal Government, agencies should document key decisions in a way that is complete and accurate. According to Inspection officials, they have not fully documented their risk-based process for selecting scenarios because their decision making is often informed by unforeseen events associated with the most exigent threats. Nevertheless, without documenting in its work plans how consequence and vulnerability are considered when determining which scenarios to test, current Inspection program managers may not be able to ensure that their scenario selection decisions are appropriately accounting for risk as called for by DHS and TSA guidance. Furthermore, although vulnerability and consequence are less important criteria for Inspection s current risk-informed selections, documentation of its approach toward each would serve as a baseline for how Inspection makes risk-informed decisions for selecting scenarios to test. This baseline could inform future program managers and agency leadership seeking to make changes. <2.2. Security Operations Redesigned Its Covert Tests to Address Prior Deficiencies but Has Not Fully Incorporated Known Risks or Documented How It Selects Scenarios to Test> <2.2.1. 
Security Operations Redesigned Its Covert Test Process> In 2016, Security Operations replaced its Aviation Screening Assessment Program with a new covert test program. Security Operations issued guidance for this new program that, among other things, established a parallel test process carried out by headquarters staff to validate (i.e., determine the quality of) local covert test results from airports. In conjunction with this process, Security Operations also developed and launched a new web-based tool to collect more detailed information on covert tests. According to Security Operations officials, the new program is intended to address problems with its covert testing process identified by an independent contractor in 2015. Specifically, the contractor performed the same covert tests that TSA personnel at local airports conducted, and the contractor's test results showed that screeners performed more poorly on its tests. In September 2016, we reported that, based on the results of the contractor's study, TSA had determined that prior-year tests conducted by TSA officials at airports likely showed a higher level of performance than was actually the case. Further, TSA attributed these higher detection rates, in part, to local airport difficulties in successfully maintaining the covert nature of their tests. To address deficiencies identified by the TSA-contracted study, Security Operations issued test guidance in December 2016 and January 2017 that provides more structure to the planning and execution of tests and is intended to help ensure the quality of test results, among other things. For example, the guidance directs local test coordinators to schedule covert tests at varying times of day and varying days of the month, to prevent TSOs from becoming accustomed to testing at particular times. Also, to help ensure that testers are not recognizable by TSOs, the guidance states that airports must not recruit testers from the airport in which the test is to be conducted. Additionally, Security Operations guidance expands opportunities for recruiting testers at airports. Security Operations' new covert test program also features a headquarters-based covert test effort, known as Headquarters Evaluation Team (HET) testing, to help validate the results of covert tests conducted by TSA officials at airports, known as Field Evaluation Team (FET) testing. Under the new process, FET teams, which are composed of TSA staff at airports and locally recruited testers, oversee testing at airports where FSDs are located and at any smaller airports under the FSD's authority. FET teams perform tests of three different screening paths (checkpoint in-property, checkpoint on-person, and checked baggage) using a variety of scenarios assigned by Security Operations program managers every 6 months. FET teams test scenarios a designated number of times over the 6-month period, after which program managers are to select and assign a new set of scenarios for testing for the next 6-month period. For its HET tests, Security Operations is to select, on a quarterly basis, three scenarios to test from among the current set of scenarios assigned for FET testing. HET teams are to travel to airports quarterly to conduct these tests and help validate the FET testing results.
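As a purely illustrative sketch of the testing cadence described above, the snippet below assigns a 6-month set of FET scenarios and draws a quarterly HET subset from it. The scenario names and the use of simple random sampling are assumptions for illustration; they are not Security Operations' actual assignment method.

```python
# Illustrative sketch of the HET/FET cadence: FET teams receive a set of scenarios
# for a 6-month period, and HET tests three of those scenarios each quarter so the
# two teams' detection rates can be compared. Names and the random-sampling choice
# are hypothetical.
import random

fet_scenarios = [
    "checkpoint_on_person_A", "checkpoint_on_person_B",
    "checkpoint_in_property_A", "checkpoint_in_property_B",
    "checked_baggage_A", "checked_baggage_B",
]

def het_quarterly_selection(current_fet_scenarios, k=3, seed=None):
    # HET tests a subset of the scenarios currently assigned for FET testing,
    # which is what allows the parallel comparison of results.
    rng = random.Random(seed)
    return rng.sample(current_fet_scenarios, k)

print(het_quarterly_selection(fet_scenarios, seed=1))
```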
Security Operations' validation process involves comparing detection rates (the percentage of tests in which TSA screening recognized and prohibited a threat item from entering the sterile area of an airport) for similar scenarios from both groups of testers. To assist HET and FET teams in collecting more detailed information from its new test program, in April 2016, Security Operations developed a web-based data collection instrument called the Task Process Factor (TPF) tool that TSA officials use to record more detailed information on covert tests. According to program officials, collecting more detailed information about test failures was part of the agency's effort to improve screener performance following the DHS Inspector General's 2015 covert test findings that identified vulnerabilities in TSA's checkpoint screening. The tool defines the key TSO activities for conducting checkpoint and checked baggage screening as tasks (e.g., interpret the X-ray image). The tool also identifies the various processes associated with a given task (e.g., move property into the X-ray scanner and stop when a full image appears). For any task in which a TSO fails, testers are to use the TPF tool to record the task and process associated with the failure so that Security Operations may identify points of failure for tests with greater specificity. Furthermore, for all test failures, the tool requires HET and FET testers to identify the factor, or root cause, for failure. <2.2.2. Security Operations Has Not Fully Incorporated or Documented a Risk-Informed Approach for Selecting Test Scenarios> Although Security Operations considers some TSA risk information when selecting airport locations to test, we found that Security Operations does not fully consider this information when determining which scenarios to use for its covert tests, and also does not document its rationale for choosing the scenarios it selects. According to its planning documents for conducting HET and FET tests, Security Operations conducts more tests at larger airports than smaller airports. According to TSA officials, this is because larger airports generally have more TSOs who are subject to covert testing. TSA's decision to allocate more testing resources to larger airports is based on its own risk analysis and, therefore, is consistent with a risk-informed approach. However, Security Operations has not taken steps to incorporate known risks, such as those documented in TSA's annual Transportation Sector Security Risk Assessment (TSA's primary risk assessment of threats for all transportation modes), into its process for selecting covert test scenarios. As our prior work has shown, implementing a risk-informed approach involves using risk assessments or other risk information to determine the most pressing security needs and developing strategies to address them. In reviewing TSA's 2016 Transportation Sector Security Risk Assessment (the version that would have informed Security Operations' selection of tests for fiscal year 2017), we identified numerous attack scenarios that could have been incorporated into Security Operations' selection of scenarios to test. Specifically, the 2016 risk assessment included 20 scenarios that involved attacks that could be carried out through expedited screening conducted in dedicated TSA Pre✓® screening lanes. We reviewed all scenarios Security Operations selected to test in fiscal year 2017, but found that only one involved a test of the TSA Pre✓® lane.
More generally, we also found that TSA s selection of threat items to test at the checkpoint in fiscal year 2017 did not reflect threats identified in TSA s 2016 Transportation Sector Security Risk Assessment. Security Operations officials acknowledged that they do not use formal TSA risk assessments to determine what threat scenarios or items to test. They also do not work with intelligence agencies or review classified information when developing covert test scenarios. Instead, Security Operations officials said they rely mainly on professional judgment regarding which areas of checkpoint and checked baggage procedures TSOs frequently overlook or may not perform correctly (e.g., pat downs). Officials explained that their judgment is informed by monitoring covert test results; unclassified media reports on threats; and requests from agency leadership, such as from TSA s Administrator. Security Operations program managers further explained that because their tests are intended to assess TSO performance of screening procedures and identify any gaps, their selection of scenarios for testing is intended to cover the breadth of checkpoint and checked baggage screening procedures. However, as previously discussed, using a risk-informed approach would allow program managers to balance other goals of testing, such as the need to test a variety of screening procedures, with risk information, when making decisions on what to test. DHS s Policy for Integrated Risk Management (2010) states that DHS components should use risk information and analysis to inform decision making. Additionally, the TSA Strategy 2018 2026 prioritizes structuring programs to manage risk and optimize resource allocation. Formal risk assessments such as the Transportation Sector Security Risk Assessment identify the most significant risks to checkpoint and checked baggage screening, and accordingly identify some of the most critical skills TSOs need to detect or prevent possible attack scenarios. Using a risk-informed approach to select scenarios that more fully account for known risks such as those identified in the Transportation Sector Security Risk Assessment or a similar risk assessment could better ensure that TSA is using its finite testing resources to target screening activities that will counter the most likely threats. Additionally, DHS s Risk Management Fundamentals (2011) requires that agency documentation include transparent assumptions about the rationale behind risk management decisions. However, Security Operations has not documented its rationales for selecting covert test scenarios in any of its overarching guidance or planning documentation. Such rationales would delineate Security Operations framework for determining what screening activities to test, and specify how Security Operations officials balance a risk-informed selection of scenarios with their need to test scenarios that cover the breadth of requirements within existing screening procedures. Security Operations officials said they do not document their scenario selection process because they review covert test data on a frequent enough basis to identify which processes have low detection rates and, thus, are in need of testing. However, documenting a risk-informed rationale for its selection of scenarios would better enable Security Operations or an external party to assess TSA s covert test programs and ensure that decisions are appropriately accounting for risk as called for by DHS and TSA guidance. 
It would also allow Security Operations to demonstrate how it balances its goal of promoting a risk-informed culture, as required by DHS, with program goals to ensure that TSOs are following all required screening procedures correctly. <3. Inspection s Updated Process Is Designed to Produce Quality Information, but Security Operations Faces Challenges with the Quality of Its Test Results> <3.1. Inspection s New Process is Designed to Produce Quality Test Results and Analysis> Inspection has established a new process and principles for conducting covert tests, as well as collecting and analyzing test data, intended to result in quality information on screening vulnerabilities. We reviewed two reports on results of Inspection s covert testing that were completed using its new processes, and found they resulted in quality information on screening vulnerabilities. With respect to its new processes Inspection has implemented guidance to ensure a standardized process for developing and executing tests. Specifically, Inspection guidance requires that headquarters staff with expertise in relevant fields (including physical security, explosives, and intelligence analysis) develop all threat items used for testing and conceal these items within test bags or on testers in the same manner across tests. In addition, Inspection program managers require that testers have detailed background stories to explain the purpose(s) of their travel. Inspection now employs multiple standard practices to ensure test covertness. We observed several of these practices during four Inspection tests conducted at one airport. These four tests consisted of two scenarios that were each tested at two different checkpoints within the airport. First, we observed that Inspection teams notified the FSD of their presence only immediately prior to beginning tests, to limit the potential for local airport staff to be forewarned. We also observed that Inspection conducted tests simultaneously across checkpoints, and concluded testing at the airport after an initial round of testing. According to Inspection program managers, conducting tests simultaneously and leaving after the initial round of testing are necessary because once TSOs at a tested checkpoint become aware of testing, there is no reliable way to prevent this knowledge from spreading to other checkpoints. Inspection now integrates its technical operations team (technical team) into all aspects of test design and data collection and analysis. Inspection officials recruited staff with expertise in research and test design, statistics, and systems engineering, among other relevant fields, to analyze this information. Inspection has integrated these staff into all aspects of its test process to ensure the quality of test information collected and analyses performed. For example, according to TSA documentation, Inspection technical team members are to oversee the selection of airports for testing by first conducting an analysis to determine the number of airports to be tested, and then ensuring the selection of airports for testing is made using a random process a requirement, given that Inspection intends to use test results to understand and describe screening activities at airports nationwide. Inspection now identifies data to be collected for each scenario and monitors this data as it is being collected for quality assurance. According to TSA documentation, Inspection s technical team develops the data collection forms used to record test information for every scenario. 
Such data elements are specific to each scenario and can include, for example, the time when the tester entered the checkpoint, whether the TSO running the X-ray machine stopped the belt to review the tester's bag, and the brand of X-ray machine. According to TSA documentation, the technical team is also to monitor incoming data from scenarios on a regular basis to address any problems as they arise. Inspection now uses guidance to ensure consistency in analysis and reporting. This includes requirements for reviewing all test data and applying rules about which data should be excluded. Inspection also developed guidance to specify the types of statistical analyses that may be used to draw conclusions about test results and how to report on the results to ensure that its analysis of test results is appropriate and transparent. For example, Inspection guidance identifies what technical information should be included in the report to help readers interpret Inspection's conclusions that are based on statistical analysis of results. We reviewed the two full reports that Inspection issued using this new guidance and found that Inspection generally followed the guidance for using statistical analysis and reporting final results in these reports. <3.2. Security Operations Faces Challenges with the Quality of Its Covert Test Information and Its Quality Assurance Process> <3.2.1. Security Operations Faces Challenges with the Quality of Airport Test Results> As previously discussed, the primary method by which Security Operations tries to ensure that quality covert test results are generated at airports is by having HET and FET testers conduct the same test scenarios at airports, and then comparing detection rates identified by the two teams. Security Operations program managers explained that this method presupposes that test results collected by HET and FET (following Security Operations' overarching guidance for conducting tests and using the same test scenarios) should produce similar detection rates at the national level. Security Operations program managers further explained that, because HET testers are unaffiliated with the airports they test, they can more easily maintain test covertness. According to program managers, this aspect of HET testing, along with additional training HET testers receive in conducting covert tests, gives them greater assurance that HET tests accurately reflect screener performance at airports. Therefore, program managers generally consider large disparities between HET and FET detection rates to indicate problems with the quality of local airport covert test results. According to our analysis of Security Operations' national covert test data for fiscal years 2017 and 2018, checked baggage tests consistently met Security Operations' criterion for quality test results, but checkpoint tests did not. In fiscal year 2018, TSA included a new criterion for quality test results in Regional Director and FSD annual performance evaluations. The criterion requires that HET and FET covert test detection rates at airports under their supervision be within a designated percentage point difference for the three types of tests (checkpoint in-property, checkpoint on-person, and checked baggage). According to our analysis of Security Operations' national covert test data for fiscal year 2017 and the first half of fiscal year 2018, checked baggage tests consistently met the criterion for quality test results; however, checkpoint on-person and in-property tests did not.
Specifically, we calculated HET and FET detection rates for the three kinds of Security Operations tests (checkpoint on-person, checkpoint in-property, and checked baggage tests) for three 6-month periods from fiscal year 2017 through the first half of fiscal year 2018. We found that, for each 6-month period, HET detection rates for checkpoint tests were lower than FET detection rates, and the differences exceeded TSA s established criterion for quality test information. Security Operations officials acknowledged the differences between HET and FET rates, but noted that the differences generally decreased from the last 6-month cycle of testing for fiscal year 2017 through the first 6-month cycle of 2018, and program managers are working to address them further. Nevertheless, our analysis showed that for the first half of fiscal year 2018 (the most recent cycle s data available for our analysis) differences between HET and FET test detection rates for checkpoint on-person and checkpoint in-property remained greater than Security Operations criterion for quality test information. In our observations of FET tests, we identified practices in local airport testing that impact the covertness of tests, and thus may contribute to differences between HET and FET detection rates. First, in our observations of local airport FET tests in which TSOs correctly identified the threat items, at one airport the TSA airport official in charge of FET testing was present at the checkpoint, and his presence may have provided advance notice to the TSOs that testing was in progress. Further, we learned from airport testing officials that having the FET test coordinator present at the checkpoint was a routine practice when testing was in progress. At another airport visit, one TSO told us that TSOs often know a FET test is in progress because TSA airport officials use the same test bag to conceal threat items across all tests performed at the airport. According to TSA documentation, potential lapses in the covertness of covert tests, similar to those we observed and were told about, can make TSOs aware that they are being tested and lead to results on tests that overstate actual TSO performance. In addition, we found that the level of potential variability in how TSA airport officials build threat items and test bags for FET tests may affect the quality of the test results used for comparison purposes. Security Operations requires that FET personnel build the threat items, such as explosive devices, that are used for scenarios according to specifications included within TSA headquarters-disseminated scenarios. These scenarios provide a description of the test scenario, a list of materials needed for the threat item, assembly instructions, and directions on how to conceal the threat item within checked or carry-on baggage. TSA provides standard kits to local airports that contain some of the materials FET teams need to build threat items (e.g., an explosive simulant), but TSA staff at the airport must independently procure a number of items needed for each scenario. Given that approximately 80 different teams of FET testers use non-standardized items to build and conceal threat items for tests, the test bags used by teams of FET testers vary to a certain extent across test programs nationwide. According to TSA officials, variations in the construction of test bags (including the simulated explosive devices and test bag assembly) can affect how easy or difficult it is to detect a threat item. 
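To illustrate the comparison described above, the following minimal sketch shows how HET and FET detection rates could be checked against a percentage-point criterion for each test type and 6-month period. It is a hypothetical illustration only: the field names, the example rates, and the 5-point threshold are placeholders we supplied for readability, not TSA's actual (non-public) criterion, data, or systems.

# Minimal sketch: flag test types where the gap between HET and FET
# detection rates exceeds a quality criterion. All names and numbers
# below are illustrative placeholders, not TSA data.

CRITERION_POINTS = 5.0  # hypothetical threshold, in percentage points

# Illustrative detection rates (percent) by 6-month period and test type.
rates = {
    ("FY17, first half", "checkpoint on-person"): {"HET": 60.0, "FET": 72.0},
    ("FY17, first half", "checkpoint in-property"): {"HET": 55.0, "FET": 68.0},
    ("FY17, first half", "checked baggage"): {"HET": 85.0, "FET": 87.0},
}

def exceeds_criterion(het_rate, fet_rate, criterion=CRITERION_POINTS):
    # The gap is measured as the absolute difference between the two rates.
    return abs(fet_rate - het_rate) > criterion

for (period, test_type), r in sorted(rates.items()):
    gap = r["FET"] - r["HET"]
    status = "exceeds criterion" if exceeds_criterion(r["HET"], r["FET"]) else "within criterion"
    print(f"{period} | {test_type}: gap = {gap:+.1f} points ({status})")

In this framing, any test type whose gap exceeds the criterion would be flagged for the kind of quality review discussed below.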
The program manager for the HET-FET testing program agreed there is a need for greater assurance of the quality of covert test results, but stated that Security Operations has not taken action on this issue due to resource constraints. However, quality assurance is critical to ensure that the resources TSA has invested in covert testing will yield valid and usable information. Moreover, given its resource constraints, Security Operations' actions to improve local airport test results could encompass less resource-intensive undertakings, such as providing more standardized items for FET tests or improving guidance to address issues that impact the covertness and consistency of tests. Standards for Internal Control in the Federal Government states that management should use quality information to achieve an entity's objectives, and that reliable internal sources should provide data that are reasonably free from error and bias and faithfully represent what they purport to represent. By assessing its current FET testing processes, including factors that may compromise the covertness and consistency of tests, Security Operations could identify opportunities to improve the quality of its testing. Further, making changes to its testing process based on its assessment of the current FET testing process could help improve the quality of test results. This, in turn, would better position those who use these results (including agency leadership and TSA airport officials) to reliably identify and address vulnerabilities based on TSO performance. In addition, we found that the issues we identified with the quality of FET test results also affect Security Operations' reporting to external stakeholders. As previously discussed, officials internal and external to TSA use Security Operations' test results to assess the effectiveness of TSO performance. Currently, Security Operations reports quarterly FET detection rates as a performance measure to the Office of Management and Budget. The measure identifies the percent of time that TSOs correctly detect threat items at the checkpoint (concealed in carry-on baggage and on the passenger's body) and within checked baggage. However, as previously discussed, we found that airport testers were not generating quality covert test information on checkpoint screening because their FET detection rates were higher than the HET rates used for comparison, and the difference between the rates exceeded the criterion TSA established for quality covert test information. TSA management officials acknowledged that the agency needs to use more reliable covert test results for measures reported to the Office of Management and Budget. In October 2018, TSA notified the Office of Management and Budget that it is in the process of assessing the quality of covert test results it uses to report on TSO performance, and expects to develop new measures by fiscal year 2020. <3.2.2. Security Operations Testers Face Challenges Identifying the Root Cause of Some Test Failures> In addition to issues with the overall quality of airport test results, we found that Security Operations faced challenges with the quality of information it collected on the root cause of test failures. For each test failure, HET and FET testers are to use the TPF tool to identify and record the factor, or root cause, leading to a covert test failure.
The TPF tool groups test failure factors into three main categories (1) failures characterized by the screener s lack of knowing what is required to effectively accomplish a task or job (a knowledge deficiency); (2) failures caused by incorrectly performing a procedure (a skill deficiency); or (3) failures due to the TSO not assigning the correct level of importance to performing a specific screening procedure (a value deficiency). Although Security Operations has provided some guidance on when to apply a particular factor as a root cause for a covert test failure, this guidance may not be adequate and some testers may not be selecting factors appropriately as a root cause. In our analysis of the factors assigned by both Security Operations HET and FET testers for all covert test failures in fiscal year 2017, we found that testers assigned one factor more than the other two. To assist HET and FET testers in conducting root cause analyses for test failures, Security Operations provides definitions of the three root causes (knowledge, skills, and value). It also requires that all testers (HET or FET) complete three online exercises for using the TPF tool to record results, but the exercises do not provide additional guidance on how to appropriately select root causes. In addition, Security Operations provides in-person training to all HET testers that includes a practice case on selecting from among the factors, and the training course material indicates that the process can be subjective. In our observation of HET tests, we observed numerous failures in which HET testers had to assign a root cause. In a majority of these failures, the tester attributed the same factor as the root cause. HET testers who completed the root cause analyses for these failures all told us they assigned this particular factor by default, once they ruled out the other two causes. Our observations were consistent with a 2017 independent evaluation of the TPF tool performed by the DHS Science and Technology Directorate. Among other things, subject matter experts conducting the 2017 evaluation found that testers they spoke with were not clear on the meaning of the three root causes, and the evaluation recommended that Security Operations provide better guidance to testers on how to select the root cause of a test failure. Security Operations program managers concurred with the DHS Science and Technology Directorate s recommendation that testers need better guidance on how to select among the factors as the root cause for test failures. They also stated they are working on guidance to assist testers in selecting the appropriate root cause for failures. However, in September 2018, program managers told us they had suspended these efforts to address the recommendation as a result of TSA efforts to transfer program operations to Inspection and in anticipation of broader changes to the Security Operations testing program. Inspection officials, who will assume responsibility for HET and FET testing once the transfer of the program to Inspection is complete, stated that they were unsure what changes they would make to Security Operations legacy testing process with respect to HET and FET tests at local airports, but stated both types of testing will continue to use their respective legacy testing processes in fiscal year 2019 until final decisions are made. 
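For illustration, the factor-distribution analysis described above can be approximated with the minimal sketch below, which tallies how often each root-cause category is assigned across a set of failure records. The record layout and values are hypothetical placeholders and do not reflect the TPF tool's actual schema or any actual test results.

from collections import Counter

# Illustrative failure records; "factor" holds one of the three root-cause
# categories used in the TPF tool. All values are invented placeholders.
failures = [
    {"airport": "Airport A", "process": "X-ray review", "factor": "skill"},
    {"airport": "Airport A", "process": "pat-down", "factor": "value"},
    {"airport": "Airport B", "process": "X-ray review", "factor": "skill"},
    {"airport": "Airport C", "process": "bag search", "factor": "knowledge"},
    {"airport": "Airport C", "process": "X-ray review", "factor": "skill"},
]

counts = Counter(record["factor"] for record in failures)
total = sum(counts.values())

for factor in ("knowledge", "skill", "value"):
    share = 100.0 * counts.get(factor, 0) / total
    print(f"{factor:9s}: {counts.get(factor, 0)} failures ({share:.0f}% of all failures)")

# A share that is heavily skewed toward one factor can signal that testers
# are assigning it by default rather than through a considered judgment.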
Standards for Internal Control in the Federal Government states that management should use quality information to achieve an entity s objectives, and that reliable internal sources should provide data that are reasonably free from error and bias and faithfully represent what they purport to represent. As long as Security Operations legacy testing process is in use, testers will continue to inconsistently and potentially incorrectly identify the root cause for test failures, and in doing so, will diminish the usefulness of root cause information for addressing TSO performance problems. Reviewing existing guidance and training and providing, where appropriate, additional clarification on applying the factors as a root cause would allow TSA to collect more reliable information on the factors leading to test failures. This, in turn, would better position those who use this information (including agency leadership and TSA airport officials) to address root causes of screener failures at individual airports and across the entire system. <3.2.3. Security Operations Has Not Documented Its Methodology for HET Testing> Security Operations has not fully documented its methodology for using HET testing as a quality assurance process for FET test results. While Security Operations has documented some aspects of the HET test process, such as training for HET testers on how to conduct tests and post-test reviews with TSOs, we found that Security Operations has not documented its methodology for using HET tests to ensure the quality of FET test results in either its program guidance or other internal documentation. For example, Security Operations has no documentation on how program managers should select airports (e.g., by airport category) and scenarios for HET testing, as well as how they should analyze, compare, and report on HET test results against FET test results. Security Operations officials described some aspects of how they calculate HET and FET test detection rates for comparison purposes, but they did not have a documented methodology for this quality assurance process. For example, Security Operations officials said that they only use data from the largest airports that receive both HET and FET tests (approximately 120 of the about 440 commercial airports) for comparison purposes. Security Operations officials also explained they exclude all HET and FET tests involving enhanced screening from the rates used for comparison purposes because enhanced screening involves a more detailed inspection of the subject that tends to result in the screeners identifying threat items at a higher rate. In addition to these explanations, program managers provided a document explaining Security Operations rationale for selecting each of the HET test scenarios used for the last half of fiscal year 2017. While these explanations and the accompanying documentation helped clarify aspects of Security Operations process, Security Operations has not developed a policy that provides a comprehensive description (and therefore understanding) of the quality assurance process that its program managers are to use for program planning purposes. Such a policy would describe Security Operations approach to selecting HET test scenarios used for ongoing covert testing, how it calculates and compares test results, and how it reports and uses the results. 
Security Operations program managers agreed that more transparent information regarding the use of HET test results to assess FET test results would be beneficial, but, given that the program was established in late 2016, they acknowledged that they have not had time to document this process. Standards for Internal Control in the Federal Government states that all transactions and other significant events need to be clearly documented, and this documentation should be readily available for examination. The documentation should appear in management directives, administrative policies, or operating manuals. By fully describing its methodology for comparing the results of HET testing with FET test results as a quality assurance process within its program guidance, Security Operations can better ensure that all aspects of this process are clear and available for assessment and validation by third party users of HET and FET test information, such as TSA senior leadership officials. Doing so can also ensure that future program managers for the HET-FET test program can continue to use this quality assurance method appropriately by following the guidance. <4. TSA Uses Covert Test Results to Help Address Vulnerabilities, but Has Made Limited Efforts to Implement Mitigation Activities, Analyze Test Results, and Disseminate Beneficial Practices> <4.1. Inspection s Test Results Inform an Agency-Wide Process Intended to Mitigate Vulnerabilities, but This Process Has Not Yet Resolved Any Identified Vulnerabilities> Inspection submits its covert test findings that it determines to be security vulnerabilities to TSA s Security Vulnerability Management Process. TSA established this agency-wide process in 2015 to review and address any systemic vulnerability facing TSA (including those related to checkpoint and checked baggage screening). However, it is unclear if vulnerabilities reviewed through this process are being addressed in a timely manner because the process lacks clear timeframes and milestones for mitigation steps, as well as an established method for monitoring the achievement of such timeframes and milestones. In 2015, before establishing the Security Vulnerability Management Process, TSA conducted a review of then-existing processes for evaluating and managing identified vulnerabilities, and found that they were not centralized and did not ensure the level of visibility and accountability needed to adequately mitigate and resolve (or close) the vulnerabilities. Consequently, TSA determined that its processes for tracking and managing the closure of identified security vulnerabilities represented an organizational deficiency that should be addressed. In addition, Inspection officials stated that, under the prior processes, they lacked complete knowledge of all agency resources that could be leveraged to develop mitigation strategies, as well as the necessary authority to compel offices to share these resources, which made it difficult to ensure identified vulnerabilities were addressed. As a result, TSA created the Security Vulnerability Management Process to better ensure the cooperation of various program offices within TSA that had the expertise needed to address vulnerabilities identified by Inspection or other offices within TSA. This process is intended to centralize agency efforts to mitigate vulnerabilities by ensuring that they receive agency- wide visibility and are evaluated, resourced, and managed by appropriate TSA program offices until fully addressed. 
TSA's Strategy, Policy Coordination, and Innovation office is responsible for managing and overseeing the Security Vulnerability Management Process, as well as enforcing deadlines for vulnerability mitigation. The Strategy, Policy Coordination, and Innovation office submits vulnerabilities for review by one of two groups of TSA stakeholders: the Executive Risk Steering Committee or the Risk Assessment Integrated Project Team. These two groups are responsible for identifying all TSA program offices affected by the vulnerability in question and working with those program offices to determine whether and how vulnerabilities can be mitigated and formally closed (see fig. 2). According to TSA Strategy, Policy Coordination, and Innovation office officials, to close a given vulnerability, one of the two groups will assess whether the risk posed by the vulnerability aligns with the identified amount of risk that TSA is willing to accept. TSA officials told us that the agency is risk averse to any vulnerability that could cause catastrophic consequences, such as the loss of an airplane. The Strategy, Policy Coordination, and Innovation office has responsibility for enforcing deadlines for mitigating identified vulnerabilities, but our review of TSA documentation found that the office does not establish timeframes and milestones to ensure measured progress toward mitigation of those vulnerabilities. Moreover, we found that although the Security Vulnerability Management Process charter establishes a broad framework for developing and implementing mitigation strategies, it does not establish a method for how the Strategy, Policy Coordination, and Innovation office is to monitor mitigation activities to ensure that TSA program offices are meeting identified timeframes and milestones, such as by identifying a person or entity responsible for escalating cases when these requirements are not being met. Specifically, we found that Inspection has submitted nine vulnerabilities for consideration and that, as of September 2018, only one of them had been formally closed as a result of mitigation steps taken via the vulnerability management process. Under the process, a vulnerability owner has responsibility for developing and leading mitigation efforts for a specific vulnerability. TSA closed the one vulnerability 2 years after submission to this process because the relevant program office made policy changes that addressed Inspection's interim findings. The remaining eight vulnerabilities have been in progress from 4 months to 2.5 years. Of these eight vulnerabilities, five have had TSA offices assigned as vulnerability owners, and three of these five have mitigation efforts in progress. The three remaining open vulnerabilities that did not yet have vulnerability owners assigned at the time of our review had been waiting for vulnerability owners for a period of 4, 5, and 7 months, respectively; however, TSA officials told us that these three open vulnerabilities had owners assigned in September 2018. TSA officials told us that timeframes for vulnerability mitigation can vary due to the number of stakeholders required to address the situation. They also explained that the complexity of certain threats affects the timeliness of final mitigation solutions (e.g., those requiring technology solutions can involve multiple TSA offices), and that before such solutions are developed, Inspection works with program offices to help them develop interim mitigation procedures.
Additionally, they cited factors beyond TSA s control that can delay mitigation efforts, such as changes to agency leadership or in staff within a particular office. For example, mitigation has been delayed for one of the vulnerabilities under review for over 2 years, due to changes in agency leadership in 2016, among other things. In another example, TSA officials told us that mitigation for a vulnerability under review had been delayed for over two years due to personnel changes within the office tasked with developing and leading mitigation efforts. Inspection officials told us that while officials are working on mitigation solutions for identified vulnerabilities, Inspection will assist TSA program offices with implementing interim mitigation procedures before formal mitigation plans are developed. For example, Inspection officials stated that they worked with Security Operations to provide interim guidance to TSA airport officials to address an identified vulnerability that involved Transportation Security Specialists for Explosives using screening equipment incorrectly to clear passengers through the checkpoint. Although TSA has implemented interim mitigation steps for some vulnerabilities while its program offices develop long-term solutions, in some cases Inspection s findings represent system-wide vulnerabilities to commercial aviation that could result in potentially serious consequences for TSA and the traveling public. For this reason, it is important that TSA make timely progress on formal mitigation solutions. Moreover, tracking progress for a given vulnerability against timeframes and milestones would not necessarily preclude TSA program managers from accounting for complex mitigation efforts. Program managers could, for example, establish longer timeframes at a mitigation effort s onset and adjust these as needed, should challenges arise. The Standard for Program Management states that the governance of programs includes establishing minimum acceptable criteria for success and the standards by which they are measured and communicated to achieve desired outcomes. Additionally, programs should include the concept of time and incorporate schedules through which specific milestone achievements are measured to ensure that appropriate progress is made toward achieving a defined set of outcomes. In TSA s case, this would mean the mitigation of identified vulnerabilities. The Standard for Program Management further states that program governance plans are to describe the systems and methods to be used to monitor a given program, and the responsibilities of specific roles for ensuring the timely and effective use of those systems and methods. TSA officials agreed that their vulnerability management process lacks a clear set of deadlines for the timely completion of mitigation steps, as well as a method for monitoring completion of these steps to ensure vulnerabilities are closed. By establishing timeframes and milestones for vulnerability mitigation, TSA would better ensure that progress toward addressing vulnerabilities continues, despite internal challenges, such as personnel changes, or external factors. In addition, by establishing the methods by which TSA s Strategy, Policy Coordination, and Innovation office will monitor milestones for completion, and the steps it will take when mitigation is not progressing as planned, TSA will be better positioned to ensure that the agency is making measured progress toward addressing the vulnerabilities managed through this process. <4.2. 
Security Operations Uses Test Data for Feedback and Reporting to Airports and Others, but Does Not Analyze National Data to Identify Potential Vulnerabilities in Screener Performance> <4.2.1. Security Operations Monitors Covert Test Data to Identify Potential Vulnerabilities> Security Operations program managers said that they continuously monitor covert test results to identify potential vulnerabilities and to assess progress at airports in addressing vulnerabilities identified through covert tests. Security Operations primarily monitors TSO performance by reviewing information within its TPF tool. Specifically, program officials said that they monitor the database each month to identify gaps between HET and FET detection rates at an individual airport and regional level. Security Operations officials said that they will alert TSA officials at airports if they detect anomalies or large disparities between their HET and FET test rates, and suggest strategies for conducting tests. While reviewing the data, Security Operations officials told us they may also identify specific test scenarios that TSOs are experiencing difficulties with, and sometimes develop strategies to improve performance. For example, officials said that when TSOs demonstrated difficulty with a scenario involving colorimetric testing, Security Operations developed a pamphlet for TSOs to clarify those procedures. Security Operations monitoring has also resulted in changes to processes and procedures. For example, according to TSA documentation, in early 2016 Security Operations officials conducted an ad hoc analysis of relevant covert test data. This analysis led to the implementation of Enhanced Accessible Property Screening procedures for personal property screened at airport checkpoints. According to TSA documentation, these new procedures are intended to help TSA officers obtain a clearer X-ray image to enhance screening effectiveness. Among other things, they involve advising passengers to remove organic materials from carry-on bags for X-ray screening, requiring that electronics larger than a cell phone be removed from carry-on bags and placed in bins for X-ray screening, and more targeted property search protocols. In addition to periodic monitoring of test data within the TPF tool s database, Security Operations officials also told us they monitor Threat Detection Improvement Plans, which are based on recommended actions stemming from each airport s covert testing results. TSA officials told us that these plans can include test-specific action plans and high-level improvement strategies. Security Operations now monitors airport progress against these plans in order to ensure that airports are taking the necessary actions to improve TSO performance deficiencies identified in covert testing. <4.2.2. Security Operations Uses Test Data to Provide Feedback and Reporting to Airports and Other Stakeholders> Security Operations officials told us they use covert test results as the basis for feedback and periodic reporting on TSO performance and the quality of covert test programs or results to headquarters, regional, and local TSA officials and other stakeholders. According to Security Operations officials, this feedback and reporting includes the following. HET reports and feedback: Security Operations directly communicates with TSA officials at airports on HET test performance. 
For example, in our observations of HET tests at airports, testers conducted an equal number of post-test reviews, during which they reviewed with TSOs and their supervisors the intent and results of the HET tests, reinforced actions resulting in test successes, and reviewed the correct procedures for any failures. In addition to post- test reviews, at the conclusion of each HET test at an airport, Security Operations program managers provide TSA management at the airport a report compiling the results of the recent HET test and statistics on the quality of the covert test program at the airport. According to TSA documentation, these reports include a comparison of local FET test results against the results of HET tests that were conducted during that visit. TPF Report: On a monthly basis, according to TSA documentation, Security Operations also provides a classified spreadsheet report to FSDs that contains a high-level analysis of HET and FET covert test data collected for the fiscal year to date, as well as a copy of the most current test results in the TPF tool s database. Security Operations program managers stated that allowing airports access to the entire database allows FSDs to compare their airport s performance against counterparts in other regions and address any areas in which they are lagging. In our interviews with FSDs, we found that officials from all of the airports we spoke with used the TPF data to help manage TSOs. For example five FSDs told us they download the raw test data into local systems for use in their local processes for monitoring TSO performance. Classified monthly conference calls: According to TSA officials, Security Operations hosts monthly classified conference calls with local and regional TSA officials to discuss issues related to covert testing. Security Operations officials told us these discussions typically include the results of specific covert test rounds, methods for using covert tests results, and FSDs beneficial practices for carrying out covert testing at their airports. Reporting to senior leadership and other stakeholders: Security Operations officials said they continue to use covert test results for monthly briefings to FSDs and TSA senior leadership. According to TSA documentation, these briefings include high-level analysis of regional covert test performance, as well as overall comparisons of detection rates for on-person, in-property, and checked baggage tests against the national averages. As previously discussed, TSA also uses FET test results as the basis of a performance measure reported quarterly to the Office of Management and Budget. FSDs we spoke with told us they find the feedback and reporting they receive from Security Operations program managers to be helpful. In particular, all 10 FSDs we spoke with told us they find both the HET test reports and accessibility to TPF data in the monthly spreadsheet report to be beneficial and useful. FSDs also noted that the HET reports help inform their assessments on individual and airport workforce performance and efforts to improve their airport s screening operations overall. <4.2.3. 
Security Operations Does Not Conduct and Share a Comprehensive Analysis of National Covert Test Data to Identify Potential Vulnerabilities> While Security Operations program officials perform some high-level analysis of TPF data for periodic reporting, they do not analyze all Security Operations-collected covert test data to identify potential national trends in screener performance that could constitute system-wide vulnerabilities. For example, according to officials and TSA documentation, Security Operations officials use FET and HET covert test data to describe broad trends in screening performance in monthly briefings to TSA management. However, the briefings do not include a breakdown of the different screening tasks and processes that may be most often associated with TSO failures nationally. In addition, although the TPF tool's database contains information on the task, process, and factors associated with each TSO test failure, Security Operations does not typically include a comprehensive analysis of this information within the monthly covert test reports it provides to TSA leadership at airports. For example, based on our review of Security Operations' monthly TPF reports, these reports identify which processes have resulted in the most failures, but they do not identify which factors (knowledge, skill, or value) were the root cause of these failures. Moreover, none of this reporting reflects a broader analysis to identify whether failures or causes were associated with a certain size of airport or reflected across one or more regions. Standards for Internal Control in the Federal Government states that an agency should design its information systems to respond to the entity's objectives and risks. Furthermore, agencies may use information from these systems to evaluate the agency's performance in achieving key objectives. As discussed previously, Security Operations officials have performed similar types of analysis in the past with positive results. For example, when TSA developed the Enhanced Accessible Property Screening procedures in 2017, these actions were based (in part) on ad hoc analysis Security Operations conducted with national covert test data. At the time, Security Operations' analysis showed that X-ray operators at checkpoints had problems determining the threat nature of certain categories of objects. This led to repeated failures in detection, given the time and cognitive load requirements for interpreting those types of X-ray images. In response, TSA created or adjusted specific procedures based on the analysis of root causes of testing failures and the results of piloting new screening procedures at multiple sites to ensure effectiveness and efficiency could be sustained. Security Operations officials agreed that conducting a more comprehensive, national-level analysis, and utilizing more of the covert test data currently within the TPF tool's database, would be useful in identifying system-wide vulnerabilities that could inform efforts to improve TSO performance. Security Operations officials told us that, at present, they do not have a standard process to comprehensively analyze and report trends in TPF data across all airports. This is because the intent of the current program has been to make test data available to TSA airport and regional officials so they can identify factors affecting screener performance and take actions to remediate and improve any deficiencies.
In addition, Security Operations officials cited a lack of resources available to dedicate to this activity, given that headquarters officials have been more focused on revising and improving their current covert test program. However, Security Operations TPF tool and database has enabled it to document and communicate detailed information on TSO performance, such as the different screening tasks (e.g., advanced imaging technology operation) and processes (e.g., resolving advanced imaging technology anomalies) where screeners encounter difficulties. Given the breadth of testing conducted and information collected, more comprehensive analysis of TPF data could help TSA identify and communicate important potential trends in the vulnerabilities that TSOs face across all airports. A comprehensive analysis of TSO performance at the national level beyond calculation of overall detection rates would provide Security Operations greater knowledge about the reasons for, and factors associated with, system-wide vulnerabilities due to TSO performance of checkpoint and checked baggage screening, which would better position TSA to address these security gaps. For example, having this information could allow Security Operations to provide more focused training and testing for these functions at the airport level. The information could also position TSA to allocate resources for high-priority issues across all airports. <4.3. TSA Airport Officials Have Developed Beneficial Practices for Conducting Covert Tests and Using Test Data, but Security Operations Does Not Systematically Document and Disseminate This Information> TSA officials at individual airports reported using different tools, techniques, and processes for conducting covert tests and using test data, but Security Operations does not document and disseminate this information. In our discussions with 10 FSDs and their management teams, officials identified a variety of tools, processes, and methods that were developed based on their experiences with covert tests and the resulting actions they took to utilize test data to improve TSO performance. Specifically, 5 of the 10 FSDs we spoke with said their teams developed some type of customized internal databases to aggregate all of their airports covert test results, other performance- related data, and any additional Inspection information. FSDs and their staff said such a tool helped present a holistic picture of TSO performance for training and development purposes. Likewise, 5 of the 10 FSDs we spoke with said that they use test results to develop TSO performance baselines and training plans with requirements that exceed TSA s minimum standards for remediation. Additionally, 5 of 10 FSDs stated that they now include supervisory TSOs and/or TSA leadership officials at airports in remediation discussions with individual TSOs after covert tests take place to provide leadership officials with experience on how best to coach and develop staff. TSA officials we spoke with at airports and at the regional level said that individual airports are often a source for innovation with respect to executing covert tests and using test results, which has at times led to pilot efforts that were adopted at other airports either regionally or nationally. For example, officials from one TSA region told us that they were the first to develop and use performance scorecards (which incorporate covert test results) as an additional tool for improving screener performance. 
These scorecards were eventually adopted nationwide. Most of the FSDs we spoke with said they communicate with their counterparts at other airports to discuss covert test practices and beneficial methods for using test results at their respective airports. For example, officials from one airport we spoke with reported traveling to an airport in a different region to learn more about the team s TSO remediation process, which involved using the results of covert testing, Threat Image Projections, and other assessments to create tailored corrective action plans for TSOs. The officials said that this process was an improvement from the one they used previously because it incorporated a greater variety of remediation actions, such as training courses or shadowing opportunities. As discussed previously, Security Operations officials communicate with TSA officials at airports on their covert test programs during a monthly classified call with all FSDs and their teams. This allows Security Operations program managers to provide FSDs with an update on results from recent HET and FET tests, among other things. Security Operations program managers stated that during these calls, they encourage TSA officials not only to discuss particular issues or challenges they have faced with respect to covert testing at their airports, but also to highlight beneficial practices for conducting tests and using test results to improve TSO performance that they and their teams have self-identified and implemented. Therefore, these calls also serve as a forum for FSDs to discuss successful techniques for running covert tests and using test results. In our discussions with 10 FSDs, 8 out of 10 told us they have independently adopted beneficial practices used by other airports. Security Operations program managers are privy to beneficial practices discussed during their teleconferences with local and regional TSA officials, but they told us that they do not regularly document or disseminate this information to TSA officials at airports. Security Operations program managers explained that the call itself is adequate for TSA airport officials to share information, and that local or regional officials can follow up with one another if they want to discuss them further. However, while a monthly conference call may be helpful for informal sharing of practices, it does not capture the breadth of methods or practices used by some TSA airport officials. Moreover, according to headquarters officials, while conference calls provide an opportunity for FSDs to discuss beneficial practices, sharing is ad hoc and the level of detail provided about methods and practices can vary. Systematically documenting and disseminating these practices would provide TSA officials at airports more accurate and complete information about beneficial practices in use at airports nationwide, so that they could be more readily implemented at other airports. The National Infrastructure Protection Plan states that in order to ensure that situational awareness capabilities keep pace with a dynamic and evolving risk environment, officials should improve practices for sharing information and applying the knowledge gained through changes in policy, process, and culture based on shared understanding of efforts to improve security and resilience. This plan also states that documenting and building upon beneficial practices is a key part of information sharing within a critical infrastructure risk management framework. 
Our interviews with FSDs revealed an array of tools, techniques, and processes for covert testing that TSA officials at airports developed to address local and regional needs. A process to systematically document and disseminate more accurate and complete information on these tools, techniques, and processes that captures the breadth of methods or practices used by some TSA airport officials could help TSA conduct better covert tests and more successfully use test results to improve TSO performance, as well as inform revisions to TSA s national covert test program. <5. Conclusions> Given the persistent threats to the aviation system, TSA must ensure that its covert testing program operates as effectively as possible to identify and address potential vulnerabilities in the checkpoint and checked baggage screening systems across the nation s airports. TSA has strengthened the quality and rigor of its covert test programs since 2016, but additional steps are needed to better ensure that TSA targets the areas of highest risk in selecting attack scenarios for testing. Without using a risk-informed approach to selecting screening activities to test, TSA cannot ensure that it is targeting those aspects of TSA screening that pose the greatest known risks. In addition, without documenting its rationales behind how and why certain scenarios are selected for covert testing, TSA cannot demonstrate how its selections reflect identified risks in the aviation environment. New processes for covert testing implemented by Security Operations and Inspection have identified important vulnerabilities in checkpoint and checked baggage screening for fiscal years 2016 and 2017. However, these results can only be useful if they meet internal standards for quality test results. While Inspection s new process generally produced quality test results on screening vulnerabilities, Security Operations continues to face challenges with the quality of test results collected by TSA staff at local airports. Without taking steps to ensure that Security Operations collects more valid and usable information on vulnerabilities, including the root cause of test failures, TSA will not be positioned to reliably identify and address important security vulnerabilities. In addition, without documenting its methodology for comparing the results of covert tests, TSA cannot ensure that its quality assurance process is consistently applied and transparent. Once vulnerabilities have been identified through covert testing, it is paramount that they are effectively and efficiently mitigated or addressed. Establishing the Security Vulnerability Management Process was a good step toward better tracking the vulnerabilities identified through covert tests and deploying resources to mitigate them, but key identified vulnerabilities have been stalled in the process and none have been closed using this process. This has largely been caused by the absence of timeframes and milestones for achieving mitigation and monitoring key activities in the process. Unless TSA incorporates these aspects into its vulnerability management guidance, it cannot ensure that it is effectively addressing security vulnerabilities that could result in potentially serious consequences for the traveling public. 
Additionally, while TSA shares some covert test information with TSA officials at airports, more comprehensive analysis of covert test information is needed to enhance TSA s knowledge about the reasons for, and the factors associated with, TSO performance vulnerabilities that exist system-wide. Furthermore, although TSA officials at individual airports informally share information about beneficial practices they use to conduct covert tests and how they use test information, without systematically documenting and disseminating these practices, TSA cannot ensure that airport officials are fully informed about the different tools, techniques, and processes used by their colleagues. <6. Recommendations for Executive Action> We are making the following nine recommendations to TSA: The Administrator of TSA should document its rationale for key decisions related to its risk-informed approach for selecting covert test scenarios, for both the Security Operations and the Inspection s testing process. (Recommendation 1) The Administrator of TSA should incorporate a more risk-informed approach into Security Operations process for selecting the covert test scenarios that are used for tests conducted by TSA officials at airports. (Recommendation 2) The Administrator of TSA should assess the current covert testing process used by TSA officials at airports including factors that may affect the covertness and consistency of the tests to identify opportunities to improve the quality of test data, and make changes as appropriate. (Recommendation 3) The Administrator of TSA should assess Security Operations guidance for applying root causes for test failures, and identify opportunities to clarify how they should be applied. (Recommendation 4) The Administrator of TSA should document the methodology for using the results of covert testing conducted by headquarters staff as a quality assurance process for covert testing conducted by TSA officials at airports. (Recommendation 5) The Administrator of TSA should establish timeframes and milestones for key steps in its Security Vulnerability Management Process that are appropriate for the level of effort required to mitigate identified vulnerabilities. (Recommendation 6) The Administrator of TSA should revise existing guidance for the Security Vulnerability Management Process to establish procedures for monitoring vulnerability owners progress against timeframes and milestones for vulnerability mitigation, including a defined process for escalating cases when milestones are not met. (Recommendation 7) The Administrator of TSA should develop processes for conducting and reporting to relevant stakeholders a comprehensive analysis of covert test results collected by TSA headquarters officials and TSA officials at airports to identify vulnerabilities in screener performance and common root causes contributing to screener test passes and failures. (Recommendation 8) The Administrator of TSA should develop a standard process for systematically documenting and disseminating to airport Federal Security Directors beneficial practices for conducting covert tests and using test results. (Recommendation 9) <7. Agency Comments and Our Evaluation> We provided a draft of this report to DHS and TSA for review and comment. DHS provided written comments which are reprinted in appendix II. In its comments, DHS concurred with all 9 recommendations and described actions planned to address them. TSA also provided technical comments, which we incorporated as appropriate. 
We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or russellw@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology This report addresses the Transportation Security Administration's (TSA) covert testing for checkpoint and checked baggage screening. More specifically, the report (1) describes how TSA has changed its covert test processes since 2016 and analyzes the extent to which these processes are risk-informed; (2) analyzes the extent to which TSA covert tests for fiscal years 2016 through March 2018 produced quality information; and (3) analyzes the extent to which TSA has used the results of covert tests to address any identified security vulnerabilities. To understand how both the Security Operations and Inspection offices changed their respective covert test processes since 2016, we reviewed agency documentation, interviewed agency officials, and observed 22 Security Operations and 4 Inspection covert tests at 5 different airports. In addition to Inspection testing, our observations included the two types of testing overseen by Security Operations: Headquarters Evaluation Team (HET) testing and Field Evaluation Team (FET) testing. To gather information on how covert tests are carried out in different airport environments, we observed tests at four category X airports and one category I airport. We selected airports for observations on the basis of airport category and screener workforce (private vs. TSA-employed screeners). For all observations, we were able to observe TSOs performing checkpoint or checked baggage screening activities during tests. Following all observations, we observed post-test reviews and, when appropriate, interviewed TSA airport officials, including the Transportation Security Officers (TSO) and private sector screeners (collectively referred to as TSOs in this report) who were tested, about their experience with these tests. To determine the extent to which Security Operations and Inspection testing is risk-informed, we reviewed program documentation and spoke with agency officials. Specifically, we reviewed operational guidance and test scenarios, which describe the overall intent of the test, the threat item, and the method of execution (e.g., an explosive device concealed in a shoe carried through the checkpoint), to identify how program officials incorporated the components of risk (threat, vulnerability, and consequence) in their selection of threats and airports to test. We also reviewed the TSA risk assessments that would have been available to Inspection and Security Operations when planning which threats and airports to test for fiscal year 2017, namely TSA's 2016 Transportation Sector Security Risk Assessment and TSA's 2012 Current Airports Threat Assessment. The 2016 Transportation Sector Security Risk Assessment contained attack scenarios for the five transportation modes for which TSA is responsible, including domestic and international commercial aviation, as well as other transportation systems, such as highway and mass transit.
For our analysis, we used those scenarios relevant to our scope: domestic commercial checkpoint and checked baggage screening. We compared the results of these assessments to the threat items and locations that Security Operations selected for tests in fiscal year 2017 and Inspection selected for tests in fiscal years 2016 and 2017. We evaluated each office's process for making risk-informed decisions against Department of Homeland Security (DHS) risk management policies, which require that agencies use risk information and analysis to inform decision making, and that risk management methodologies be transparent and properly documented. To assess the quality of Security Operations' data, we reviewed program guidance and interviewed program officials to understand how Security Operations uses HET test results to validate the quality of FET testing at local airports. We also reviewed a 2016 validation study of Security Operations' test process conducted by the DHS Science and Technology Directorate, and spoke with subject matter experts who conducted the study about their findings and recommendations related to improving the quality of test information. We concluded the study's findings were reasonably sufficient to use as additional support for patterns we also observed during site visits. We were also informed by our HET and FET test observations, which included observations of 19 HET tests at 3 different airports and 3 FET tests at 1 airport. We supplemented our understanding of how airports conduct FET tests through semi-structured telephone interviews with 10 different Federal Security Directors (FSD) and their staff. To select FSDs for interviews, we identified the airports at which TSA conducted more than the average number of HET covert tests in fiscal year 2017. We focused on the number of HET (as opposed to FET) tests because they are Security Operations' quality assurance method for airport covert test programs, and we wanted to ensure FSDs had sufficient experience with these tests to provide us their perspectives. From this group, we identified the airports with the highest and lowest pass rates for HET tests, and selected among these to reflect variation in several factors, including airport category, difference between HET and FET detection rates, and whether the airport had been tested by Inspection in fiscal years 2016 and 2017. Finally, to assess the quality of Security Operations' testing, we calculated detection rates for its two types of testing: Headquarters Evaluation Team (HET) tests, in which Security Operations headquarters staff travel to airports to conduct tests, and Field Evaluation Team (FET) tests, which are conducted by staff at local airports. We assessed FET test results against the Security Operations criterion stating that differences in HET and FET detection rates must be within a designated number of percentage points. We made these comparisons by analyzing complete test results for fiscal year 2017 and the first 6 months of fiscal year 2018, over three 6-month periods, in order to identify trends. Our analysis used the 12,000 fiscal year 2017 Security Operations TPF records documenting the results of individual covert tests, as well as an additional 3,600 records from fiscal year 2018.
For our analysis, we calculated HET and FET detection rates (i.e., number of items successfully detected) for three screening paths: a checkpoint test with the item concealed on the tester, a checkpoint test with the item concealed in a carry-on bag, and a checked baggage test with the item concealed in the checked bag. In calculating these detection rates, we included only results for scenarios tested within the 18-month period that had both HET and FET tests, and we excluded any test results for scenarios involving enhanced screening. Also, in our calculation of the FET detection rate, we included FET test results for all airports, including those from smaller (category III and IV) airports, which HET teams generally do not visit. We chose to include FET results from all airports in our analysis because it better reflected the overall performance of airports on covert tests. In addition to comparing Security Operations quality assurance process against the program s criteria, we assessed it against federal internal control criteria for documenting processes. To assess the quality of Inspection testing, we reviewed program guidance to identify testing requirements, methods, and limitations. We also observed four different tests conducted at a Category X airport. In addition, we reviewed Inspection guidance to identify and assess requirements for analyzing and reporting covert test results, and reviewed completed reports to identify the extent to which Inspection followed these requirements. We met with Inspection technical experts to discuss Inspection processes for selecting a sample of airports for tests and for analyzing and compiling covert test findings. To assess the extent to which Inspection and Security Operations address security vulnerabilities, we reviewed their efforts separately because each office utilized a different approach. To assess Inspection s efforts, we focused on its use of the Security Vulnerability Management Process, an agency-wide process that Inspection designated in 2016 as the principal means by which it addresses its identified vulnerabilities. To obtain a more complete understanding of the extent to which this process has addressed Inspection vulnerabilities, we reviewed documentation related to the process (such as its charter) and other information pertaining to all vulnerabilities Inspection has submitted to the process, including those that were unrelated to checkpoint and checked baggage screening (e.g., cargo screening). We analyzed timeframes associated with the vulnerabilities reviewed under the process and the progress made toward closing nine Inspection-identified vulnerabilities. We assessed the vulnerability management process against standards for program management issued by the Project Management Institute, a not-for-profit association that provides global standards for, among other things, project and program management. Given the focus of Security Operations testing on screener performance, the vulnerabilities it identified involved TSO failures on tests of specific procedures. To determine how Security Operations headquarters officials address vulnerabilities involving screener performance, we reviewed program documentation, including program guidance and periodic reporting of results, and interviewed program managers. 
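For illustration only, the sketch below shows the general shape of the exclusion rules and detection-rate calculation described above: dropping tests that involve enhanced screening, keeping only scenarios tested by both HET and FET, and then computing a detection rate by team and screening path. The record layout and values are hypothetical placeholders we supplied for readability; they are not the actual TPF data structure or any actual test results.

from collections import defaultdict

# Illustrative test records; field names and values are invented placeholders.
records = [
    {"team": "HET", "path": "checkpoint on-person", "scenario": "S1",
     "enhanced": False, "detected": True},
    {"team": "FET", "path": "checkpoint on-person", "scenario": "S1",
     "enhanced": False, "detected": False},
    {"team": "HET", "path": "checked baggage", "scenario": "S2",
     "enhanced": False, "detected": True},
    {"team": "FET", "path": "checked baggage", "scenario": "S2",
     "enhanced": True, "detected": True},  # dropped by the enhanced-screening rule
]

# Rule 1: exclude tests involving enhanced screening.
usable = [r for r in records if not r["enhanced"]]

# Rule 2: keep only scenarios that have both HET and FET results.
scenarios_by_team = defaultdict(set)
for r in usable:
    scenarios_by_team[r["team"]].add(r["scenario"])
shared_scenarios = scenarios_by_team["HET"] & scenarios_by_team["FET"]
usable = [r for r in usable if r["scenario"] in shared_scenarios]

# Detection rate = detected tests divided by total tests, by team and path.
totals = defaultdict(int)
detections = defaultdict(int)
for r in usable:
    key = (r["team"], r["path"])
    totals[key] += 1
    detections[key] += int(r["detected"])

for team, path in sorted(totals):
    rate = 100.0 * detections[(team, path)] / totals[(team, path)]
    print(f"{team} | {path}: {rate:.0f}% ({detections[(team, path)]}/{totals[(team, path)]})")

In practice, the actual calculations are more involved; this sketch is intended only to make the stated inclusion and exclusion rules concrete.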
To understand how the results of covert testing are used at the airport level to improve TSO performance and address other identified vulnerabilities, we conducted semi-structured interviews with 10 TSA FSDs stationed at airports across the United States, and with three TSA Regional Directors. We selected the latter based on whether the Regional Director had under his or her direction at least 1 of the 10 FSDs we selected for interviews, and to reflect variety in geographic location. We assessed the efforts of Security Operations and TSA officials at airports to use covert test results to address vulnerabilities against federal internal control standards and criteria within the National Infrastructure Protection Plan. This is the public version of a classified report that we issued on January 10, 2019. The classified report included an objective related to identifying the results of covert testing for fiscal years 2016 and 2017 and assessing the quality of this test information. DHS deemed covert testing results (including detection rates and identified vulnerabilities) to be classified information, which must be protected from loss, compromise, or inadvertent disclosure. Consequently, this report omits part of an objective identifying the results of covert testing. DHS also deemed some of the information in our January report to be sensitive security information. Therefore, this report omits information describing TSA screening procedures, the results of agency risk assessments, and airport-level covert test results. The performance audit upon which this report is based was conducted from September 2017 to January 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient and appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained from this work provides a reasonable basis for our findings and conclusions based on our audit objectives. We worked with DHS from February 2019 through April 2019 to prepare this unclassified, non-sensitive version of the original classified report for public release. This public version was also prepared in accordance with these standards. Appendix III: GAO Contact and Staff Acknowledgments William Russell (202) 512-8777 or RussellW@gao.gov. <8. Staff Acknowledgments> In addition to the contact named above, Ellen Wolfe (Assistant Director), Mona Nichols Blake (Analyst in Charge), James Ashley, Chuck Bausell, Jason Blake, Michele Fejfar, Eric Hauswirth, Susan Hsu, Tom Lombardi, Minette Richardson, and Nina Thomas-Diggs made significant contributions to this report.
Why GAO Did This Study
TSA uses covert testing to identify potential vulnerabilities in checkpoint and checked baggage screening systems at U.S. airports. In 2015, TSA identified deficiencies in its covert testing process, and in 2017, the Department of Homeland Security Office of Inspector General's covert testing identified deficiencies in screener performance. Since these findings, TSA has taken steps intended to improve its covert test processes and to use test results to better address vulnerabilities.
GAO was asked to review TSA's covert test programs, including how the results are used to address vulnerabilities. This report analyzes the extent to which (1) TSA covert tests are risk-informed, (2) TSA covert tests conducted from fiscal year 2016 through March 2018 produced quality information, and (3) TSA uses covert test results to address any identified security vulnerabilities.
GAO observed 26 TSA covert tests, reviewed TSA guidance, analyzed test data for fiscal years 2016 and 2017 and through March 2018, and interviewed TSA officials.
What GAO Found
Two offices within the Transportation Security Administration (TSA) conduct covert tests at U.S. airports—Inspection and Security Operations. The Department of Homeland Security requires that agencies use risk information to make decisions, and TSA issues annual risk assessments of threats that its program offices should consult when making risk-based decisions, such as what covert tests to conduct. Of the two TSA offices that conduct covert tests, Inspection officials used TSA's risk assessment to guide their efforts. However, Security Operations officials relied largely on their professional judgment in making decisions about what scenarios to consider for covert testing. By not using a risk-informed approach, TSA has limited assurance that Security Operations is targeting the most likely threats.
Both Inspection and Security Operations have implemented processes to ensure that their covert tests produce quality results. However, GAO found that only Inspection has established a new process that has resulted in quality test results. Specifically, for the two reports Inspection completed for testing conducted in fiscal years 2016 and 2017 using its new process, GAO found that the results were generally consistent with quality analysis and reporting practices. On the other hand, Security Operations has not been able to ensure the quality of its covert test results, and GAO identified a number of factors that could be compromising the quality of these results. Unless TSA assesses the current practices used at airports to conduct tests, and identifies the factors that may be impacting the quality of covert testing conducted by TSA officials at airports, it will have limited assurance about the reliability of the test results it is using to address vulnerabilities.
In 2015, TSA established the Security Vulnerability Management Process to leverage agency-wide resources to address systemic vulnerabilities; however, this process has not yet resolved any identified security vulnerabilities. Since 2015, Inspection officials have submitted nine security vulnerabilities identified through covert tests for mitigation, and as of September 2018, none had been formally resolved through this process. GAO found that in some cases, it took TSA officials overseeing the process up to 7 months to assign a responsible office to begin mitigation efforts. In part, this is because TSA has not established time frames and milestones for this process or established procedures to ensure milestones are met, in accordance with best practices for program management. Without doing so, TSA cannot ensure efficient and effective progress in addressing security vulnerabilities.
This is a public version of a classified report that GAO issued in January 2019. Information that TSA deemed classified or sensitive security information, such as the results of TSA's covert testing and details about TSA's screening procedures, has been omitted.
What GAO Recommends
GAO is making nine recommendations, including that TSA use a risk-informed approach for selecting covert test scenarios, take steps to improve the quality of airport covert test results, and establish time frames and milestones for the key steps in its vulnerability management process. TSA concurred with all nine GAO recommendations.
<1. Information on the Potential Economic Effects of Climate Change in the United States Could Help Federal Decision Makers Better Manage Climate Risks> We reported in September 2017 that, while estimates of the economic effects of climate change are imprecise due to modeling and information limitations, they can convey useful insight into broad themes about potential damages in the United States. We reported that, according to the two national-scale studies available at the time that examined the economic effects of climate change across U.S. sectors, potential economic effects could be significant and these effects will likely increase over time for most of the sectors analyzed. For example, for 2020 through 2039, one of the studies estimated from $4 billion to $6 billion in annual coastal property damages from sea level rise and more frequent and intense storms. In addition, the national-scale studies we reviewed and several experts we interviewed for the September 2017 report suggested that potential economic effects could be unevenly distributed across sectors and regions. For example, one of the studies estimated that the Southeast, Midwest, and Great Plains regions will likely experience greater combined economic effects than other regions, largely because of coastal property damage in the Southeast and changes in crop yields in the Midwest and Great Plains (see figure 1). This is consistent with the findings of the Fourth National Climate Assessment. For example, according to that assessment, the continued increase in the frequency and extent of high-tide flooding due to sea level rise threatens America's trillion-dollar coastal property market and public infrastructure sector. As we reported in September 2017, information on the potential economic effects of climate change could help federal decision makers better manage climate risks, according to leading practices for climate risk management, economic analysis we reviewed, and the views of several experts we interviewed. For example, such information could inform decision makers about significant potential damages in different U.S. sectors or regions. According to several experts and our prior work, this information could help federal decision makers identify significant climate priorities as an initial step toward managing climate risks. Such a first step is consistent with leading practices for climate risk management and federal standards for internal control. For example, leading practices from the National Academies call for climate change risk management efforts that focus on where immediate attention is needed. As noted in our September 2017 report, according to a 2010 National Academies report, other literature we reviewed, and several experts we interviewed, to make informed choices, decision makers need more comprehensive information on economic effects to better understand the potential costs of climate change to society and begin to develop an understanding of the benefits and costs of different options for managing climate risks. <2. The Federal Government Faces Fiscal Exposure from Climate Change Risks, but Does Not Have Certain Information Needed to Help Make Budget Decisions> The federal government faces fiscal exposure from climate change risks in a number of areas, and this exposure will likely increase over time, as we concluded in September 2017.
In the March 2019 update to our High-Risk List, we summarized our previous work that identified several of these areas across the federal government, including programs related to the following: Disaster aid. The rising number of natural disasters and increasing reliance on federal assistance are a key source of federal fiscal exposure, and this exposure will likely continue to rise. Since 2005, federal funding for disaster assistance is at least $450 billion. In September 2018, we reported that four hurricane and wildfire disasters in 2017 created an unprecedented demand for federal disaster resources and that hurricanes Harvey, Irma, and Maria ranked among the top five costliest hurricanes on record. Subsequently, the fall of 2018 brought additional catastrophic disasters such as Hurricanes Florence and Michael and devastating California wildfires, with further needs for federal disaster assistance. Disaster costs are projected to increase as certain extreme weather events become more frequent and intense due to climate change, as observed and projected by the U.S. Global Change Research Program (USGCRP). In July 2015, we reported that the federal government does not adequately plan for disaster resilience and that most federal funding for hazard mitigation is available after a disaster. In addition, our prior work found that the Federal Emergency Management Agency's (FEMA) indicator for determining whether to recommend that a jurisdiction receive disaster assistance, which was set in 1986, is artificially low because it does not accurately reflect the ability of state and local governments to respond to disasters. Without an accurate assessment of a jurisdiction's capability to respond to a disaster without federal assistance, we found that FEMA runs the risk of recommending that the President award federal assistance to jurisdictions that have the capability to respond and recover on their own. Federal insurance for property and crops. The National Flood Insurance Program (NFIP) and the Federal Crop Insurance Corporation are sources of federal fiscal exposure due, in part, to the vulnerability of the insured property and crops to climate change. These programs provide coverage where private markets for insurance do not exist, typically because the risk associated with the property or crops is too great to privately insure at a cost that buyers are willing to accept. From 2013 to 2017, losses paid under NFIP and the federal crop insurance program totaled $51.3 billion. Federal flood and crop insurance programs were not designed to generate sufficient funds to fully cover all losses and expenses, which means the programs need budget authority from Congress to operate. The NFIP, for example, was about $21 billion in debt to the Treasury as of April 2019. Further, the Congressional Budget Office estimated in May 2019 that federal crop insurance would cost the federal government an average of about $8 billion annually from 2019 through 2029. Operation and management of federal property and lands. The federal government owns and operates hundreds of thousands of facilities and manages millions of acres of land that could be affected by a changing climate and represent a significant federal fiscal exposure. For example, the Department of Defense (DOD) owns and operates domestic and overseas infrastructure with an estimated replacement value of about $1 trillion.
In September 2018, Hurricane Florence damaged Camp Lejeune and other Marine Corps facilities in North Carolina, resulting in a preliminary Marine Corps repair estimate of $3.6 billion. One month later, Hurricane Michael devastated Tyndall Air Force Base in Florida, resulting in a preliminary Air Force repair estimate of $3 billion and upwards of 5 years to complete the work. In addition, we recently reported that the federal government manages about 650 million acres of land in the United States that could be vulnerable to climate change, including the possibility of more frequent and severe droughts and wildfires. Appropriations for federal wildland fire management activities have increased considerably since the 1990s, as we and the Congressional Research Service have reported. Although the federal government faces fiscal exposure from climate change across the nation, it does not have certain information needed by policymakers to help understand the budgetary impacts of such exposure. We have previously reported that the federal budget generally does not account for disaster assistance provided by Congress, which can reach tens of billions of dollars for some disasters, or the long-term impacts of climate change on existing federal infrastructure and programs. For example, as we reported in April 2018, the Office of Management and Budget's (OMB) climate change funding reports we reviewed did not include funding information on federal programs with significant fiscal exposures to climate change identified by OMB and others, such as domestic disaster assistance, flood insurance, and crop insurance. A more complete understanding of climate change fiscal exposures can help policymakers anticipate changes in future spending and enhance control and oversight over federal resources, as we reported in October 2013. For budget decisions for federal programs with fiscal exposure to climate change, we found in the April 2018 report that information that could help provide a more complete understanding would include: (1) costs to repair, replace, and improve the weather-related resilience of federally-funded property and resources; (2) costs for federal flood and crop insurance programs; and (3) costs for disaster assistance programs, among other identified areas of fiscal exposure to climate change. To help policymakers better understand the trade-offs when making spending decisions, we recommended in the April 2018 report that OMB provide information on fiscal exposures related to climate change in conjunction with future reports on climate change funding.
The report estimated approximate benefits to society (i.e., homeowners, communities, etc.) in excess of costs for several types of resilience projects through the protection of lives and property, and prevention of other losses. For example, while precise benefits are uncertain, the report estimated that for every grant dollar the federal government spent on resilience projects, over time, society could accrue benefits amounting to the following: About $3 on average from projects addressing fire at the wildland urban interface, with most benefits (69 percent) coming from the protection of property (i.e., avoiding property losses). About $5 on average from projects to address hurricane and tornado force winds, with most benefits (89 percent) coming from the protection of lives. This includes avoiding deaths, nonfatal injuries, and causes of post-traumatic stress. About $7 on average from projects that buy out buildings prone to riverine flooding, with most benefits (65 percent) coming from the protection of property. The interim report also estimated that society could accrue benefits amounting to about $11 on average for every dollar invested in designing new buildings to meet the 2018 International Building Code and the 2018 International Residential Code (the model building codes developed by the International Code Council), with most benefits (46 percent) coming from the protection of property. We reported in October 2009 that the federal government's activities to build resilience to climate change were carried out in an ad hoc manner and were not well coordinated across federal agencies. Federal agencies have included some of these activities within existing programs and operations, a concept known as mainstreaming. For example, the Fourth National Climate Assessment reported that the U.S. military integrates climate risks into its analysis, plans, and programs, with particular attention paid to climate effects on force readiness, military bases, and training ranges. However, according to the Fourth National Climate Assessment, while a significant portion of climate risk can be addressed by mainstreaming, the practice may reduce the visibility of climate resilience relative to dedicated, stand-alone approaches and may prove insufficient to address the full range of climate risks. In addition, as we reported in March 2019, the Disaster Recovery Reform Act of 2018 (DRRA) was enacted in October 2018, which could improve state and local resilience to disasters. DRRA, among other things, allows the President to set aside, with respect to each major disaster, a percentage of the estimated aggregate amount of certain grants to use for pre-disaster hazard mitigation and makes federal assistance available to state and local governments for building code administration and enforcement. However, it is too early to tell what impact the implementation of the act will have on state and local resilience. The federal government has made some limited investments in resilience and DRRA could enable additional improvements at the state and local level. However, we reported in September 2017 that the federal government had not undertaken strategic government-wide planning to manage significant climate risks before they become fiscal exposures. We also reported in July 2015 that the federal government had no comprehensive strategic approach for identifying, prioritizing, and implementing investments for disaster resilience.
As an initial step in managing climate risks, most of the experts we interviewed for the September 2017 report told us that federal decision makers should prioritize risk management efforts on significant climate risks that create the greatest fiscal exposure. However, as we reported in our March 2019 High-Risk List, the federal government had not made measurable progress since 2017 to reduce fiscal exposure in several key areas that we have identified. The High-Risk List identified Limiting the Federal Government's Fiscal Exposure by Better Managing Climate Change Risks as an area needing significant attention because the federal government has regressed in progress toward one of our criteria for removal from the list. <4. The Federal Government Could Reduce Its Fiscal Exposure by Focusing and Coordinating Federal Efforts> As we reported in March 2019, the federal government could reduce its fiscal exposure to climate change by focusing and coordinating federal efforts. However, the federal government is currently not well organized to address the fiscal exposure presented by climate change, partly because of the inherently complicated and crosscutting nature of the issue. We have made a total of 62 recommendations related to limiting the federal government's fiscal exposure to climate change over the years, 12 of which have been made since February 2017. As of December 2018, 25 of these recommendations remained open. In describing what needs to be done to reduce federal fiscal exposure to climate change, our March 2019 High-Risk report discusses many of the open recommendations. Implementing these recommendations could help reduce federal fiscal exposure. Several of them, including those highlighted below, identify key government-wide efforts needed to help plan for and manage climate risks and direct federal efforts toward common goals, such as improving resilience: Develop a national strategic plan: In May 2011, we recommended that appropriate entities within the Executive Office of the President (EOP), including OMB, work with agencies and interagency coordinating bodies to establish federal strategic climate change priorities that reflect the full range of climate-related federal activities, including roles and responsibilities of key federal entities. Use economic information to identify and respond to significant climate risks: In September 2017, we recommended that the appropriate entities within EOP use information on the potential economic effects of climate change to help identify significant climate risks facing the federal government and craft appropriate federal responses. Such federal responses could include establishing a strategy to identify, prioritize, and guide federal investments to enhance resilience against future disasters. Provide decision makers with the best available climate information: In November 2015, we reported that federal efforts to provide information about climate change impacts did not fully meet the climate information needs of federal, state, local, and private sector decision makers, which hindered their efforts to plan for climate change risks. We reported that these decision makers would benefit from a national climate information system that would develop and update authoritative climate observations and projections specifically for use in decision-making.
As a result, we recommended that EOP (1) designate a federal entity to develop and periodically update a set of authoritative climate observations and projections for use in federal decision-making, which other decision makers could also access; and (2) designate a federal entity to create a national climate information system with defined roles for federal agencies and nonfederal entities with existing statutory authority. Consider climate information in design standards: In November 2016, we reported that design standards, building codes, and voluntary certifications established by standards-developing organizations play a role in ensuring the resilience of infrastructure to the effects of natural disasters. However, we reported that these organizations faced challenges to using forward-looking climate information that could help enhance the resilience of infrastructure. As a result, we recommended in the November 2016 report that the Department of Commerce, acting through the National Institute of Standards and Technology, which is responsible for coordinating federal participation in standards organizations, convene federal agencies for an ongoing government-wide effort to provide the best available forward-looking climate information to standards-developing organizations for their consideration in the development of design standards, building codes, and voluntary certifications. In conclusion, the effects of climate change have already posed and will continue to pose risks that can create fiscal exposure across the federal government, and this exposure will continue to increase. The federal government does not generally account for such fiscal exposure to programs in the budget process, nor has it undertaken strategic efforts to manage significant climate risks that could reduce the need for far more costly steps in the decades to come. To reduce its fiscal exposure, the federal government needs a cohesive strategic approach with strong leadership and the authority to manage risks across the entire range of related federal activities. The federal government could make further progress toward reducing fiscal exposure by implementing the recommendations we have made. Chairman Yarmuth, Ranking Member Womack, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. <5. GAO Contact and Staff Acknowledgments> If you or your staff have any questions about this testimony, please contact me at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are J. Alfredo Gómez (Director), Joseph Dean Thompson (Assistant Director), Anne Hobson (Analyst in Charge), Celia Mendive, Kiki Theodoropoulos, Reed Van Beveren, and Michelle R. Wong. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study
Since 2005, federal funding for disaster assistance is at least $450 billion, including approximately $19.1 billion in supplemental appropriations signed into law on June 6, 2019. In 2018 alone, there were 14 separate billion-dollar weather and climate disaster events across the United States, with a total cost of at least $91 billion, according to the National Oceanic and Atmospheric Administration. The U.S. Global Change Research Program projects that disaster costs will likely increase as certain extreme weather events become more frequent and intense due to climate change.
The costs of recent weather disasters have illustrated the need for planning for climate change risks and investing in resilience. Resilience is the ability to prepare and plan for, absorb, recover from, and more successfully adapt to adverse events, according to the National Academies of Sciences, Engineering, and Medicine. Investing in resilience can reduce the need for far more costly steps in the decades to come.
Since February 2013, GAO has included Limiting the Federal Government's Fiscal Exposure by Better Managing Climate Change Risks on its list of federal program areas at high risk of vulnerabilities to fraud, waste, abuse, and mismanagement or most in need of transformation. GAO updates this list every 2 years. In March 2019, GAO reported that the federal government had not made measurable progress since 2017 to reduce fiscal exposure to climate change.
This testimony—based on reports GAO issued from October 2009 to March 2019—discusses (1) what is known about the potential economic effects of climate change in the United States and the extent to which this information could help federal decision makers manage climate risks across the federal government, (2) the potential impacts of climate change on the federal budget, (3) the extent to which the federal government has invested in resilience, and (4) how the federal government could reduce fiscal exposure to the effects of climate change.
GAO has made 62 recommendations related to the Limiting the Federal Government’s Fiscal Exposure by Better Managing Climate Change Risks high-risk area. As of December 2018, 25 of those recommendations remained open.
What GAO Found
The estimated economic effects of climate change, while imprecise, can convey useful insight about potential damages in the United States. In September 2017, GAO reported that the potential economic effects of climate change could be significant and unevenly distributed across sectors and regions (see figure). This is consistent with the recent findings of the U.S. Global Change Research Program's Fourth National Climate Assessment, which concluded, among other things, that the continued increase in the frequency and extent of high-tide flooding due to sea level rise threatens America's trillion-dollar coastal infrastructure.
Information about the potential economic effects of climate change could inform decision makers about significant potential damages in different U.S. sectors or regions. According to prior GAO work, this information could help decision makers identify significant climate risks as an initial step toward managing them.
The federal government faces fiscal exposure from climate change risks in several areas, including:
Disaster aid: due to the rising number of natural disasters and increasing reliance on federal assistance. GAO has previously reported that the federal government does not adequately plan for disaster resilience. GAO has also reported that, due to an artificially low indicator for determining a jurisdiction's ability to respond to disasters that was set in 1986, the Federal Emergency Management Agency risks recommending federal assistance for jurisdictions that could recover on their own.
Federal insurance for property and crops: due, in part, to the vulnerability of insured property and crops to climate change impacts. Federal flood and crop insurance programs were not designed to generate sufficient funds to fully cover all losses and expenses. The flood insurance program, for example, was about $21 billion in debt to the Treasury as of April 2019. Further, the Congressional Budget Office estimated in May 2019 that federal crop insurance would cost the federal government an average of about $8 billion annually from 2019 through 2029.
Operation and management of federal property and lands: due to the hundreds of thousands of federal facilities and millions of acres of land that could be affected by a changing climate and more frequent extreme events. For example, in 2018, Hurricane Michael devastated Tyndall Air Force Base in Florida, with a preliminary repair estimate of $3 billion.
The federal budget, however, does not generally account for disaster assistance provided by Congress or the long-term impacts of climate change on existing federal infrastructure and programs. GAO has reported that more complete information about fiscal exposure could help policymakers better understand the trade-offs when making spending decisions.
Further, federal investments in resilience to reduce fiscal exposures have been limited. As GAO has reported, enhancing resilience can reduce fiscal exposure by reducing or eliminating long-term risk to people and property from natural hazards. For example, a 2018 interim report by the National Institute of Building Sciences estimated approximate benefits to society in excess of costs for several types of resilience projects. While precise benefits are uncertain, the report estimated that for every dollar invested in designing new buildings to particular design standards, society could accrue benefits amounting to about $11 on average.
The federal government has invested in individual agency efforts that could help build resilience within existing programs or projects. For example, the National Climate Assessment reported that the U.S. military integrates climate risks into its analysis, plans, and programs. In addition, as GAO reported in March 2019, the Disaster Recovery Reform Act of 2018 could improve resilience by allowing the President to set aside a portion of certain grants for pre-disaster mitigation. However, the federal government has not undertaken strategic government-wide planning to manage climate risks.
GAO's March 2019 High-Risk report identified a number of recommendations GAO has made related to fiscal exposure to climate change. The federal government could reduce its fiscal exposure by implementing these recommendations. Among GAO's key government-wide recommendations are:
Entities within the Executive Office of the President (EOP) should work with partners to establish federal strategic climate change priorities that reflect the full range of climate-related federal activities;
Entities within EOP should use information on potential economic effects from climate change to help identify significant climate risks and craft appropriate federal responses;
Entities within EOP should designate a federal entity to develop and update a set of authoritative climate observations and projections for use in federal decision making, and create a national climate information system with defined roles for federal agencies and certain nonfederal entities; and
The Department of Commerce should convene federal agencies to provide the best-available forward-looking climate information to organizations that develop design standards and building codes to enhance infrastructure resilience.
<1. Background> Signed into law on May 9, 2014, the DATA Act expands on previous federal transparency legislation. It requires federal agency expenditures to be disclosed and agency spending information to be linked to federal program activities so that policymakers and the public can more effectively track federal spending. The DATA Act also requires government-wide reporting on a greater variety of data related to federal spending, such as budget and financial information, as well as tracking these data at multiple points in the federal spending life cycle. To accomplish these goals, among others, the act gives OMB and Treasury responsibility for establishing government-wide financial data standards for any federal funds made available to or expended by federal agencies. These standards specify the data to be reported under the DATA Act and define and describe what is to be included in each data element with the aim of ensuring that information reported will be consistent and comparable. As Treasury and OMB implemented the DATA Act's requirement to create and apply data standards, the overall data standardization effort has been divided into two distinct, but related, components: (1) establishing definitions, which describe what is included in each data element with the aim of ensuring that information will be consistent and comparable, and (2) creating a data exchange standard with technical specifications, which describe the format, structure, tagging, and transmission of each data element. In the implementation of the DATA Act, OMB took principal responsibility for the definitions, while Treasury took principal responsibility for the technical standards that express these definitions, which federal agencies use to report spending data for publication on USAspending.gov. The act also holds agencies accountable for submitting complete and accurate data to USAspending.gov and requires that agency-reported award and financial information comply with the data standards established by OMB and Treasury. <2. The Importance of Data Governance for Ensuring Data Quality> One of the purposes of the DATA Act is to establish government-wide data standards to provide consistent, reliable, and searchable spending data that are displayed accurately for taxpayers and policymakers on USAspending.gov (or a successor website). As we have reported previously, establishing a data governance structure (an institutionalized set of policies and procedures for providing data governance throughout the life cycle of developing and implementing data standards) is critical for ensuring that the integrity of the standards is maintained over time. The need for a data governance structure is underscored by our previous analyses of the quality of the federal spending data available on USAspending.gov and inconsistencies we identified in how agencies report data according to data standards. A data governance structure could be useful for adjudicating revisions, monitoring, and ensuring compliance with the standards over time. As we have noted, such a structure, if properly implemented, would greatly increase the likelihood that the data made available to the public will be accurate. A data governance structure can also provide consistent data management during times of change and transition. We have previously reported that gaps in leadership can occur as administrations change.
This can impair the effectiveness and efficiency of complex government-wide efforts, potentially resulting in delays and missed deadlines. Accordingly, in 2015, we recommended that OMB, in collaboration with Treasury, establish a set of clear policies and processes for developing and maintaining data standards that are consistent with leading practices. OMB and Treasury did not comment on our recommendation. We plan to conduct work intended to help inform OMB's and Treasury's efforts. This work may include the development of a maturity model that could provide a framework for assessing data governance activities related to federal spending data. This work may also have broader government-wide implications as agencies begin implementing the requirements of the Foundations for Evidence-Based Policymaking Act enacted on January 14, 2019, including designating Chief Data Officers with data governance and implementation responsibilities. <3. Although Some Governance Procedures Are in Place, a Formal Structure for Governing Data Standards Continues to Evolve> <3.1. Roles of Data Governance Interagency Advisory Groups Have Shifted During DATA Act Implementation> In December 2018, OMB staff told us that they are transitioning from the governance structure used for initial DATA Act implementation to a new structure for managing data standards within the broader context of efforts to establish a federal data strategy. According to OMB staff, the initial data governance structure reflected OMB's and Treasury's focus on creating the design and build functions required to meet the statutory requirements of the DATA Act. The President's Management Agenda (PMA), released in March 2018, outlines a long-term vision for modernizing federal operations. To address the issues outlined in the PMA, the administration established a number of cross-agency priority (CAP) goals. These goals, required by the GPRA Modernization Act of 2010, are to address issues in a limited number of policy areas requiring action across multiple agencies, or management improvements that are needed across the government. According to OMB staff, several of the 2018 goals relate to data standardization, and a new governance structure is needed to achieve those goals. OMB staff informed us in July 2018 that the governance structure used for initial implementation efforts, which included the DATA Act Interagency Advisory Committee and Data Standards Committee, had been disbanded, and that the advisory roles of these groups were assumed by the Chief Financial Officers Council's DATA Act Working Group (CFOC Working Group). According to OMB staff, the working group includes four subgroups, which focus on Policy, Internal Controls and Data Quality, Audit Coordination, and the DATA Act Information Model Schema (DAIMS), respectively. OMB staff also told us that by December 2018 an interagency board and council, both led by the General Services Administration (GSA), had begun to advise OMB on policy matters. According to an action plan that OMB and GSA released along with the March 2018 PMA, the new interagency Shared Solutions Governance Board (SSGB) and Business Standards Council (BSC) are responsible for setting goals and providing advice to promote a government-wide marketplace for shared services. Specifically, they cover mission-support services such as human resources and financial management that a small number of providers offer to many agencies.
According to OMB staff, this oversight function involves creating and administering government-wide data standards, including data standards established to support the DATA Act. The SSGB includes executives from across the federal government and is responsible for making recommendations to OMB on shared services policy. The BSC provides expertise on various subject matter areas (e.g., procurement and financial assistance) to promote the development of common business capabilities and data standards. The action plan does not discuss how the functions carried out by the SSGB and BSC apply specifically to the data standards established under the DATA Act. In commenting on a draft of this report, OMB staff told us that the Governance Ecosystem page on the website of Unified Shared Services Management (USSM) describes the SSGB and links functions of the SSGB and BSC to the DATA Act. They said it does this by showing that OMB and Treasury have key roles in all three entities. However, this common membership does not, in itself, provide the transparency and clarity of documented policies and procedures for governing DATA Act standards. Treasury officials said that the CFOC Working Group is involved in aligning DATA Act data standards across various functional communities, including procurement and financial assistance. Further, the group is considering making recommendations to OMB regarding changes to data definitions and other policy matters. For example, Treasury officials told us that in fall 2018, the DAIMS Subgroup identified difficulties in aligning different definitions of Period of Performance Start Date used for procurement and in financial assistance awards, and plans to elevate this issue to the Policy Subgroup for review. Specifically, the DAIMS Subgroup found that it is not always clear whether the start date should be reported as the date when a specific transaction occurred or the date when the original underlying award was made. This choice about how to interpret the data element can have substantial consequences for the consistency of the data reported. For example, in some cases, the underlying awards for recent transactions were made in the 1960s or 1970s. According to OMB staff and Treasury officials, at the center of this shifting array of advisory bodies, the DATA Act Executive Steering Committee (ESC) has continued to meet regularly and to serve as the top-level governance body for DATA Act implementation. OMB staff told us that the ESC is chiefly responsible for setting government-wide policy for the data standards based on the recommendations from various advisory bodies. In addition to the ESC, Treasury has continued to maintain and update the DAIMS and DATA Act Broker, following a set of change control procedures that involve consultation with stakeholders and public release of information about updates. <3.2. OMB and Treasury Have Instituted Some Data Governance Activities but Have Not Established a Set of Clear Policies and Processes> Although OMB has taken some steps to address our recommendation, efforts are still needed to establish a clear set of policies and procedures for governing the data standards established under the act. The key practices for data governance that we identified in our previous work are shown in table 1. 
In the specific context of the DATA Act standards, Treasury and OMB have taken steps to enforce the use of data standards by directing agencies to develop and maintain data quality plans and requiring agencies to submit data through the DATA Act Broker. The broker performs validations to improve data quality and ensure the consistent application of data standards. However, because the approach to governing DATA Act data standards has continued to evolve during the past few years, and because a set of data governance policies and procedures is not documented, we were unable to conduct a comprehensive assessment of OMB's and Treasury's data governance efforts against leading practices. While some data governance activities have been undertaken within the specific context of DATA Act data standards, others are part of broader efforts under the PMA. In July 2018, OMB staff told us that governance over the DATA Act data standards would be accomplished within the broader context of the CAP goals established under the PMA. For example, OMB established a governance structure to achieve the objectives of CAP goals related to Results-Oriented Accountability for Grants. As part of this broader effort to standardize grants management business processes and data to increase efficiency and reduce reporting burden, OMB, the Department of Health and Human Services, and other federal agencies have published a list of draft grants management data standards for public comment. However, published documents describing this effort do not explain how the process for developing grants management standards under this CAP goal would apply specifically to the data standards established under the DATA Act. Nor do they address if or how these new standards align with those established under the act. Further, none of the documentation on the PMA's governance structure for grants management mentions the DATA Act. In commenting on a draft of this report, OMB staff told us that the staff members from OMB and Treasury who are responsible for the grants management standards are the same people involved in managing the DATA Act standards. While this connection between the two efforts may provide adequate communication in the short term, staffing is likely to change over time, and there is no assurance that the same people will always be involved. As we have reported previously, having documented policies in place that delineate clear roles and responsibilities for decision-making could help to ensure continuity into the future. As the Comptroller General testified in 2015, in the absence of a clear set of institutionalized policies and processes for developing standards and for adjudicating necessary changes, the ability to sustain progress and maintain the integrity of established data standards may be jeopardized as priorities and data standards shift over time. <4. OMB Does Not Have Procedures for Updating Data Definition Standards> <4.1. OMB Has Not Established Procedures for Making Decisions about Changes to Existing Data Definitions> Managing and controlling changes to data standards is a key activity for data standardization and effective data governance. The DATA Act requires OMB and Treasury, in consultation with the heads of federal agencies, to establish government-wide financial data standards that include common data elements for financial and payment information required to be reported by federal agencies and entities receiving federal funds.
Among other requirements, these standards, to the extent reasonable and practicable, must be capable of being continually upgraded as necessary. According to key practices for data governance that we identified in our previous work, organizations should have documented policies and procedures for making decisions about changes to existing data standards. In June 2018, OMB staff changed certain data definitions in the publicly accessible website that serves as the official repository for the data definitions. However, OMB does not have a documented procedure for updating or making changes to these definitions. In commenting on a draft of this report, OMB staff stated that the DATA Act Information Model Schema (DAIMS) change control procedures were the method for updating data standards. However, OMB's website for data definitions is maintained separately from the DAIMS, and the DAIMS procedures only address changes to the DAIMS, and do not address this separate repository of data definition standards. OMB staff said that the June 2018 revisions were made in response to the findings of our November 2017 report. Specifically, OMB revised the Primary Place of Performance Address definition to no longer include a street address or county. OMB amended the definition of Record Type to clarify that it applies to financial assistance awards only. As shown in figure 1, OMB also amended the explanatory text preceding the definitions to revise and clarify its policy regarding agencies' use of data definitions. OMB staff described the changes to definitions as minor technical corrections to align with the reporting instructions in the DAIMS. In December 2018, OMB staff informed us that OMB's procedure for making changes to the data definitions it maintains in the official repository can be found on the Governance Ecosystem page of the website of Unified Shared Services Management (USSM). However, our review of that page in January 2019, including the links it provides to other pages, found no evidence that the website provides any documentation related to the DATA Act. In particular, we found no evidence of a documented procedure for making changes to the data definitions in OMB's official repository. The staff were unable to provide documentation to show that any standard procedure was followed in making the June 2018 changes, or that the DATA Act Executive Steering Committee approved the changes. As discussed earlier in this report, that committee is the top-level governing body for DATA Act implementation and is responsible for approving changes to data standards. The evolution of OMB's approach to developing a governance structure to maintain the integrity of the DATA Act data standards could in part explain the lack of a documented procedure for updating the definitions. As discussed above, OMB has created and disbanded various advisory bodies for DATA Act data standards and has only recently decided on an approach for formalizing governance over the standards, namely the decision to integrate governance of these standards with the governance processes administered by the SSGB. In 2015, we reported that establishing a formal framework for providing data governance throughout the life cycle of developing and implementing these standards is critical for ensuring that the integrity of the standards is maintained over time. Without established written procedures for making revisions to data definitions, needed changes may not be made in a timely manner, which could impair data quality.
For example, if the definitions in the DATA Act official repository and definitions in other sources are not aligned, then agency staff responsible for DATA Act compliance and reporting may make inconsistent choices about which definitions to apply when creating and submitting data. As we have previously reported, the current data governance structure did not prevent inconsistencies between the DAIMS and the official repository for data definitions. <4.2. OMB Revised Data Definition Standards without Transparently Communicating the Changes to Stakeholders> Changes to data standards for federal spending data should be transparently communicated to stakeholders, including the public. The DATA Act requires OMB and Treasury to consult with public and private stakeholders in establishing data standards. In addition, according to key practices for data governance that we identified in 2016, organizations should have documented policies and procedures for managing, controlling, monitoring, and enforcing consistent application of data standards and for obtaining input from stakeholders and involving them in key decisions, as appropriate. Standards for internal control in the federal government state that management should externally communicate the necessary quality information to achieve the entity's objectives. These objectives can include those relating to the release of reliable information in accordance with appropriate standards, applicable laws and regulations, and expectations of stakeholders. In the context of standards for transparently reporting federal spending data, stakeholders include the general public as well as staff at federal agencies. OMB did not transparently communicate the June 2018 revisions. OMB staff said that the changes were communicated in OMB Memorandum M-18-16, which was issued on June 6, 2018. As shown in figure 2, a footnote in that memorandum contains a link to the official web page for OMB's Office of Federal Financial Management. That page includes a link, labeled DATA Act Data Standards, to the public MAX.gov page that serves as the official repository for the data definition standards. However, neither this footnote nor other text in the memorandum makes reference to changes made to the definitions and policy. As of March 18, 2019, the official repository did not indicate that any changes have been made since the initial creation of the definitions in 2015. OMB did not provide documentation showing that the revisions were communicated to the public or to specific categories of stakeholders, such as users of the data standards within the federal government. As described below, the procedures that Treasury has implemented for managing changes to technical guidance, including publishing revision histories for guidance documents, represent one potentially effective approach to informing stakeholders, including the public, about changes to data standards. OMB staff viewed the revisions made in June 2018 as minor technical corrections that were needed to align the definitions with other OMB policies and with the consensus view of stakeholders at the time the data standards were first established. Consequently, they did not believe it was necessary to communicate these revisions publicly or indicate in the official repository that changes had been made. However, these revisions required significant changes in some federal agencies' use of data definitions.
As we reported in November 2017, some agencies applied DATA Act definitions directly when generating data to be reported to USAspending.gov. The new explanatory text added to the data definition repository instructs agencies not to apply these definitions directly, but instead to apply the more detailed definitions contained in regulations and policies governing the making and management of awards. Without transparent communication of changes to data definition standards, stakeholders, including staff at federal agencies required to report data according to these definitions, may miss important information relating to changes in how, when, and by whom data definitions are to be applied. The staff may then report data that are not consistent and comparable across the federal government. Such inconsistent reporting can undermine the transparency goals of the DATA Act, particularly when it affects key data elements, such as those describing geographical information. For example, we found in November 2017 that inconsistent data were reported about the locations where the federal government spends money, because some agencies used OMB's DATA Act definition of the Primary Place of Performance data element, while other agencies used definitions from other sources, such as the data dictionary for the Federal Procurement Data System Next Generation (FPDS-NG). In addition, a revision history showing when clarifications of policy and corrections to data standards were made could assist users of federal spending data, including historical data, in interpreting those data and assessing their reliability and quality. <4.3. Treasury Has Procedures in Place for Communicating with Stakeholders Regarding Changes to Technical Guidance> Treasury has established procedures for consulting with and informing stakeholders, including the public, about changes to technical guidance and reporting processes. Treasury's stakeholder engagement process includes regular review of and revisions to its technical guidance. Before revisions to guidance are put into effect, Treasury staff circulate proposed changes through an email list that any member of the public can subscribe to, discuss these changes at frequent meetings with federal agency staff responsible for DATA Act reporting, and provide opportunities for agencies to test reporting under the new rules and provide feedback from this testing to Treasury. In addition, the guidance documents provide logs of all changes that have been made since the documents were created. According to Treasury staff, the most important tools for ensuring that agencies report consistent and comparable data are the DATA Act Information Model Schema (DAIMS) and the DATA Act Broker. Treasury's documentation states that the DAIMS is the data standard of the DATA Act and contains standardized data elements that are complete and reflect the requirements of the act. The DAIMS includes reporting guidance that provides agencies with a complete listing of data elements they must report as well as a complete listing of data elements that will be extracted from government-wide systems, such as FPDS-NG. The DAIMS also includes a validation rules document that describes the business rules employed by the DATA Act Broker, which is Treasury's system for collecting and validating agency data. Treasury provides federal agencies with detailed procedures for submitting DATA Act data to the broker.
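To illustrate the kind of business rule a submission broker can apply, the sketch below shows simplified validation checks run against one submitted award record. The element names, rule logic, and messages are hypothetical examples of the general approach; they are not drawn from the actual DAIMS validation rules document.

```python
# Hypothetical examples of broker-style validation rules; element names, rule logic,
# and messages are illustrative and are not the actual DAIMS business rules.
from datetime import date

REQUIRED_ELEMENTS = (
    "award_id",
    "action_date",
    "period_of_performance_start_date",
    "period_of_performance_current_end_date",
)

def validate_record(record):
    """Return a list of findings for one submitted award record."""
    findings = []

    # Completeness check: every required element must be populated.
    for element in REQUIRED_ELEMENTS:
        if not record.get(element):
            findings.append(f"Missing required element: {element}")

    # Format check: date elements must be valid ISO dates (YYYY-MM-DD).
    parsed = {}
    for element in REQUIRED_ELEMENTS[1:]:
        value = record.get(element)
        if value:
            try:
                parsed[element] = date.fromisoformat(value)
            except ValueError:
                findings.append(f"{element} is not a valid date: {value}")

    # Consistency check: the period of performance must not end before it starts.
    start = parsed.get("period_of_performance_start_date")
    end = parsed.get("period_of_performance_current_end_date")
    if start and end and end < start:
        findings.append("period_of_performance_current_end_date precedes the start date")

    return findings

# Example submission with one inconsistency: the end date precedes the start date.
sample = {
    "award_id": "EXAMPLE-0001",
    "action_date": "2018-06-29",
    "period_of_performance_start_date": "2019-01-15",
    "period_of_performance_current_end_date": "2018-12-31",
}
print(validate_record(sample))
```

A record that produced findings like these would typically be returned to the submitting agency for correction, which is how validation rules of this general kind help enforce consistent application of the data standards.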
Before making changes to the DAIMS and DATA Act Broker, Treasury provides stakeholders with information about the planned changes and an opportunity to comment on them. For example, in June 2018, Treasury released DAIMS 1.3, an updated version of the DAIMS to be implemented during fiscal year 2019. Before releasing the final version of DAIMS 1.3, on June 29, 2018, Treasury shared its plans for the release with stakeholders through the Chief Financial Officers Council s DATA Act Working Group (CFOC Working Group) and office hours calls. Treasury also transmitted a notice of proposed changes to federal agencies, collected comments from agencies during a designated comment period, and included responses to these comments in the final version of the release. Before implementing any of the changes in DAIMS 1.3 in the DATA Act Broker, Treasury provided agencies with a testing environment that allowed agency staff to identify any issues with the changes before those changes were applied to data published on USAspending.gov. Treasury s documentation for the public and for federal agencies includes detailed information about the history of changes to the DAIMS. Each of the DAIMS guidance documents includes a change log that shows revisions made since the document was first created. The detailed information Treasury provides about changes to technical guidance can promote data quality and transparency by ensuring that federal staff are aware of reporting requirements, and that users of the data understand how those data are created and reported. <5. Conclusions> Since 2014, OMB and Treasury have made significant strides to address the DATA Act s requirements for standardization of federal spending data. As they move forward, appropriately and effectively managing changes to data standards will be critical to ensuring the quality and comparability of the data across the federal government. Treasury has instituted regular procedures for making changes to technical data standards, including procedures for consulting with stakeholders and for recording and communicating changes. OMB has taken responsibility for maintaining an official list of data definition standards separate from the technical data standards maintained by Treasury. However, OMB lacks comparable procedures for maintaining these data definition standards. OMB made changes to some of the definitions and clarified policies about how they are to be applied, but did not communicate those changes to stakeholders, including the public. Definitions of data elements and policies about how those definitions are to be applied are a key component of the management of federal spending data under the DATA Act. Although OMB has completed the task of creating an initial set of definitions, it has not formally and explicitly documented a consistent approach for maintaining the integrity of the data definition standards over time as we previously recommended. Until OMB establishes procedures to ensure that changes are controlled, it will continue to be a challenge to apply and interpret these definitions consistently, presenting risks to data quality. In addition, clearly identifying the changes that have already been made in the official repository could aid agency officials in reporting data and users in understanding the context in which past data have been reported. 
These actions would be important steps toward improving control over the data standards, creating an effective governance structure, and ultimately improving the consistency and quality of federal spending data. <6. Recommendations for Executive Action> We are making two recommendations to the Office of Management and Budget: The Director of OMB should clarify and document OMB s procedure for changing official data definition standards for DATA Act reporting, for example, by explicitly describing how change procedures developed for other government-wide initiatives apply to DATA Act definition standards in a public source of guidance or information. (Recommendation 1) The Director of OMB should ensure that the June 2018 policy changes regarding DATA Act data definition standards are clearly identified and explained in the official repository or another authoritative public source of DATA Act standards and guidance, such as by including a revision history along with the current version of the definitions. (Recommendation 2) <7. Agency Comments and Our Evaluation> We provided a draft of this report to OMB and Treasury for review and comment. OMB neither agreed nor disagreed with our recommendations, and OMB staff from the Office of Federal Financial Management provided oral comments, which are summarized below and incorporated as appropriate in the report. Treasury informed us that they had no comments on the draft report. In their oral comments, OMB staff stated that on the whole, the report correctly described the complex ecosystem of governance over data standards for federal spending data. However, OMB staff stated that in certain places the report did not fully capture the extent of OMB s actions related to data governance for the DATA Act data standards. According to OMB staff, the Shared Solutions Governance Board (SSGB), under OMB s direction, plays an important role in governing DATA Act data standards. OMB staff said that this relationship exists because the same agencies and staff participate in both the SSGB and the governance of the DATA Act data standards. In addition, OMB staff confirmed that descriptions of the specific roles and responsibilities of the SSGB, CFOC Working Group, and the Treasury office that manages the DAIMS have not been documented. OMB staff said that many of the same agency personnel participate in all of these groups, and therefore work closely together on a regular basis. OMB staff stated that this close involvement results in effective communication and a consistent approach to governance, and ensures an understanding of the procedures for changing data standards even though those procedures are not formally documented. We acknowledge OMB s assertion that the various groups for creating and administering government-wide data standards (including data standards established to support the DATA Act) share many of the same staff. However, OMB s approach relies on the continued participation of the same staff in order to maintain continuity rather than relying on documented policies, procedures, roles, and responsibilities for data governance functions. A key benefit of having a robust, institutionalized data governance structure is to provide consistent data management during times of change and transition, such as during staffing transitions or administration changes. 
It is important for OMB to clearly delineate roles and responsibilities so stakeholders understand how governance of the DATA Act standards is accomplished within the broader context of the PMA and other efforts. OMB staff also said they have communicated all changes to DATA Act data standards that have been made since the standards were created. OMB staff told us that the DAIMS is the official data standard for DATA Act reporting and, as such, includes logs that record all changes to the standards. According to OMB staff, OMB updated its public data standards web page on www.max.gov in June 2018 to fix an error and ensure that the page matched the DAIMS. Staff stated their belief that such a correction did not represent an actual change to a data standard and therefore did not need to be recorded in the DAIMS change log or communicated publicly. However, guidance issued in June 2018, OMB Memorandum M-18-16, identifies the MAX.gov web page as the official repository of the data standards. Specifically, the guidance directs agencies to report data in accordance with the standards maintained by OMB and Treasury pursuant to FFATA, as amended by the DATA Act, and provides a link to the OFFM website's listing of data standards definitions. If OMB chose to identify the DAIMS instead of the MAX.gov page as the official source of data standards, then the issue about changes not being identified on the MAX.gov page would not be important. Although OMB made conforming changes based on our input to align the definition of Primary Place of Performance on the MAX.gov web page, having clearly documented procedures for making changes to the data standards and for ensuring that changes are communicated widely is important to achieving the consistent and comparable reporting envisioned under the act. Additionally, in June 2018, OMB made an important change to the explanatory text that precedes these official data definitions as posted on the MAX.gov website, clarifying OMB's policy regarding the use of the DATA Act data definitions. OMB staff acknowledged that this clarification could have been publicized more effectively, which is why we continue to believe that our second recommendation to include a revision history along with the current version of the DATA Act data definitions remains valid. We are sending copies of this report to the Secretary of the Treasury and the Acting Director of OMB, as well as interested congressional committees and other interested parties. This report will be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Michelle Sager at 202-512-6806 or SagerM@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of our report. Key contributors to this report are listed in appendix III. Appendix I: Interagency Groups Responsible for DATA Act Governance GitHub is a web-based software repository hosting service. The Federal Spending Transparency website can be found at: http://fedspendingtransparency.github.io/. JIRA is an online software development tool that Treasury uses to provide responses to stakeholder questions and comments related to the development of the broker. Appendix II: Status of Open GAO Recommendations Related to the DATA Act Appendix III: GAO Contact and Staff Acknowledgments <8. GAO Contact> <9.
Staff Acknowledgments> In addition to the contact named above, Peter Del Toro, Assistant Director, and Kathleen M. Drennan, Analyst-in-Charge, supervised the development of this report. Theodore Alexander and Sherrice Kerns made major contributions to this report. Also contributing to this report in their areas of expertise were David M. Ballard, Ann Czapiewski, Jenny Chanley, Robert Gebhart, Michael LaForge, Carl Ramirez, Andrew J. Stephens, and James Sweetman, Jr. Related GAO Products Open Data: Treasury Could Better Align USAspending.gov with Key Practices and Search Requirements. GAO-19-72. Washington, D.C.: December 13, 2018. DATA Act: Reported Quality of Agencies Spending Data Reviewed by OIGs Varied Because of Government-wide and Agency Issues. GAO-18-546. Washington, D.C.: July 23, 2018. DATA Act: OMB, Treasury, and Agencies Need to Improve Completeness and Accuracy of Spending Data and Disclose Limitations. GAO-18-138. Washington, D.C.: November 8, 2017. DATA Act: As Reporting Deadline Nears, Challenges Remain That Will Affect Data Quality. GAO-17-496. Washington, D.C.: April 28, 2017. DATA Act: Office of Inspector General Reports Help Identify Agencies Implementation Challenges. GAO-17-460. Washington, D.C.: April 26, 2017. DATA Act: Implementation Progresses but Challenges Remain. GAO-17-282T. Washington, D.C.: December 8, 2016. DATA Act: OMB and Treasury Have Issued Additional Guidance and Have Improved Pilot Design but Implementation Challenges Remain. GAO-17-156. Washington, D.C.: December 8, 2016. DATA Act: Initial Observations on Technical Implementation. GAO-16-824R. Washington, D.C.: August 3, 2016. DATA Act: Improvements Needed in Reviewing Agency Implementation Plans and Monitoring Progress. GAO-16-698. Washington, D.C.: July 29, 2016. DATA Act: Section 5 Pilot Design Issues Need to Be Addressed to Meet Goal of Reducing Recipient Reporting Burden. GAO-16-438. Washington, D.C.: April 19, 2016. DATA Act: Progress Made but Significant Challenges Must Be Addressed to Ensure Full and Effective Implementation. GAO-16-556T. Washington, D.C.: April 19, 2016. DATA Act: Data Standards Established, but More Complete and Timely Guidance Is Needed to Ensure Effective Implementation. GAO-16-261. Washington, D.C.: January 29, 2016. DATA Act: Progress Made in Initial Implementation but Challenges Must Be Addressed as Efforts Proceed. GAO-15-752T. Washington, D.C.: July 29, 2015. Federal Data Transparency: Effective Implementation of the DATA Act Would Help Address Government-wide Management Challenges and Improve Oversight. GAO-15-241T. Washington, D.C.: December 3, 2014. Government Efficiency and Effectiveness: Inconsistent Definitions and Information Limit the Usefulness of Federal Program Inventories. GAO-15-83. Washington, D.C.: October 31, 2014. Data Transparency: Oversight Needed to Address Underreporting and Inconsistencies on Federal Award Website. GAO-14-476. Washington, D.C.: June 30, 2014. Federal Data Transparency: Opportunities Remain to Incorporate Lessons Learned as Availability of Spending Data Increases. GAO-13-758. Washington, D.C.: September 12, 2013. Government Transparency: Efforts to Improve Information on Federal Spending. GAO-12-913T. Washington, D.C.: July 18, 2012. Electronic Government: Implementation of the Federal Funding Accountability and Transparency Act of 2006. GAO-10-365. Washington, D.C.: March 12, 2010. | Why GAO Did This Study
The DATA Act required OMB and Treasury to establish data standards for the reporting of federal government spending and required agencies to report spending data using these standards beginning in May 2017. GAO's prior work examining the quality of the data reported under the act found significant challenges that limit the usefulness of the data for Congress and the public. These data quality challenges underscore the need for OMB and Treasury to make progress on addressing GAO's prior recommendation to establish a set of clear policies and processes for developing and maintaining data standards.
The DATA Act includes a provision for GAO to report on the implementation and use of data standards, and on the quality of the data reported using those standards. This report (1) describes the status of OMB's and Treasury's efforts to establish policies and procedures for governing data standards; and (2) evaluates the extent to which procedures for changing established data standards are consistent with key practices for data governance.
What GAO Found
The Office of Management and Budget (OMB) and the Department of the Treasury (Treasury) have established some procedures for governing the data standards established under the Digital Accountability and Transparency Act of 2014 (DATA Act), but a formal governance structure has yet to be fully developed. Since enactment, OMB has relied on a shifting array of advisory bodies to obtain input on data standards. As of December 2018, some governance procedures are in place, but others continue to evolve. OMB staff told us that the governing bodies involved in initial implementation efforts had been disbanded, and that the functions previously performed by these advisory bodies over governance of DATA Act data standards would be accomplished within the broader context of the cross-agency priority goals established under the 2018 President's Management Agenda (PMA). However, the documentation of the governance structure established for these goals does not make explicit how it would apply to the data standards established under the DATA Act. Clarifying the connection between this governance structure and the DATA Act could help stakeholders understand how governance of the DATA Act standards is accomplished within the broader context of the PMA.
With regard to one specific data governance function—making changes to existing standards—GAO found that OMB does not have procedures for managing changes to the web page it identifies in guidance as the authoritative source for data definition standards. The DATA Act requires, to the extent reasonable and practicable, that data standards be capable of being continually upgraded. In addition, key practices for data governance state that organizations should document policies and procedures for making decisions about changes to standards. In June 2018, revisions were made to the Primary Place of Performance Address data element without following a documented process. OMB staff described these revisions as minor technical corrections to align the definitions with the technical guidance agencies were already using to report data. However, without documented procedures for revising the definitions, needed changes may not be made in a timely manner, which could lead to inconsistent reporting.
OMB also did not transparently communicate to stakeholders these changes to data definition standards. Along with the corrections to definitions, in June 2018 OMB changed introductory text on the data definitions web page to clarify policy about how agencies should use DATA Act definitions. However, OMB did not publicly announce this clarification or identify on the website that changes had been made. Without transparent communication of changes to data definition standards, stakeholders—including staff at federal agencies required to report data according to these definitions—may miss important information relating to changes in how, when, and by whom data definitions are to be applied.
Although OMB lacks procedures governing changes to DATA Act data definitions, Treasury has established a process for changing related technical guidance in consultation with stakeholders. Treasury's procedures contribute to the objectives of data quality and transparency by helping to ensure that agencies are aware of reporting requirements and users understand how those data are created and reported.
What GAO Recommends
GAO makes two recommendations to OMB to (1) document its procedure for changing data definition standards for DATA Act reporting, and (2) ensure that changes made in June 2018 to clarify policy regarding data definitions are identified in an authoritative public source of DATA Act standards and guidance. OMB neither agreed nor disagreed with the recommendations, but provided comments, which GAO incorporated as appropriate. |
The detail design process for the Columbia class program encompasses three activities, which began after the Navy set the technical requirements for the submarine in 2016: Arrangements outline the steel structure and routes distributive systems such as electrical or piping systems throughout the submarine. At this time, the shipbuilder generates a three-dimensional computer-aided design model for the area. Disclosures complete the design work for even the lowest-level items of the submarine, including material information. After these are completed, the shipbuilder can begin ordering material and long lead items for the submarine. Work instructions are three-dimensional electronic products that shipyard workers use to construct the submarine. Figure 3 illustrates the design phases for the Columbia class program. The shipbuilder will design and construct Columbia class submarines in six large hull segments, referred to as super modules, a method also used to construct most of the Virginia class submarines. During construction, the modules will largely be outfitted with systems and connections prior to being attached together during final assembly. According to the shipbuilder, this method is more efficient than outfitting the hull after it is constructed because more workspace is available to install equipment. Figure 4 illustrates the super modules within the submarine. <1.3. Cost Estimating> A reliable cost estimate is critical to program success. It provides the basis for informed investment decision making, realistic budget formulation and program funding, meaningful progress measurement, proactive course correction when warranted, and accountability for results. GAO s Cost Estimating and Assessment Guide states that reliable cost estimates reflect four characteristics, which encompass 19 best practices. These characteristics comprehensive, well documented, accurate, and credible are shown in table 2. For Navy shipbuilding programs, including the Columbia class, several different entities are involved in cost estimating: The Naval Sea Systems Command (NAVSEA) Cost Engineering and Industrial Analysis Group develops the program life-cycle cost estimate, which is an estimate accounting for the total cost to the government of acquisition and ownership of a system over its full life. NCCA develops an independent cost assessment for certain Navy programs, such as the Columbia class program, at milestone events in the defense acquisition system. This assessment is not a separate estimate, but rather a review of the NAVSEA program life-cycle cost estimate. A cost review board, comprised of multiple Navy offices, establishes a service cost position based on their review of the program life-cycle cost estimate and the independent cost assessment. The Office of the Secretary of Defense s CAPE conducts or approves independent cost estimates for major defense acquisition programs. Independent cost estimates are statutorily required for major defense acquisition programs at milestone events. The milestone decision authority, which in the case of the Columbia class program is the Under Secretary of Defense for Acquisition and Sustainment, reviews the service cost position and independent cost estimate and selects the cost estimate to baseline and fund the program. 
The most recent milestone event for the Columbia program was the Milestone B decision in January 2017, where the program received approval to proceed to the next acquisition phase engineering and manufacturing development, which includes detail design of the lead submarine. In a memo documenting that decision, the milestone decision authority noted that significant development risks remain for the Columbia program and cost control must remain a priority. To limit program cost growth, the milestone decision authority established an affordability cap: the average submarine procurement cost should not exceed $8.0 billion in constant year 2017 dollars. Figure 5 summarizes the cost estimating process for the Columbia class program s Milestone B review. <2. Navy Is Managing an Aggressive Build Schedule, but Early Design and Construction Challenges Signal Schedule Risk> The Navy is attempting to mitigate an aggressive schedule for lead submarine construction by (1) setting a goal to mature a significant amount of the submarine s design prior to the start of construction and (2) beginning advance construction of submarine modules prior to October 2020. The shipbuilder is working to improve design performance and would have to maintain this increased pace to achieve its design goal, which is necessary to mitigate schedule risk associated with constructing the lead submarine. This may prove challenging as it must complete an increasingly higher volume and complexity of design products. At the same time, the Navy is continuing to develop several critical technologies and recent manufacturing defects with the integrated power system and missile tubes are among the challenges that the Navy is facing in ensuring timely delivery of critical components to the shipyard. Finally, to achieve Columbia s aggressive construction schedule, while simultaneously building Virginia class submarines, the shipbuilder is working to ensure that it has sufficient shipyard capacity including new facilities, additional suppliers, and an increased workforce. <2.1. Shipbuilder Would Have to Maintain Its Increased Pace to Meet Its Design Maturity Goal and Reduce Schedule Risk> The shipbuilder has failed to achieve its planned rates for completing design arrangements and disclosures to meet its design maturity goal in recent months hampered by implementation of a new design software tool and an insufficient number of designers to meet monthly design completion rates. As we reported in December 2017, the Navy s priority is to complete a high level of design specifically, 100 percent of design arrangements and 83 percent of design disclosures by the start of lead submarine construction in October 2020. By maturing the design before beginning construction on the lead submarine, the Navy is attempting to mitigate the risk of costly rework from design changes and subsequent delays to the Columbia class program s 84-month construction schedule, which the Navy has acknowledged is aggressive. The Navy established the design maturity goal for Columbia based on lessons learned from the Virginia class program, when the shipbuilder began constructing the lead submarine with only 76 percent of arrangements and 43 percent of disclosures completed and, subsequently, realized 21 percent cost growth. Since the shipbuilder began work on the detail design, it has generally met its overall goal of completing the arrangements on schedule. 
As detail design continues, however, the shipbuilder is transitioning from relatively simple designs for the hull to the more complex designs for the submarine s internal systems, increasing the pace needed to complete the remaining designs, as shown in figure 6. Navy officials stated that design disclosures are generally considered the most challenging phase of design work, where the shipbuilder specifies the lowest-level items and defines all aspects of the submarine. The shipbuilder has to maintain this increased pace in order to achieve the design maturity goal by the start of lead submarine construction. However, the shipbuilder s design progress in completing disclosure products has fallen short of its plan in recent months as the planned pace and complexity of the design has increased. Using data from the program s cost performance reports, we analyzed the shipbuilder s monthly design progress according to a schedule performance index that measures the value of the work completed against the work scheduled. For example, if the schedule performance index is less than 1.00, then the shipbuilder has completed less than a dollar s worth of work for each dollar that was scheduled. As shown in figure 7, since January 2018, schedule performance has consistently fallen below 1.00. Both DOD and Navy officials attributed the shipbuilder s design delays to challenges adapting to a new design software tool. Beginning with the Columbia class program, the shipbuilder transitioned to a new customized software tool for design and construction because its prior software was no longer supported by the original developer. However, the shipbuilder has experienced problems developing the tool, which has resulted in slower progress to complete both design arrangements and disclosures, as certain aspects of the software s functionality were delayed. Navy officials stated that, as of June 2018, they believe that design software functionality was performing at a level that no longer impeded design progress. While the designers have gained proficiency with the new design tool to complete arrangements and disclosures, according to Navy officials, the shipbuilder is now facing similar challenges using the tool to generate work instructions. Navy program officials also stated that the shipbuilder has not delivered some of the software functionality needed to produce work instructions as scheduled. Further, Navy officials noted that the process to create work instructions from completed disclosures takes longer with the new design software so the shipbuilder has begun generating work instructions earlier. According to Navy officials and shipbuilder representatives, the shipbuilder hired 150 additional designers in an effort to recover its design schedule and meet future monthly design goals. However, adding designers to recover and maintain the shipbuilder s design schedule ultimately increases the program s design costs. Similar to the schedule analysis above, we used data from cost performance reports to analyze the shipbuilder s monthly design progress according to a cost performance index that measures the budgeted value of the work completed against what it actually costs to complete it. For example, if the cost performance index is less than 1.00, then less than a dollar s worth of work has been completed for each dollar spent. As shown in figure 8, the shipbuilder s cost performance has consistently fallen below 1.00 since December 2017. 
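The schedule and cost performance indexes discussed above are standard earned value management ratios. The minimal sketch below shows how each index is calculated; the monthly dollar figures are hypothetical and are not taken from the program's cost performance reports.

# Illustrative earned value calculation; the dollar figures are hypothetical,
# not actual Columbia class cost performance report data.

def schedule_performance_index(earned_value, planned_value):
    """SPI: budgeted value of work performed divided by budgeted value of work scheduled."""
    return earned_value / planned_value

def cost_performance_index(earned_value, actual_cost):
    """CPI: budgeted value of work performed divided by actual cost of that work."""
    return earned_value / actual_cost

# One hypothetical month of design work, in millions of dollars.
planned_value = 10.0   # budgeted cost of the work scheduled for the month
earned_value = 8.5     # budgeted cost of the work actually completed
actual_cost = 9.8      # what the completed work actually cost

spi = schedule_performance_index(earned_value, planned_value)
cpi = cost_performance_index(earned_value, actual_cost)

# SPI of 0.85: 85 cents of scheduled work completed per dollar planned.
# CPI of 0.87: 87 cents of budgeted work completed per dollar spent.
print(f"SPI = {spi:.2f}, CPI = {cpi:.2f}")

Sustained values below 1.00, as shown in figures 7 and 8, indicate that the design effort is completing less work than scheduled and spending more than budgeted for the work completed.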
If the shipbuilder cannot address challenges associated with using the software tool to generate work instructions discussed above, it will likely need additional design hours in the future, resulting in higher costs in order to mature the design on schedule. <2.2. Navy s Use of Advance Construction to Mitigate Aggressive Schedule Is Not without Risk> Navy officials and shipbuilder representatives expect to mitigate risks associated with the Columbia construction schedule by accelerating the building of certain components more than a year in advance of the formal start of construction. They anticipate that this advance construction strategy will allow them to gain 2 months of schedule margin for final assembly and testing prior to delivery of the lead submarine. Starting in December 2018, the shipbuilder will begin constructing modules of the submarine as part of its advance construction effort. In 2017, we reported that the Navy had planned to begin advance construction for four of the submarine s six super modules, but since our report was issued, it now plans to begin construction on all six super modules including building components like the stabilizers, impulse tanks, and others. Figure 9 shows the start of advance construction for each super module. Navy officials estimate that the current advance construction efforts will require approximately 631,000 labor hours. In addition, advance construction efforts would require that the Navy accelerate delivery of equipment provided to the shipbuilder for installation on the submarine, such as pumps and valves. Shipbuilder representatives stated that a lesson learned from the Virginia class program was that construction of certain complex components should begin as early as possible if capability requirements and designs are stable. However, based on its plan, the shipbuilder will begin advance construction having completed less than 40 percent of the total design disclosures for the Columbia class submarine, as shown in figure 10. The number of disclosures completed at the start of advance construction is less than half of those the shipbuilder plans to complete by the start of lead submarine construction in October 2020. Navy officials stated that they believe the risk associated with beginning construction with a less mature overall design is mitigated because the program selected components for advance construction that are well understood and unlikely to be affected by design changes, like ballast tanks, decking, and hull segments. In addition, Navy officials stated that they will not begin construction on the component or hull unless the arrangements associated with the structure of that area of the submarine are complete. However, based on the shipbuilder s design plans, the arrangements and disclosures of adjoining areas of the super module may not be complete, which could negatively affect construction. Specifically, the shipbuilder s design plans indicate that it will have completed 100 percent of disclosures for only one super module at the start of advance construction. As we have found in our prior work, proceeding with construction despite having completed fewer designs than planned increases the likelihood of design changes later that may, in turn, require costly and time-intensive re-work to change components that have already been built. Shipbuilder representatives acknowledged that there is risk in starting construction of some components prior to completing the design for individual super modules or the entire submarine. 
However, shipbuilder representatives stated that they believe this risk is reduced by only starting construction on components for which the disclosures are complete. <2.3. Recent Challenges with Critical Technologies Have Reduced Available Schedule Margin> While ship design is underway, the Navy is continuing to develop and mature the critical technologies related to the Columbia class program. Although these critical technologies are not required at the shipyard for several years, recent challenges have eroded available schedule margin, as illustrated below: Integrated Power System: In 2017, we reported that the Navy experienced manufacturing problems associated with the integrated power system. We found that the Navy continues to experience problems with the electric drive of the integrated power system that could potentially affect construction of the lead submarine. A manufacturing defect that affected the system's first production-representative propulsion motor required extensive repair that consumed 9 months of schedule margin at the land-based test facility. The Navy now plans to test the motor at the same time it had originally scheduled to make any final design changes before starting production. This could constrain opportunities to implement timely corrective actions if problems are discovered during testing. Common Missile Compartment: Navy officials stated that, in July 2018, the shipbuilder identified substantial weld defects in missile tubes from one of three tube suppliers, which resulted in investigations of the missile tubes from all suppliers. These defects were discovered after seven tubes in various stages of outfitting had already been delivered to the shipyard, and five additional tubes under production were also affected. Navy program officials stated that the defects occurred because inexperienced welders performed the complex work and inspectors at the supplier's facility subsequently failed to identify the defects. While the Navy and shipbuilder are still determining the cost and schedule impacts of the weld defects, program officials estimated that addressing this issue will consume up to 15 months of the 23-month schedule margin for these components. In addition, program officials stated that the Navy likely will be responsible for some of the cost associated with investigating the root cause of the defects and risk mitigation efforts going forward. Given the erosion of available schedule margin, there is less time to address issues before they result in schedule delays. For example, the shipbuilder's construction plans for two super modules do not include schedule margin to accommodate any delays that may occur as the technologies are matured and detail design is completed. One of these, the stern super module, contains three technologies that are not fully mature: the integrated power system, the stern area system, and the advanced propulsor bearing. The integrated power system is not expected to reach full maturity until October 2019, and the remaining two technologies will not be mature until after the shipbuilder begins construction on the lead submarine, which does not account for the components that begin advance construction years earlier. Without schedule margin to accommodate any changes or issues, any delay in delivering equipment to the shipyard could disrupt the shipbuilder's construction sequence for the lead submarine. <2.4.
Shipbuilder Is Facing Oversight and Capacity Challenges in Preparation for Columbia Construction> To meet the Navy s aggressive construction schedule for the lead submarine, the shipbuilder has to ensure that it has the capacity to meet a substantially higher workload and effectively balance Columbia and Virginia class construction. At the same time as construction on Columbia begins in 2020, the shipbuilder will also have begun constructing two modified Virginia class attack submarines per year. To accommodate the construction of both submarine classes, the shipbuilder is planning an extensive expansion of its facilities, including new buildings, a pier, an ocean transport barge, and a floating dry dock. The anticipated increases in workload at the shipyard will also require the shipbuilder to manage a higher volume of build materials and an expansion of its workforce. While construction of new facilities is progressing on schedule, according to shipbuilder representatives, it faces other challenges preparing for Columbia class construction. <2.4.1. Ensuring Supplier Oversight> Achieving the planned construction schedule will require the Navy and shipbuilder to ensure that materials arrive on time and meet quality expectations, but according to Navy officials, supplier oversight has been a challenge for this shipbuilder in the past. Both Navy officials and shipbuilder representatives stated that they are concerned about the capacity of its suppliers to meet the demand for high-quality components given an industrial base that has diminished significantly since previous major submarine construction efforts in the 1980s. Many of the parts and equipment on Columbia class are common with those used on Virginia class submarines but, in other instances, suppliers are producing components for the first time after a considerable break, such as missile tubes that have not been produced since the early 1990s. Navy program officials and shipbuilder representatives stated that they monitor supplier capacity and quality among other areas and they have several methods to intervene if a supplier is not able to perform as needed. The shipbuilder and the Navy have formed a group to assess the three primary areas of supplier performance: Capability: includes the uniqueness of the supplier s product on the market, challenges in shifting to a different supplier due to intellectual property rights or technical knowledge, and the ability for the supplier to sustain their own supply base. Capacity: includes the supplier s ability to increase production without decreasing quality, maintain that capacity over the program s production, their financial dependence on Navy programs for revenue, lead time needed to meet new orders, and the capacity of their own suppliers. Cost: includes the costs of increasing production spread out across demand from Navy programs. In 2017, the shipbuilder assessed its supplier base using these areas, identified the criticality and risk of each supplier based on their potential impact to the program and potential alternate suppliers, and conducted a gap analysis comparing the supplier s current performance to the program s desired performance. Based on the results of the analysis, the shipbuilder identified and is monitoring at-risk suppliers in coordination with the Navy to determine if immediate intervention is needed, such as investing in new facilities for the supplier, improving manufacturing workflow, or finding new sources of material from that supplier. 
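One simple way to turn the gap analysis described above into a ranked watch list is sketched below; the rating scale, weights, threshold, and supplier names are hypothetical illustrations rather than the shipbuilder's actual methodology.

# Illustrative supplier gap analysis; ratings, weights, and suppliers are hypothetical.
# Each supplier is rated 1 (poor) to 5 (strong) in the three assessment areas.

DESIRED = {"capability": 4, "capacity": 4, "cost": 3}    # program's desired performance
WEIGHTS = {"capability": 0.4, "capacity": 0.4, "cost": 0.2}

suppliers = {
    "Valve Supplier A":    {"capability": 5, "capacity": 4, "cost": 3},
    "Missile Tube Shop B": {"capability": 3, "capacity": 2, "cost": 3},
    "Pump Vendor C":       {"capability": 4, "capacity": 3, "cost": 2},
}

def weighted_gap(ratings):
    """Sum of weighted shortfalls against desired performance; 0 means no gap."""
    return sum(
        WEIGHTS[area] * max(DESIRED[area] - ratings[area], 0)
        for area in DESIRED
    )

# Rank suppliers from largest to smallest gap and flag the largest gaps for intervention.
for name, ratings in sorted(suppliers.items(), key=lambda item: -weighted_gap(item[1])):
    gap = weighted_gap(ratings)
    status = "at risk, consider intervention" if gap >= 0.5 else "monitor"
    print(f"{name}: gap score {gap:.1f} ({status})")

Ranking suppliers by their weighted shortfall helps direct limited oversight resources toward the largest gaps first.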
Despite these efforts, supplier oversight remains an issue, because in the instance of the missile tube welds mentioned above the shipbuilder focused on managing certain anticipated risks, as opposed to actively managing the supplier s quality and performance with on-site independent inspections, according to Navy officials. In response to the missile tube issues, the shipbuilder has proposed additional supplier oversight by assessing the need for on-site inspection teams depending on the risk each supplier poses to the program. Navy officials stated that they have begun some assessments but, as of March 2019, had yet to determine who will pay for this additional oversight. We plan to more fully assess the Navy and shipbuilder s oversight of its suppliers for the Columbia class program in future work. <2.4.2. Building Workforce Capacity and Capability> According to shipbuilder representatives, the start of lead submarine construction for the Columbia class, combined with expanding Virginia class construction, increases the demand for hiring and retaining skilled workers at levels not seen at this shipyard since the 1980s. Navy officials expressed concerns about the risk of adding large numbers of new workers, including an influx of inexperienced welders and inspectors issues that also contributed to the defects in missile tubes discussed above. To support growing workload from both the Columbia and Virginia submarine programs, the shipbuilder plans to increase workforce at its two facilities over the next decade: by 66 percent at Quonset Point, Rhode Island where the components and individual submarine modules will be constructed and 174 percent at Groton, Connecticut where the super modules will undergo final outfitting and assembly. To meet this increased demand in a skilled workforce, the shipbuilder assessed future demographic trends in the area surrounding its facilities and found that, while sufficient labor will likely be available, more training will be necessary. Consequently, the shipbuilder established internal and external training programs and partnerships with educational institutions in the area to grow the qualified workforce in time to begin lead submarine construction in October 2020. The influx of inexperienced workers can temporarily decrease construction efficiency as compared to a current, more experienced workforce. For example, when the Virginia class program expanded its workforce to build a second submarine each year, the addition of new staff contributed to an 8 percent decrease in cost efficiency for the program. Shipbuilder representatives at one production facility have already reported reduced efficiency following increased hiring of new workers. The shipbuilder s goal is to maintain an average of 8 years of experience for workers in core trades, such as welding. However, the shipbuilder s projections show that the new workforce ramp-up at the Groton facility will reduce workers average experience from 13.1 years to a low of 5.6 years in 2028 just after the shipbuilder plans to deliver the lead Columbia class submarine. If workforce growth or efficiency assumptions are not met, the shipbuilder may resort to scheduling overtime work or outsourcing some activities to meet the program s construction schedule, which would have cost impacts for the program. <3. 
Columbia Class Cost Estimate Is Not Reliable and Does Not Reflect Program Risks> The Navy's procurement cost estimate of $115 billion to construct Columbia class submarines is not reliable because it does not reflect likely program costs and risks. We assessed the Columbia class cost estimate by comparing it with the best practices identified in GAO's Cost Estimating and Assessment Guide. We found that it substantially met the criteria for the comprehensive characteristic of a reliable cost estimate, and partially met the criteria for the remaining characteristics, including accurate and credible. In particular, we found that the cost estimate does not accurately reflect program costs because it is based on overly optimistic labor hour assumptions, and is not fully credible because, while the Navy conducted risk and sensitivity analyses to test the likelihood of achieving its assumptions, it selected a specific cost estimate, which informs the program's budget, that does not include any margin in case those assumptions are not achieved. In addition, the cost estimates and assessments conducted by other entities produced a range of results, indicating that there is a high degree of uncertainty regarding program costs. See appendix III for the full results of our assessment of the Navy's cost estimate. Navy officials stated that they plan to update the Columbia class cost estimate in support of DOD's decision to authorize construction of the lead submarine, which is expected to occur in summer 2020. Navy officials also stated that they expect that the cost estimate will be complete by the end of fiscal year 2019, followed by an independent cost assessment to support the authorization decision. However, this timeframe does not provide assurance that both the update and the independent assessment will be complete before the Navy requests funding from Congress for lead submarine construction as part of its fiscal year 2021 budget request, which could be submitted as early as February 2020. If they are not, decision makers may be basing their decisions on outdated or incomplete information. <3.1. Columbia's Cost Estimate Is Not Accurate Because It Relies on Overly Optimistic Labor Hour Assumptions> The Columbia class cost estimate relies on optimistic program assumptions and does not reflect the likely labor hour costs that the Navy will incur to construct the submarines. As part of our assessment of the Columbia program cost estimate, we found that it did not fully meet the best practices for an accurate estimate. A cost estimate is considered accurate when it is based on an assessment of the most likely costs; that is, it is neither overly conservative nor overly optimistic. The Navy estimates that it will need $115 billion to design and construct 12 submarines, and NAVSEA cost estimators identified labor costs as a primary source of cost risk. As discussed below, if the program's optimistic assumptions are not realized, the program may require more funding than originally planned to construct the Columbia class. The Navy anticipates that it will need 12 million labor hours to directly construct the lead submarine, referred to as touch labor. This represents 17 percent fewer labor hours than what was needed for the lead Virginia class submarine, when adjusted for weight differences.
To develop this estimate, NAVSEA estimators relied heavily on historical touch labor hour data from the construction of the lead Virginia class submarine and cost data from the Ohio class submarine program for unique ballistic submarine components, such as missiles. NAVSEA estimators took the following steps to develop the Columbia lead submarine estimate: In general, heavier ships cost more to construct, so NAVSEA cost estimators calculated a weight-adjusted estimate based on Virginia class labor hours to account for the heavier weight of the Columbia class. This resulted in an initial estimate of 14.5 million touch labor hours for the lead submarine. NAVSEA cost estimators then made numerous adjustments in the cost estimate that reduced the expected number of labor hours based on multiple assumptions that differences in the design and construction process would lead to more efficient construction of Columbia class submarines than previous submarine classes. These adjustments subsequently decreased the estimate to 12 million touch labor hours for the lead submarine. NAVSEA cost estimators then used the lead Columbia submarine estimate as the basis to calculate labor hours for follow-on submarines, estimating an average of 8.9 million touch labor hours. Figure 11 illustrates NAVSEA s touch labor hour calculation for the lead submarine. However, the touch labor hour estimate is overly optimistic with assumptions on construction efficiencies that are either unsubstantiated or unprecedented compared to Virginia class and other shipbuilding historical data. Compared to the Navy s estimate, Columbia s estimated touch labor hours, as calculated by other organizations, are more conservative. For example, CBO questioned the Navy s assumption that ballistic submarines are less expensive to build than attack submarines, after accounting for weight differences and estimated that for the overall class, including the lead and follow-on submarines, the Navy would more likely realize an 8 percent reduction rather than the 19 percent reduction estimated by the Navy. While the shipbuilder will likely realize some efficiencies from initiatives to improve design and construction processes, our analysis of the Navy s assumptions used to develop the cost estimate indicates that they likely overstate the labor hour reduction the shipbuilder can realistically achieve. These assumptions include that the program (1) achieves its design goals at the start of construction; (2) is constructed more efficiently than Virginia class submarines; and (3) successively reduces the number of hours needed to construct follow-on submarines. If these assumptions are not realized, overall program costs could be higher than the Navy s procurement estimate of $115 billion. Navy officials stated that they believe that these assumptions are valid and that the cost estimate is achievable. However, our assessment indicates that the assumptions for the cost estimate are overly optimistic, as discussed below. <3.1.1. Risk of Unrealized Design Goals> The Navy s cost estimate does not reflect the risk that the shipbuilder may not achieve its planned design completion goals. As we reported above, design performance to date has slowed and the shipbuilder has had to hire additional designers in an effort to mature its design on schedule. NAVSEA cost estimators stated that they recognize that an incomplete design at the start of ship construction was a significant driver of cost growth on other shipbuilding programs. 
For the Columbia class, NAVSEA cost estimators assumed that achieving the design maturity goal would eliminate 2 million labor hours by reducing costs associated with rework and out of sequence work. In October 2018, NCCA officials stated that they recently reviewed shipbuilder data and the expected design completion at construction start continues to range between 55 and 75 percent complete the same range that they estimated in their independent assessment. While this lower rate would be an improvement over the Virginia class program, it would still fall short of the 84 percent assumption built into the cost estimate. If the shipbuilder does not complete the design at its planned rate and begins construction with a less mature design, it may need additional labor hours to construct the ship, resulting in increased program costs. <3.1.2. Overly Optimistic Estimate of Efficiencies> The Navy s cost estimate includes assumptions that reduce Columbia s estimated touch labor hours due to efficiencies from constructing Columbia and Virginia class submarines concurrently, an assumption with which the shipbuilder does not agree. NAVSEA cost estimators calculated a 1.1 million-labor hour reduction, attributing the decrease to efficiencies gained from constructing multiple submarines at the same time, basing their assessment on shipbuilder estimates of the Virginia class. However, it is unclear how increased shipyard production would result in fewer labor hours to construct each submarine. Shipbuilder representatives stated that rather than a reduction in touch labor hours, they expect to realize efficiencies from increased production primarily from reduced overhead rates and material costs. Further, the Navy s independent assessment analyzed labor hour data for Virginia class construction and found that there was no correlation between the number of submarines constructed at a time and the total number of labor hours. However, increasing shipyard production to include both Virginia and Columbia class construction may increase schedule risk for the shipbuilder, which could result in additional costs if the shipbuilder does not achieve planned increases in its workforce and facility upgrades. When the number of Virginia class submarines under construction increased, both shipyards experienced inefficiencies due to poorly planned ramp-up requirements. In addition, DOD officials stated that problems encountered on one program could affect the other as the shipbuilder is relying on the same workforce and vendor base for both programs. The Navy s cost estimate also assumed construction efficiencies because the Columbia class submarine will be less dense than the Virginia class submarine another assumption with which the shipbuilder does not agree. Navy officials stated that less dense submarines are less costly to construct as the additional space within the hull allows for faster and more efficient work. However, the shipbuilder conducted analysis to compare the density of various areas of the Virginia class and Columbia class submarines and found that areas had very similar density. Specifically, there was only a 1 percent and 3 percent difference, between the forward compartments and aft compartments, respectively some of the more complex areas of the submarine. If the shipbuilder does not realize these construction efficiencies, more total labor hours would be required to construct the submarine, resulting in increased cost. <3.1.3. 
Learning Curve Assumption> The Navy s cost estimate assumes that the costs for follow-on Columbia class submarines will decrease at a rate that may overstate the improvements the shipbuilder can realistically achieve. The Navy expects the number of labor hours to construct Columbia class follow-on submarines to decrease based on an assumed learning curve rate. Learning occurs when construction is consistent and continuous and the shipbuilder learns how to do repetitive tasks more efficiently. The decrease in the number of expected labor hours is expressed as a learning curve rate, where a lower percentage indicates that less labor is required for follow-on units. NAVSEA cost estimators calculated a learning curve of 88.9 percent for Columbia class submarines. A learning curve indicates that as the number of units doubles, unit cost decreases by a constant percentage. In this case, the cost estimate assumed that the fourth submarine would require only 88.9 percent the amount of labor to build the second submarine. NAVSEA s assumption may overstate the potential learning rate that Columbia can expect to achieve. The first four Virginia class submarines, hull numbers SSN 774 through 777, incorporated modular construction techniques where submarines were built in 10 modules. The next six Virginia class submarines, hull numbers SSN 778 through 783, were constructed using four modules. As a result of the improvements in the modular construction process, construction across the first ten submarines was not consistent, which is a condition that is necessary to determine the learning curve rate. Therefore, there is no way to determine what share of the labor hour reduction on later submarines was due to learning or process improvements. Rather, SSN 778, the first Virginia class submarine to use the four modular construction approach is a better starting point to determine the shipbuilder s capacity for learning. The Navy s independent assessment included a separate learning curve analysis for Virginia class submarine hulls SSN 778 through 791 and calculated a potential learning curve rate of 93.9 percent. A learning curve assumption applies to all follow-on submarines and has a cumulative effect on the number of labor hours and, ultimately, the cost of these submarines. In the case of the Columbia program, the rate will apply to the second through twelfth submarines. Figure 12 shows how the difference in the learning curve rate can affect the estimated labor hours for follow-on submarines. Therefore, a small change in the assumed learning curve rate can have a significant effect on the cost estimate for follow-on submarines. For example, the Navy s independent assessment of the cost estimate calculated that production costs could increase by $3.59 billion in constant year 2010 dollars if a learning curve of 93.9 percent was realized, rather than the 88.9 percent rate estimate. Our previous work on Navy shipbuilding performance has shown that the Navy has consistently underestimated the costs for follow-on ships, with costs for Virginia class submarines underestimated by close to 40 percent. <3.2. Columbia Cost Estimate Is Not Credible Because It Does Not Adequately Account for Program Risks> The Columbia program cost estimate did not fully meet the best practice criteria to be considered credible because, in part, Navy program management did not sufficiently account for program risks when it selected the final estimate. 
To determine the estimate s credibility, we examined the extent to which NAVSEA cost estimators tested, among other things, the sensitivity of key cost elements such as labor hours and conducted uncertainty analyses to quantify risks; and an independent cost estimate and assessment were conducted by groups outside the acquiring organization (specifically, CAPE and NCCA) to determine whether other estimating methods produced similar results. We found that while the Navy program management s $115 billion procurement cost estimate for the Columbia class is overly optimistic in some of its assumptions, the estimate does not reflect any contingency to offset the likely effects of not meeting the assumptions, which is a best practice. In addition, the independent cost estimates and assessments conducted by other organizations had varying results, indicating the high level of uncertainty regarding Columbia program costs. We further address these issues below. <3.2.1. Sensitivity and Risk Analysis Indicate Insufficient Cost Risk Coverage> Navy leadership s decision to select $115 billion as the program cost estimate means that there is no margin in the program budget to cover likely program costs if risks are realized. The best practices identified in GAO s cost estimating guide state that the results of a risk analysis should be used to select a cost estimate that is sufficient to manage program risks. NAVSEA cost estimators conducted a risk analysis to identify and quantify program risks, and determined the effects of changing key cost driver assumptions and factors important steps in creating a high quality estimate. However, while NAVSEA cost estimators identified 54 risk parameters for construction costs, we found that some of the inputs for these ranges resulted in a cost estimate that understates the potential impact of program cost risks. For example, the risk ranges do not sufficiently account for the issues we identified above, including that increased shipyard construction could result in similar inefficiencies that occurred in the production of the Virginia class, requiring more labor hours than estimated; and shipbuilder workforce ramp-up could result in decreased efficiency and quality due to the influx of new workers even greater than the issues observed on the Virginia class when shipyard construction increased. For other risk parameters, such as cost of material provided by the shipbuilder, the cost estimate documentation was not sufficient for us to analyze whether the risk ranges included in the estimate were reasonable (i.e., not overly optimistic or pessimistic). As a result, we could not determine whether the risk analysis sufficiently captures the risk of program cost growth, or what the probability is of achieving the $115 billion procurement cost estimate. Further, Columbia s program management and the milestone decision authority selected $115 billion as the program s procurement cost estimate, without adjusting for the likelihood of cost growth in the design or construction of Columbia class submarines identified in the risk analysis. As we reported in December 2017, the risk analysis developed by NAVSEA indicated that there is only a 45 percent probability that the overall program cost estimate will be sufficient to cover program costs. The cost estimating best practices identified in our cost estimating guide state that a risk-adjusted cost estimate helps ensure that sufficient funding will be available for the expected program costs. 
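To illustrate what a risk-adjusted estimate involves, the sketch below simulates a notional cost distribution and reads off both the confidence level implied by a fixed point estimate and the cost at a higher confidence level. The distribution and its parameters are assumptions made for illustration; they are not drawn from the Navy's cost model or NAVSEA's risk analysis.

```python
import numpy as np

# Notional cost-risk simulation; the distribution and its parameters are
# illustrative assumptions, not values taken from the Navy's cost model.
rng = np.random.default_rng(seed=0)
n_trials = 100_000
point_estimate = 115.0  # $ billions, the program office procurement estimate

# A right-skewed multiplier reflects that cost risks tend to push costs upward.
risk_multiplier = rng.lognormal(mean=0.01, sigma=0.08, size=n_trials)
simulated_costs = point_estimate * risk_multiplier

confidence_of_point = (simulated_costs <= point_estimate).mean()
risk_adjusted_80th = np.percentile(simulated_costs, 80)

print(f"Chance the ${point_estimate:.0f}B point estimate covers costs: {confidence_of_point:.0%}")
print(f"Risk-adjusted estimate at 80 percent confidence: ${risk_adjusted_80th:.1f}B")
```

Selecting the cost at a higher percentile of such a distribution is what builds margin, or contingency, into an estimate to absorb risks if they are realized.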
Additionally, a risk-adjusted cost estimate is consistent with federal internal control standards, which indicate that risk mitigation efforts should be selected to sufficiently respond to risks. However, Columbia program officials stated that they believe program risks can be managed within the current cost estimate which they consider to be conservative as it does not account for all of the program s potential cost savings. Specifically, the Navy anticipates that the program will realize up to $1.9 billion in additional cost savings from use of authorities associated with the National Sea-Based Deterrence Fund (the Fund), such as the authority to purchase components for multiple submarines which we discuss later in this report. As a result, the program office estimate represents the program manager s cost goal for the Columbia program, rather than the risk- adjusted estimate. Even if the Navy were to achieve the full anticipated $1.9 billion savings, these savings represent only 1.5 percent of program costs. Such cost savings are unlikely to cover program cost overruns for a high-risk program, such as Columbia, given that historically shipbuilding programs experience 27 percent cost growth. As the current estimate does not include any reserves for cost overruns, program management is relying on these potential savings to help mitigate likely cost growth. <3.2.2. Independent Cost Reviews Indicate Varying Results> Several entities have conducted independent reviews of the Columbia program cost estimate, with varying results. CAPE conducted an independent cost estimate and NCCA conducted an independent cost assessment of the program estimate in support of the Columbia class program s Milestone B review. CAPE s independent cost estimate was 3 percent lower than the Navy s service cost position, which it stated was due to CAPE s use of lower shipyard labor rates. However, NCCA s assessment did not produce similar results as the program cost estimate and concluded that the program is at risk of up to $6.14 billion in cost growth. The program manager reviewed the recommendations in the independent cost assessment and determined that the program office estimate appropriately weighs program risks. Navy leadership selected the program office estimate to serve as the Navy s service cost position because program officials stated that they believe program risks can be managed within the program cost estimate. CBO also conducted a cost estimate and projected that procurement of 12 submarines would be 6 percent higher than the program estimated. The results of these cost estimates and NCCA s assessment are summarized in table 3. As part of the Milestone B review, the milestone decision authority reviewed the service cost position and CAPE s independent cost estimate. The independent cost assessment was reviewed by Navy leadership as part of the service cost position process and, therefore, was not briefed as part of the milestone review. The milestone decision authority accepted the Navy service cost position and directed the Navy to use this estimate as the basis of its fiscal year 2018 budget request. It also established an $8 billion affordability cap for the average procurement cost of all 12 submarines to control future program costs. <3.3. 
Congress May Not Have Up-to-Date Cost Information When Considering Columbia Class Budget Request for Lead Submarine Funding> Navy officials stated that they plan to update the cost estimate for the lead submarine in support of a planned Defense Acquisition Board review, in the third quarter of fiscal year 2020. At that point, the Navy will be seeking approval from the milestone decision authority to award the contract for construction of the lead submarine. However, the Navy and DOD s general timeframes do not provide assurance that the planned update of the cost estimate would be completed prior to the fiscal year 2021 budget request, which will include funding for lead submarine construction, as shown in figure 13 below. The milestone decision authority has directed CAPE, with assistance from NCCA, to assess the lead submarine cost estimate to support the decision to authorize the Navy to award the contract for lead submarine construction. Since this assessment will occur after the Navy has updated the lead submarine cost estimate, it is even less likely that the program budget request will reflect the results from the independent cost assessment. Additionally, the current program cost estimate the Navy developed for the Milestone B review does not reflect the program s current strategy to use authorities associated with the Fund to achieve cost savings, as discussed further below. The best practices identified in GAO s cost estimating guide state that cost estimates should be regularly updated and reflect the program acquisition baseline. Updating the cost estimate and risk analysis to include these anticipated savings and current program data would improve its reliability and help ensure that budget requests are sufficient to execute the Columbia program as planned. After we provided our draft report to DOD for comment, Navy officials briefed us on the changes they had made to the program s estimate to date, stating that they updated the cost risk analysis as part of an internal program review. While the Navy plans to update the lead submarine cost estimate again by the end of fiscal year 2019 to support the Defense Acquisition Board review in the summer of 2020, it has yet to provide specific details on the steps it will take to update this estimate to ensure that it would include likely program costs and risks, such as the cost data it plans to include or the assumptions it may reassess. Further, since the Navy will likely submit its budget request to Congress as early as February 2020, Congress may be asked to authorize and fund lead submarine construction without the benefit of any changes to the estimate that may occur as a result of recommendations stemming from an independent review of the update. Further, although the Navy reports Columbia program cost information to Congress through annual matrices submissions, updates to the program cost estimate will not be reflected in these reports. For example, the Navy plans to report program manager and contractor cost estimates for individual submarines in the matrices once the submarines are under construction. Since these estimates are based on shipbuilder contract performance, they are initially calculated only after construction of each submarine is 15 percent complete, when sufficient data are available to show performance trends. While the Navy plans to award the contract for the lead submarine in October 2020, limited contractor performance data will be available in time for the February 2021 matrix submission. 
As a result, the earliest opportunity to report on the cost of the lead submarine would be the Navy s next submission in February 2022, at which point the Navy will have already requested funding for the second and third Columbia submarine. <4. Navy Is Using National Sea-Based Deterrence Fund and Associated Authorities, but Anticipated Savings May Be Overestimated> In 2014, Congress created a National Sea-Based Deterrence Fund (the Fund) that provides DOD with greater discretion to fund the design, construction, purchase, alteration, and conversion of the Columbia class. Since then, Congress has provided the Navy with enhanced acquisition authorities to buy and construct submarines and certain key components early, in bulk, and continuously, when using these funds. The Navy anticipates saving over $1.9 billion through use of these authorities, but these savings, which were not included in the Columbia class program s cost estimate, may be overestimated. <4.1. Navy Executes Columbia Program through the Fund and Its Associated Authorities> Since its inception in 2014, Congress has expanded the special acquisition authorities under the Fund, in part, to allow the Columbia class program to gain economic efficiencies and realize cost savings. The timeline of the establishment of the Fund and legislative changes are shown in figure 14. The following authorities have been included as part of the use of the Fund: Economic order quantity: Permits awarding of contracts that provide a quantity of supplies that will result in a total cost and unit cost most advantageous to the government by achieving economic efficiencies based on production economies. Advance construction: Allows for manufacturing and fabrication efforts prior to ship authorization. Multiyear procurement authority: Permits a single contract for more than one year of critical components. Incremental funding authority: Facilitates the purchase of long lead items through partial funding of a contract with the expectation that full funding will be provided later. Using the Fund s associated authorities, the Navy is able to purchase significant components and start advance construction prior to receiving Congress s authorization of and funding to purchase each submarine. In total, the Navy will have requested and received $8.6 billion in funding, including 33 percent of funding for the lead submarine, before it receives authorization and funding to begin construction of the lead submarine in October 2020. At that point, the Navy will also have already requested funding for the propulsor and advance construction for the second submarine. Under law, the Navy is required to deposit all appropriations for the Columbia class construction and design into the Fund. To date, the Navy has made three deposits from the Shipbuilding and Conversion, Navy account into the Fund, totaling over $1.6 billion. The Navy is using initial deposits of $773 million in fiscal year 2017 and $862 million in fiscal year 2018 for detail design and continuous production of missile tube components. The Navy Comptroller initiates all deposits into the Fund, which are approved by the DOD Comptroller as internal reprogramming actions, as shown in figure 15. <4.2. Anticipated Savings from Use of Fund s Associated Authorities May Be Overestimated> The Navy anticipates achieving over $1.9 billion in savings through the use of the Fund s associated authorities, but the Navy did not evaluate these savings when it developed the program office cost estimate. 
Table 4 provides a description of each authority and the Navy s plans and estimated potential savings resulting from use of the authorities. Overall, while we were unable to fully assess the methodology and assumptions the Navy used to estimate anticipated savings, the information we reviewed indicated that the Navy may have overestimated some of the savings the program can realistically achieve through use of the Fund s associated authorities. While the Navy provided some documentation of the cost estimate methodologies, we could not fully validate that the estimated savings were realistic because, in general, the documentation provided by the Navy did not include a detailed description of how the estimates were calculated or how historical data were used to develop the estimate a best practice identified in GAO s cost estimating guide. In some cases, such as for individual critical components, the total value of the component costs was not documented. For other savings, such as advance construction, the Navy could not provide documentation of the calculations or a rationale for the estimated savings. In addition, the Navy assumes a higher rate for Columbia multiyear procurement savings than what has been typically achieved for other programs. The Navy has generally used multiyear procurement contracts after production has begun and some units have already been purchased. For example, according to the Navy, it did not receive multiyear procurement authority for the DDG 51 Arleigh Burke-class destroyer program until 1998 more than 10 years after the contract for the lead ship was awarded and 38 ships had been purchased. We have reported that DOD typically overestimates savings from multiyear procurement authority. Further, in a 2017 presentation to Congress, the Navy stated that multiyear procurement savings are historically 10 to 12 percent. When the Navy requested multiyear procurement authority for the DDG 51 program in fiscal year 2013, it estimated achieving a savings of 8.7 percent. Similarly, when planning material purchases for the Virginia class submarine, the shipbuilder estimated that it would achieve 10 to 15 percent savings through the use of multiyear procurement authority. However, the Navy estimates that the Columbia class program will realize savings of 15 to 20 percent using multiyear procurement authority. A realistic estimate of savings is essential because program management is essentially relying on these savings as the only cost reserve to address any issues that arise during design and construction of the submarines. Updating the cost estimate to reflect these savings will provide program management with a more realistic assessment of the margin available and resources needed to achieve their costs. <5. Conclusions> The Columbia class program is driven by the continued and pressing need to meet the Navy s nuclear deterrent requirements as the legacy submarine fleet cannot extend its life any longer. From the outset this has translated into an aggressive and concurrent schedule for lead submarine construction. To counterbalance this schedule risk, the program plans to complete a substantial amount of the design before starting construction, which may prove challenging as the shipbuilder must complete an increasingly higher volume and complexity of disclosures. This, coupled with failures in missile tubes already delivered to the shipyard, highlight the potential for management challenges ahead. 
This is not to suggest that in a program of this size and complexity that some issues are not to be expected. Rather, the challenge for the Columbia class program is that the Navy has a limited ability to slow the pace of the program given the mission imperatives. At present, the need for additional resources appears likely because the Navy s margin to mitigate any cost growth from issues that develop during design and construction relies on overestimated savings from use of the Fund s associated authorities. The steps that the Navy takes between now and the fiscal year 2021 budget request to understand and plan for likely program costs will determine whether sufficient funding is in place to cover potential cost growth. The Navy plans to update the lead submarine cost estimate to reflect its current acquisition strategy and, in doing so, the Navy has the opportunity to incorporate more realistic information into the risk analysis and lead submarine cost estimate. In addition, a realistic and well-documented estimate of savings from use of the Fund s associated authorities would help ensure that the Navy has allocated the necessary resources to address any issues that emerge during design or construction of the lead submarine. Such steps will likely improve the reliability of the lead submarine cost estimate and would position the Navy to better align its fiscal year 2021 budget request with funding it will likely need to construct the lead submarine the next key decision point in the Columbia class program. Without an updated cost estimate with more realistic assumptions, Congress will be asked to commit billions of dollars for the lead submarine without knowing the full potential cost of construction and the possible effect on other shipbuilding programs. <6. Recommendations for Executive Action> We are making three recommendations to the Secretary of the Navy: The Secretary of the Navy should direct NAVSEA to incorporate current cost and program data and an updated cost risk analysis in its planned update of the Columbia class lead submarine cost estimate. (Recommendation 1) The Secretary of the Navy should direct NAVSEA to develop a realistic and well-documented estimate of savings from use of the authorities associated with the Fund and incorporate the savings associated with the lead submarine into the Columbia lead submarine cost estimate. (Recommendation 2) The Secretary of the Navy should direct the Columbia class program office to update the lead submarine cost estimate and cost risk analysis prior to requesting funds for lead submarine construction. (Recommendation 3) <7. Agency Comments and Our Evaluation> We provided a draft of the sensitive report to DOD for comment. DOD s written comments on the sensitive report are reprinted in appendix IV and summarized below. DOD concurred and described the actions they have taken or plan to take in response to all three of our recommendations. Regarding our recommendations to update its cost estimate update prior to requesting funds for lead submarine construction, the Navy has stated that it incorporated current cost and program data and an updated risk analysis into its cost estimate for the lead submarine in 2018, as part of an annual review. The Navy also stated that it will continue to update the lead submarine cost estimate with current data prior to requesting funding for lead submarine construction in fiscal year 2021. 
Until the updated estimate is independently validated an essential cost estimating step we cannot determine that the updated estimate is credible. Further, in response to our recommendation regarding the development of a realistic and well-documented estimate of savings from use of the Fund s associated authorities, the Navy stated that it incorporated savings in its updated cost estimate. However, it has not provided any additional evidence to demonstrate that estimated savings from use of the Fund s associated authorities are realistic and well-documented. Based on documentation that the Navy provided to us, it did not include a detailed description of how the estimates were calculated or how historical data were used to develop the estimate. Until these estimates are independently validated, the Navy cannot be confident that the program will achieve the planned amount of savings. The Navy also provided technical comments, which we incorporated as appropriate. DOD also raised a number of issues related to our assessment of the cost estimate, advance construction, and technology development, which we address in appendix IV. We are sending copies of this report to the appropriate congressional committees, the Acting Secretary of Defense, the Secretary of the Navy, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. Should you or your staff have questions, please contact me at (202) 512- 4841 or oakleys@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology This report evaluates the Navy s Columbia class submarine program. Specifically, we assessed (1) the Navy s progress and challenges, if any, associated with meeting design goals and preparing for lead submarine construction; (2) the reliability of the Navy s cost estimate for the Columbia class submarine program; and (3) how the Navy is implementing the National Sea-Based Deterrence Fund (the Fund) and associated authorities to construct Columbia class submarines. This report is a public version of a sensitive report that we issued in March 2019. The Department of Defense (DOD) deemed some of the information in our March report to be sensitive, which must be protected from public disclosure. Therefore, this report omits sensitive information about the Navy s development of critical technologies for the Columbia class program, including specific details about the technologies. Although the information provided in this report is more limited, the report addresses the same objectives as the sensitive report and uses the same methodology. To assess the Navy s progress and what challenges, if any, are associated with meeting design goals and preparing for lead submarine construction, we reviewed Navy and shipbuilder documents, including program briefings, schedules, and contract status reports to assess the schedule and performance risks of the Columbia class program. To evaluate the shipbuilder s progress in maturing the Columbia class design, we reviewed the Navy s plans for design management and completion, evaluated the shipbuilder s design schedule, and compared them against design progress reports to identify any delays. 
To evaluate the Navy s plans for advance construction, we analyzed metrics reported in Navy and shipbuilder documents, briefing slides, and other documentation including key dates and estimated construction plans. We compared design knowledge on the Columbia class program to our prior work on shipbuilding best practices. We reviewed ongoing development efforts and schedules for the Columbia class program s critical technologies to determine remaining risks to their development and integration. We also reviewed the matrices submitted by the Navy to Congress in February and October 2018, to determine the status of the program and identify any changes to the Navy s design and construction goals for the program since our last report in December 2017. We also analyzed available documentation related to the status of the nuclear reactor and integrated power system. We reviewed the shipbuilder s construction plans for its new facilities and its workforce hiring plans. We also reviewed the shipbuilder s and Navy s process for evaluating its suppliers. To corroborate documentary evidence and gather additional information in support of our review, we met with officials from the Navy s Columbia class submarine program office; Naval Nuclear Propulsion Directorate; Naval Surface Warfare Center Philadelphia; Office of the Chief of Naval Operations; Supervisor of Shipbuilding, Groton; the Office of the Deputy Assistant Secretary of Defense for Systems Engineering; and the Office of Undersecretary of Defense for Acquisition and Sustainment. Additionally, we met with shipbuilding representatives from General Dynamics Electric Boat the prime contractor as well as their main subcontractor, Huntington Ingalls Industries Newport News Shipbuilding to understand their role in Columbia class design and construction. To assess the reliability of the Navy s cost estimate for the Columbia class submarine program, we determined the extent to which the estimate met best practices as identified in GAO s Cost Estimating and Assessment Guide. We examined cost estimate documentation, such as the Columbia class program life-cycle cost estimate, briefs, memoranda, and other documents that contain cost, schedule, and risk information. We also examined the independent cost estimate conducted by the Office of the Secretary of Defense s Office of Cost Assessment and Program Evaluation (CAPE), the independent cost assessment conducted by the Naval Center for Cost Analysis (NCCA), and the cost estimate conducted by the Congressional Budget Office, to determine what methodologies and assumptions differed from the program cost estimate. We met with Navy officials who were responsible for developing the cost estimate to understand the processes used by the cost estimators, to clarify information, and to allow the Navy to provide additional documentation on the data and methodologies used in the estimate. We also observed portions of the Columbia class program s cost model during a presentation and discussion with Navy cost estimators. We also reviewed the matrices submitted by the Navy to Congress to identify any changes to the Navy s cost goals and reported information. 
To further corroborate documentary evidence and gather additional information in support of our review, we conducted interviews with relevant DOD and Navy officials responsible for developing, updating, and assessing the Columbia class program cost estimate, including CAPE; NCCA; the Naval Sea Systems Command's (NAVSEA) Cost Engineering and Industrial Analysis Group; and the Columbia class program office. To evaluate how the Navy is implementing the Fund and associated authorities to construct Columbia class submarines, we reviewed the legislation establishing and modifying the Fund, program budget request documents, and DOD reprogramming approvals. We also reviewed the Navy's basis of estimate for the savings it plans to achieve from these authorities. To further corroborate documentary evidence and gather additional information in support of our review, we met with officials from the Office of the Assistant Secretary of the Navy for Financial Management and Comptroller; Office of the Under Secretary of Defense (Comptroller); and the Columbia class program office to discuss the Navy's plans to use and execute the Fund and DOD's role in approving transfers into the Fund. The performance audit upon which this report is based was conducted from December 2017 to March 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We subsequently worked with DOD from February 2019 to April 2019 to prepare this unclassified version of the original sensitive report for public release. This public version was also prepared in accordance with these standards. Appendix II: Technology Readiness Levels
TRL 1: Lowest level of technology readiness. Scientific research begins to be translated into applied research and development. Examples might include paper studies of a technology's basic properties.
TRL 2: Invention begins. Once basic principles are observed, practical applications can be invented. The application is speculative and there is no proof or detailed analysis to support the assumption. Examples are still limited to paper studies.
TRL 3: Active research and development is initiated. This includes analytical studies and laboratory studies to physically validate analytical predictions of separate elements of the technology. Examples include components that are not yet integrated or representative.
TRL 4: Basic technological components are integrated to establish that the pieces will work together. This is relatively low fidelity compared to the eventual system. Examples include integration of ad hoc hardware in a laboratory.
TRL 5: Fidelity of breadboard technology increases significantly. The basic technological components are integrated with reasonably realistic supporting elements so that the technology can be tested in a simulated environment. Examples include high-fidelity laboratory integration of components.
TRL 6: Representative model or prototype system, which is well beyond the breadboard tested for TRL 5, is tested in a relevant environment. Represents a major step up in a technology's demonstrated readiness. Examples include testing a prototype in a high-fidelity laboratory environment or in a simulated realistic environment.
TRL 7: Prototype near or at planned operational system. Represents a major step up from TRL 6, requiring the demonstration of an actual system prototype in a realistic environment, such as an aircraft, vehicle, or space. Examples include testing the prototype in a test bed aircraft.
TRL 8: Technology has been proven to work in its final form and under expected conditions. In almost all cases, this TRL represents the end of the true system development. Examples include developmental test and evaluation of the system in its intended weapon system to determine if it meets design specifications.
TRL 9: Actual application of the technology in its final form and under mission conditions, such as those encountered in operational test and evaluations. In almost all cases, this is the end of the last bug-fixing aspects of true system development. Examples include using the system under operational mission conditions.
Appendix III: GAO's Assessment of the Reliability of the Navy's Cost Estimate for the Columbia Class Submarine Program To assess the reliability of the Navy's cost estimate, we determined the extent to which the estimate was consistent with cost estimating best practices as identified in GAO's Cost Estimating and Assessment Guide. This guide groups the best practices into four general characteristics: well documented, comprehensive, accurate, and credible. We reviewed documentation the Navy submitted for its cost estimate, including limited portions of the Navy's cost model, conducted numerous interviews, and reviewed relevant sources. We determined that the Columbia class cost estimate substantially met one, and partially met three, of the four characteristics of a reliable cost estimate, shown in figure 16. We determined the overall assessment rating by assigning each individual rating a number: Not Met = 1, Minimally Met = 2, Partially Met = 3, Substantially Met = 4, and Met = 5. Then, we calculated the average of the individual assessment ratings to determine the overall rating for each of the four characteristics as follows: Not Met = 1.0 to 1.4, Minimally Met = 1.5 to 2.4, Partially Met = 2.5 to 3.4, Substantially Met = 3.5 to 4.4, and Met = 4.5 to 5.0. We consider a cost estimate to be reliable if the overall assessment ratings for each of the four characteristics are substantially or fully met. If any of the characteristics are not met, minimally met, or partially met, then the cost estimate does not fully reflect the characteristics of a high-quality estimate and is not considered reliable. Appendix IV: Comments from the Department of Defense <8. GAO Comments> In addition to responding to our recommendations, DOD also provided observations on a number of issues related to our assessment of the cost estimate, advance construction, and technology development. Our response to DOD's observations is as follows. <8.1. Assessment of Columbia Class Program's Cost Estimate> In paragraph 4, page 1 of the letter above, the Navy did not agree with our assessment of the accuracy of the cost estimate and stated that the life cycle cost estimate includes accurate calculations, proper inflation tables, and updates to requirements. DOD also stated that GAO or other stakeholders did not identify any errors. This is incorrect. While the Navy allowed us to observe the model, we did not independently check the accuracy of the calculations because Navy officials stated that the cost model, which contains the cost calculations, could not be released. We informed the Navy that this would affect parts of our assessment.
After we provided a draft of the report, the Navy provided a briefing summarizing the results of a program office cost checkpoint conducted in September 2018. At the briefing, we received information on updates that the Navy made to the program cost estimate. As a result, we updated our assessment to reflect that the Navy substantially met the best practice to regularly update the cost estimate to reflect significant changes. However, the additional information provided by the Navy did not change our assessment of the accuracy and, therefore, our overall assessment of the Columbia cost estimate remains valid. In paragraph 1, page 2, the Navy did not agree with our assessment of the credibility of the cost estimate and stated that the life cycle cost estimate includes analyses that address sensitivity, risks, and uncertainty within the estimate. As we point out in the report, the estimate is based, in part, on optimistic assumptions regarding the number of labor hours needed to construct Columbia class submarines. The Navy has made updates to the program cost estimate based on a 2018 checkpoint review and stated that the cost risk analysis has been updated and program costs are less than originally estimated. The Navy provided us with a high-level brief of these updates. However, due to the timing of this report, we were not able to fully assess the update to the cost model. Given the size and complexity of the Columbia class program, we continue to believe that the program s cost estimate does not adequately account for program risks. In paragraph 3, page 1, DOD stated that our findings were largely informed by an assessment conducted by the Naval Center for Cost Analysis (NCCA). However, our process for assessing program cost estimates is based on the extent to which the estimate met best practices outlined in GAO s Cost Estimating and Assessment Guide. In conducting our assessment, we examined multiple sources of information, including the Columbia class program life cycle cost estimate, NCCA s independent cost assessment, DOD s Office for Cost Assessment and Program Evaluation s (CAPE) independent cost estimate, and the cost estimate conducted by the Congressional Budget Office (CBO), to determine what methodologies and assumptions differed from the program cost estimate. We also relied on prior experience examining and reporting on the cost performance of Navy shipbuilding programs, issuing 26 reports over the past 10 years. We found, for example, that the cost estimate is based on optimistic labor assumptions which, while in agreement with NCCA s assessment and CBO s estimate, results from our independent assessment of the evidence we reviewed and on our prior work. <8.2. Advance Construction> In paragraph 2, page 2, the Navy stated that it identified super modules and selected components where acceleration would reduce construction schedule risk. We acknowledge in the report that the design for these components will be complete prior to starting construction. However, we continue to believe that starting construction for components of the lead submarine before the arrangements for the submarine are complete increases design and construction risk. Even if the components included in advance construction are fully designed, risk remains for the adjoining and interfacing components within the module that may have ongoing design work, potentially requiring costly and time-intensive rework. <8.3. 
Technology Development> In paragraph 4, page 2, the Navy notes that fully maturing all of the key technologies identified in our 2017 report, such as the advanced propulsor bearing, would require substantial investments in money and time. However, we continue to reinforce that a tenet of achieving design maturity is based on demonstrating a prototype in its final form, fit, and function in a realistic environment, which requires a design resembling the final configuration. <8.4. Integrated Power System Motor Manufacturing Delays> In paragraph 6, page 2, the Navy stated that it does not agree with our characterization that the Navy is continuing to experience manufacturing problems with the electric drive of the integrated power system. DOD stated that while the vendor experienced delays in manufacturing the prototype motor, it has taken proactive measures to deliver the motor to the shipyard, as scheduled. However, the Navy's plan to concurrently test and finalize the design increases risk that any issues identified in testing could delay the delivery of the system to the shipyard. As a result, we continue to identify this as a key risk to the program. Additional details on this system are classified. Appendix V: GAO Contact and Staff Acknowledgments <9. GAO Contact> Shelby S. Oakley, (202) 512-4841 or oakleys@gao.gov. <10. Staff Acknowledgments> In addition to the contact above, the following staff members made key contributions to this report: Diana Moldafsky, Assistant Director; Laura Jezewski; Jessica Karnis; and Nathaniel Vaught. Other contributions were made by Brian Bothwell; Daniel Glickstein; Kurt Gurka; Stephanie Gustafson; and Robin Wilson. Why GAO Did This Study
The Navy has identified the Columbia class submarine program as its top acquisition priority. It plans to invest over $100 billion to develop and purchase 12 nuclear-powered ballistic missile submarines to replace aging Ohio class submarines by 2031.
The National Defense Authorization Act for Fiscal Year 2018 and House Report 115-200 included provisions that GAO review the status of the program. This report examines (1) the Navy's progress and challenges, if any, in meeting design goals and preparing for lead submarine construction; (2) the reliability of the Navy's cost estimate; and (3) how the Navy is implementing a special fund and associated authorities to construct Columbia class submarines.
GAO reviewed Navy and shipbuilder progress reports, program schedules, and construction plans. GAO assessed the Navy's cost estimate and compared it to best practices for cost estimating. GAO also reviewed certain Navy funding and acquisition authorities and interviewed program officials.
This is a public version of a sensitive report that GAO issued in March 2019. Information that the Department of Defense (DOD) deemed sensitive has been omitted.
What GAO Found
The Navy's goal is to complete a significant amount of the Columbia class submarine's design—83 percent—before lead submarine construction begins in October 2020. The Navy established this goal based on lessons learned from another submarine program in an effort to help mitigate its aggressive construction schedule. Achieving this goal may prove to be challenging as the shipbuilder has to use a new design tool to complete an increasingly higher volume of complex design products (see figure). The shipbuilder has hired additional designers to improve its design progress. The Navy also plans to start advance construction of components in each major section of the submarine, beginning in fiscal year 2019, when less of the design will be complete.
The Navy's $115 billion procurement cost estimate is not reliable partly because it is based on overly optimistic assumptions about the labor hours needed to construct the submarines. While the Navy analyzed cost risks, it did not include margin in its estimate for likely cost overruns. The Navy told us it will continue to update its lead submarine cost estimate, but an independent assessment of the estimate may not be complete in time to inform the Navy's 2021 budget request to Congress to purchase the lead submarine. Without these reviews, the cost estimate—and, consequently, the budget—may be unrealistic. A reliable cost estimate is especially important for a program of this size and complexity to help ensure that its budget is sufficient to execute the program as planned.
The Navy is using the congressionally-authorized National Sea-Based Deterrence Fund to construct the Columbia class. The Fund allows the Navy to purchase material and start construction early on multiple submarines prior to receiving congressional authorization and funding for submarine construction. The Navy anticipates achieving savings through use of the Fund, such as buying certain components early and in bulk, but did not include the savings in its cost estimate. The Navy may have overestimated its savings as higher than those historically achieved by other such programs. Without an updated cost estimate and cost risk analysis, including a realistic estimate of savings, the fiscal year 2021 budget request may not reflect funding needed to construct the submarine.
What GAO Recommends
GAO is making three recommendations: that the Navy update the lead submarine cost estimate with cost risk analysis using current cost data, develop a realistic estimate of savings from use of the Fund's authorities, and use this updated cost estimate to inform its budget request for lead submarine construction. DOD concurred with GAO's recommendations.
<1. Background> <1.1. EXIM Financing Product Types> As described in figure 1, to support U.S. exports, EXIM offers four major types of financing products: direct loans, loan guarantees, export-credit insurance, and working capital guarantees. Regardless of type, EXIM's financing products generally have one of three maturity periods: short-term transactions are for less than 1 year; medium-term transactions are from 1 to 7 years long; and long-term transactions are more than 7 years. As we reported in July 2018, for all financing types, EXIM currently conducts a number of preauthorization and postauthorization antifraud activities. See the examples shown in figure 2. <1.2. Fraud Risk Management> Fraud and fraud risk are distinct concepts. Fraud, obtaining something of value through willful misrepresentation, can be challenging to detect and adjudicate because of its deceptive nature. Fraud risk exists when individuals have an opportunity to engage in fraudulent activity, have an incentive or are under pressure (e.g., financial pressures) to commit fraud, or are able to rationalize committing fraud. When fraud risks can be identified and mitigated, fraud may be less likely to occur. Although the occurrence of fraud indicates there is a fraud risk, a fraud risk can exist even if actual fraud has not yet been identified or adjudicated. According to the Standards for Internal Control in the Federal Government, executive-branch agency managers are responsible for managing fraud risks and implementing practices for combating those risks. Specifically, federal internal control standards call for agency management officials to assess the internal and external risks (including fraud risks) their entities face as they seek to achieve their objectives. The standards state that as part of this overall assessment, management should consider the potential for fraud when identifying, analyzing, and responding to risks. Risk management is a formal and disciplined practice for addressing risk and reducing it to an acceptable level. The leading practices in the Fraud Risk Framework call for agencies to identify inherent fraud risks affecting the program, examine the suitability of existing fraud controls, and then prioritize mitigating residual fraud risks, that is, risks remaining after antifraud controls are adopted. Specifically, according to the assess component of the Fraud Risk Framework, managers who effectively assess fraud risks attempt to fully consider the specific fraud risks the agency or program faces, analyze the potential likelihood and impact of fraud schemes, and then ultimately document prioritized fraud risks. Moreover, managers can use the fraud risk assessment process to determine the extent to which controls may no longer be relevant or cost-effective.
Leading practices that are consistent with this component include conducting quantitative or qualitative fraud risk assessments at regular intervals, or both, of the likelihood and impact of inherent risks on the program s objectives, and determining the agency s risk tolerance for the inherent fraud risks; identifying specific sources for gathering information about fraud risks, including information on fraud schemes that are reflected in adjudicated cases of fraud; examining the suitability of existing fraud controls for preventing fraud and mitigating fraud risks identified; and documenting in the program s fraud risk profile the analysis of the types of inherent fraud risks assessed, their perceived likelihood and impact, managers risk tolerance, and the prioritization of the inherent fraud risks and any residual fraud risks. As we reported in July 2018, the Fraud Reduction and Data Analytics Act of 2015 requires the Office of Management and Budget (OMB) to establish guidelines that incorporate the leading practices of GAO s Fraud Risk Framework. The act also requires federal agencies to submit to Congress a progress report each year, for 3 consecutive years, on implementation of the risk management and internal controls established under the OMB guidelines. OMB published guidance under OMB Circular A-123 in 2016 affirming that federal managers should adhere to the leading practices identified in the Fraud Risk Framework. As we reported in December 2018, EXIM identifies itself as subject to the act, and, as such, follows it. The Fraud Risk Framework is also aligned with federal internal control standards, specifically Principle 8 ( Assess Fraud Risk ) of the Green Book. Federal internal control standards also state that excessive pressures, such as financial pressures (e.g., delinquent federal debt), can pose a fraud risk factor to agency programs as these pressures can provide an incentive or motive to commit fraud. Although the existence of financial pressure alone does not necessarily indicate that fraud exists or will occur, financial pressure is often present when fraud does occur. <1.3. Delinquent Federal Debt and EXIM Financing Programs> Applicants for EXIM programs who have delinquent federal debt may not be able to obtain certain types of financing until they resolve their debts. Specifically, under 31 U.S.C. 3720B, applicants who are delinquent on federal nontax debts may not receive federal financial assistance, including such assistance provided by EXIM, until they satisfactorily resolve the delinquency (e.g., pay in full or negotiate a new repayment plan). However, 31 U.S.C. 3720B also provides that an agency head may waive this restriction. Additionally, OMB s Circular No. A-129, Policies for Federal Credit Programs and Non-Tax Receivables, prescribes to agencies the policies, procedures, and standards for screening program participants to determine whether they are delinquent on any federal debt when applying to federal credit programs. <2. EXIM Reported Antifraud Controls for Mitigating Fraud Risks Identified and a Fraud Risk Assessment That Considered Those Risks Closed Cases of Fraud Generally Involve Four Fraud Risk Factors> We identified fraud risks generally involving four overall fraud risk factors by examining EXIM-associated court cases of fraud adjudicated from calendar year 2012 through calendar year 2017. 
We then communicated these fraud risks to EXIM, and EXIM officials reported examples of existing controls it uses to help detect and mitigate these fraud risks. EXIM also provided documentation reflecting its efforts to conduct a fraud risk assessment that considered various fraud risks affecting its major financing product lines, including fraud risks we identified during this review. We identified fraud risks generally involving four overall fraud risk factors by examining 44 EXIM-associated closed court cases of fraud adjudicated from calendar year 2012 through calendar year 2017. Specifically, the various fraud risks we identified overall involved one or more of the fraud risk factors illustrated in figure 3 below: opportunities to falsify self-reported information on applications or financial pressures that potentially incentivized participants or employees to commit fraud; opportunities to circumvent or take advantage of EXIM or lender opportunities to circumvent the intent of EXIM s programs by diverting loan proceeds and other EXIM financing for personal use or benefit instead of for the export of U.S. goods. See appendix I for a summary of these 44 cases we reviewed. These 44 cases illustrate the financial risks associated with fraud against EXIM. Federal and state courts combined have ordered restitution of $82.4 million in the 44 adjudicated cases, but much of that restitution has not yet been paid. For example, as of October 2018, the total remaining unpaid restitution amount is $71.6 million, or over 80 percent. In one fraud case we reviewed, which was adjudicated in 2013, a federal court ordered a convicted U.S. exporter to pay EXIM $8.6 million in restitution for the fraud that he committed in a loan guarantee program. Since 2013, the participant has paid back $25.00 of this amount. <2.1. EXIM Reported Antifraud Controls for Mitigating Fraud Risks Identified in Closed Cases> EXIM reported having existing antifraud controls to mitigate the fraud risks we identified. Specifically, we communicated to EXIM the fraud risks we identified from our review of the 44 adjudicated cases. In response, EXIM officials described general antifraud controls the agency currently uses to help detect and mitigate each of the fraud risks we identified. The officials stated that EXIM has experience with all the fraud risks we identified and stated that they were generally confident that EXIM s antifraud controls were appropriate for mitigating the risks. EXIM officials consider many of the fraud risks that we identified as risks that could impact any of the agency s financing programs (i.e., credit insurance, loan guarantees, direct loans, or working capital guarantee programs). EXIM officials provided examples of the general antifraud controls that they said EXIM uses to mitigate the fraud risks we identified across all agency financing products. According to EXIM officials and as illustrated in figure 4 below, these controls include: fraud prevention and detection procedures; due diligence standards; and a list of red flags that EXIM staff should be aware of and is used to identify indicators of potential fraud and corruption that may appear on EXIM transaction documents. Officials said that their confidence in the controls stems from seeing a reduction in fraud cases since the early 2000s after these antifraud controls were put in place. 
EXIM officials clarified that this confidence does not stem from completing a comprehensive fraud risk assessment of fraud risks impacting all of its financing products consistent with the leading practices in the Fraud Risk Framework. <2.2. EXIM s Fraud Risk Assessment Considered Fraud Risks Identified> EXIM also provided documentation reflecting its efforts to conduct a fraud risk assessment that considered various fraud risks affecting its major financing product lines, including fraud risks we identified during this review. EXIM officials said that the fraud risks we identified were generally already known to EXIM as they relate to or are very similar to those fraud risk factors contained in EXIM s list of red flags. EXIM officials acknowledged that assessing its fraud risks and evaluating the agency s existing antifraud controls may indicate opportunities for EXIM to further adapt EXIM s antifraud controls to mitigate any residual fraud risks within its tolerance level. Such assessments can further help EXIM mitigate fraud and the resulting effects across all product lines before they occur, which includes the length of time it can take for EXIM to fully recover from restitution losses after fraud has been perpetrated, as illustrated in the 44 cases presented in appendix I. <3. EXIM Has Procedures for Detecting Delinquent Federal Debt Owed by Applicants and Participants but Is Missing Additional Opportunities to Use Readily Available SAM Data to Do So> EXIM has procedures for detecting delinquent federal debt owed by EXIM applicants and participants. However, EXIM is missing additional opportunities to use readily available SAM data to identify ineligible applicants or participants that may have delinquent federal debt, and to use such data to determine eligibility or assess repayment fraud risk. <3.1. EXIM Has Procedures to Detect Delinquent Federal Debt Owed by Applicants and Participants> EXIM has procedures to detect delinquent federal debt owed by applicants and participants that include reviewing their credit reports and requiring applicants to certify that they and other participants do not have such delinquent debt. Under 31 U.S.C. 3720B, applicants who are delinquent on federal nontax debts may not receive federal financial assistance, including direct loans, loan guarantees, or loan insurance until they satisfactorily resolve the delinquency (e.g., pay in full or negotiate a new repayment plan). 31 U.S.C. 3720B does not address delinquent federal tax debt; however, such delinquent federal debt may also pose a fraud risk or repayment fraud risk to EXIM s financing programs. Additionally, OMB Circular No. A-129 prescribes to agencies the policies, procedures, and standards for screening program participants to determine whether they are delinquent on any federal debt when applying to federal credit programs, including recommending that agencies ask applicants to self-certify on their applications that they have no delinquencies; requiring agencies to obtain and review applicants credit reports; and encouraging agencies to use appropriate databases, such as the Department of the Treasury s Do Not Pay portal sources to identify delinquent federal debtors during the application screening process. According to EXIM officials, the agency employs procedures to ensure its policies and processes meet these requirements for applicable financing products. 
Specifically, and as illustrated in figure 5 below, these procedures include reviewing the following: Self-certifications: EXIM applications for relevant financing programs include a self-certification by the applicant that the applicant does not have delinquent federal debt. However, as we have reported in the past, relying on applicants to self-report adverse actions on their applications, instead of verifying such information, could cause an agency to miss opportunities to develop a more-complete picture of the applicants. Credit reports: EXIM obtains credit reports for applicants and participants in some financing products. In particular, EXIM s internal Loan Guarantee and Credit Insurance Manual of 2015 communicates the 31 U.S.C. 3720B restriction to loan officers and instructs them to review the borrower s credit report to check whether the borrower is delinquent on any federal debt. If the loan officer finds that the credit report reflects such delinquent federal debt, the manual further instructs the loan officer to advise and request guidance from EXIM s Trade Finance Director and the Office of General Counsel. However, as we have reported in the past, some delinquent federal tax debt may not appear on the credit reports unless the Internal Revenue Service has filed a lien on the delinquent federal tax debt. World Check: EXIM, through the assistance of a third-party vendor, also makes use of some data sources listed in the Do Not Pay sources as part of its prescreening application process and possibly during postauthorization risk-based reviews. Specifically, EXIM officials told us that EXIM uses Thomson Reuters s World Check database to identify federal debts owed by applicants as part of its Character, Reputational, and Transaction Integrity (CRTI) review process that is managed by EXIM s Credit Review and Compliance Division. The World Check database currently checks over 20 different watch lists and other databases, including lists of entities excluded from doing business with the federal government maintained in GSA s SAM. According to EXIM, other sources in the World Check database that reveal such federal debts could also lead indirectly to the discovery of delinquent federal debt. However, as discussed below, this check of SAM does not involve a check of delinquent federal debt. This CRTI review process is conducted during the underwriting (i.e., the preauthorization review) phase and may occur throughout the life cycle of transactions, such as during EXIM s postauthorization risk-based reviews. EXIM officials told us that, as part of this process, loan officers or other EXIM officials send the names of applicants to EXIM librarians, who perform a manual search of the World Check database, review results, and return relevant results to EXIM officials for their consideration. EXIM officials noted that this process can be challenging, particularly when librarians perform searches on applicants with common names, which produce many results that are not useful. EXIM officials told us that EXIM does not track information on instances in which an applicant s delinquent federal debt prevents a transaction from moving forward or prevents a specific applicant s participation in a transaction. Consequently, EXIM officials told us that EXIM has no records of this happening. 
However, as described in greater detail below, EXIM does not make use of readily available SAM data to identify delinquent federal debts owed by applicants and participants, which could limit its ability to detect instances in which applicants and participants owe these debts. <3.2. EXIM Is Missing Additional Opportunities to Use Readily Available SAM Data to Detect Applicants and Participants That May Have Delinquent Federal Debt> EXIM is missing additional opportunities to use readily available SAM registration data to identify potentially ineligible applicants and participants that may have delinquent federal debt or may otherwise pose a repayment fraud risk. Specifically, while EXIM employs procedures that may reveal applicants' delinquent federal debts, as described above, EXIM's procedures for identifying applicants and participants with delinquent federal debt do not include a search of a specific data element in the SAM database that can be used to detect delinquent federal debtors. The data element we refer to here is the Debt Subject to Offset flag, which may reflect both nontax and tax delinquent federal debts owed. As mentioned previously, SAM is a government-wide information system that federal agencies can use to obtain information on businesses that do business with the federal government, including an entity's Debt Subject to Offset status. The Debt Subject to Offset data element in SAM indicates that the entity potentially has a delinquent federal debt subject to collection under the Treasury Offset Program. The GSA officials who maintain the SAM database told us that all federal agencies have the legal authority to use the SAM registration database free of charge. Specifically, all federal agencies can use this database to manually search by an entity's name, Data Universal Numbering System number, or Tax Identification Number to detect whether the entity potentially has delinquent federal debt, such as by identifying whether an entity's SAM record contains the Debt Subject to Offset flag. Further, GSA officials also told us that all federal agencies are able to request batches of SAM registration data free of charge and match these data to agency data by entities' names, Data Universal Numbering System numbers, or Tax Identification Numbers to identify entities that may have the Debt Subject to Offset flag in SAM, among other available data. Performing data analytics, such as batch matching, on available data is a leading practice cited in the Fraud Risk Framework that we have reported can help improve agency efforts to combat fraud. In particular, we have found in prior work that using available data to verify that EXIM's transaction applicants are not delinquent on federal debt can help EXIM assure applicant eligibility is consistent with federal guidance, provide reasonable assurance of repayment, and help prevent fraud. We have also found that using available data to independently verify self-reported delinquent federal debt information, such as self-reported information on delinquent federal tax debt owed, is a key detection and monitoring component of fraud prevention. We identified additional opportunities for EXIM to manually use SAM's online database or data-matching approaches to identify applicants or participants with potential delinquent federal debt.
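The data-matching approach described here can be made concrete with a short sketch. The snippet below is a minimal, hypothetical illustration of matching agency transaction records against a SAM registration extract on Tax Identification Numbers and Data Universal Numbering System numbers to surface Debt Subject to Offset flags; the entity names, identifiers, and column labels are assumptions for illustration and do not reflect EXIM's or GSA's actual data schemas or GAO's analysis.

```python
import pandas as pd

# Illustrative data only; names, identifiers, and column labels are hypothetical,
# not EXIM's transaction records or GSA's SAM schema.
exim = pd.DataFrame({
    "participant_name": ["Exporter A", "Exporter B", "Lender C"],
    "tax_id": ["12-3456789", "98-7654321", "55-1234567"],
    "duns": ["111111111", "222222222", "333333333"],
})
sam = pd.DataFrame({
    "tax_id": ["123456789", "987654321"],
    "duns": ["111111111", "222222222"],
    "debt_subject_to_offset": ["Y", "N"],
})

# Normalize the identifiers so the join keys compare cleanly.
for df in (exim, sam):
    for col in ("tax_id", "duns"):
        df[col] = df[col].str.replace(r"\D", "", regex=True)

# Batch match on Tax Identification Number and on DUNS number, then combine the results.
by_tin = exim.merge(sam, on="tax_id", how="inner", suffixes=("", "_sam"))
by_duns = exim.merge(sam, on="duns", how="inner", suffixes=("", "_sam"))
matches = pd.concat([by_tin, by_duns], ignore_index=True).drop_duplicates(subset="participant_name")

# Keep only participants whose SAM record carries the Debt Subject to Offset indicator.
flagged = matches[matches["debt_subject_to_offset"] == "Y"]
print(flagged[["participant_name", "tax_id", "duns"]])
```

In practice, a flagged match would be only an indicator that warrants follow-up on the specific facts of the debt; as discussed below, a Debt Subject to Offset flag alone does not establish that a transaction should have been suspended.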
Specifically, we registered in SAM to conduct several manual searches (by entities Data Universal Numbering System numbers, Tax Identification Numbers, and names) and confirmed that it can be used to conduct such searches without incurring any external costs charged by GSA. For example, we conducted two Data Universal Numbering System number searches and found two active EXIM participants appearing in SAM s registration database with a Debt Subject to Offset flag. We also obtained historical SAM data from GSA and EXIM transaction data and confirmed that these data sources could be used to identify EXIM applicants and participants with potentially delinquent federal debt in a batch match (rather than manual, case-by-case searches). As illustrated in our batch-matching results below, we found this data-matching process can provide an opportunity to match these data sets using the Tax Identification Numbers and Data Universal Numbering System numbers for the entities in both data sets. Our batch-matching analyses indicated that, from calendar year 2014 through calendar year 2016, EXIM authorized transactions that had an aggregate authorization value of approximately $34.3 billion. Of that amount, we found the following: An aggregate authorization value of about $1.7 billion was associated with 32 U.S.-based companies that had a delinquent federal debt indicator in SAM in the same month that these transactions were authorized. The transactions mostly involved U.S.-based applicants and exporters. As mentioned above, associated parties we reviewed included not only the applicant, but also participants involved, including the borrower, buyer, and exporter, which may or may not be the applicant. While the results of this analysis do not mean that EXIM should have suspended these transactions in accordance with 31 U.S.C. 3720B, these results nonetheless indicate that the data in SAM that indicate delinquent federal debt could provide an opportunity for EXIM to identify important indicators of applicants or other transaction participants with potential delinquent federal debt when determining their program eligibility and assessing any related fraud risks or repayment risks they present during EXIM s preauthorization CRTI reviews. Because the Debt Subject to Offset flag may indicate either nontax debts or tax debts, it is possible that some of these entities owed delinquent federal nontax debts that are applicable under 31 U.S.C. 3720B, indicating EXIM should have considered suspending these transactions. However, it is also possible that some of these entities owed delinquent federal tax debts that are not applicable under 31 U.S.C. 3720B, but that may pose a fraud risk or repayment risk nonetheless. By using the Debt Subject to Offset flag as an indicator of these delinquent federal debts and gathering additional information on the specific facts and circumstances of each case, EXIM would be better positioned to assess the relevant compliance, fraud, and repayment risks an applicant s or participant s delinquent federal debt may pose. An aggregate authorization value of about $4.1 billion was associated with 97 U.S.-based companies that had a delinquent federal debt indicator in SAM during the transaction maturity period (i.e., after the month they were approved, but before the transactions maturity date). These transactions mostly involved U.S.-based applicants and exporters. 
As mentioned above, associated parties we reviewed included not only the applicant, but also participants involved, including the borrower, buyer, and exporter, which may or may not be the applicant. 31 U.S.C. 3720B may prevent applicants with federal financial debts from obtaining loans, guarantees, and insurance; thus, it does not apply to any delinquent federal debt accrued after loan approval. However, we looked at potential delinquent debt accrued after approval because delinquent debt accrued after approval and during the transaction maturity period might affect EXIM s view of a financing product s repayment risk. Further, EXIM already conducts similar postauthorization monitoring to identify such risks through its use of World Check as part of its CRTI process described above. Thus, these results nonetheless illustrate that EXIM can use SAM data during EXIM s postauthorization CRTI reviews to identify transaction participants with potential delinquent federal debt and determine the extent to which they may pose a repayment fraud risk. Prior to sharing our results with EXIM, EXIM officials told us that they have access to SAM entity registration records, but they believe searching the SAM registration database is a time-consuming process that should be reserved for rare circumstances. Further, EXIM officials also told us that using the SAM registration database to identify applicants or participants that have the Debt Subject to Offset flag in SAM would yield few results because the vast majority of their financing program participants are foreign-based entities, and thus would not also be contractors for the U.S. government and registered in SAM. However, we identified many U.S.-based entities that had a delinquent federal debt indicator either in the month a transaction was approved, or during the transaction s maturity period, by searching in the SAM database and analyzing SAM data for EXIM applicant and participants, as described above. Further, it is not clear whether performing manual searches or batch matches with SAM data to identify delinquent federal debtors would be any more time-consuming than EXIM s current procedures for doing so, which include manual searches of World Check and obtaining and reviewing credit reports, as described above. When we met with EXIM officials to communicate our batch-matching results above, they expressed concern that these results could imply that EXIM is doing business with applicants or participants with delinquent federal debt. They then indicated that they were interested in obtaining SAM registration data so that they could determine whether it would be feasible for them to perform the same type of analysis that we performed. In a subsequent meeting, EXIM officials informed us that they were also able to obtain current SAM registration data, analyze the SAM data against active EXIM participant data, and find dozens of active EXIM participants with the Debt Subject to Offset flag in SAM. The results of our analyses, as well as EXIM s own experience with the SAM data, suggest EXIM also has an additional and practical opportunity to incorporate searches of SAM entity registration data as part of its postapproval monitoring of transactions to enhance its monitoring of and response to risks in ongoing transactions. Standards for Internal Control in the Federal Government state that management should use quality data to achieve agency objectives. 
For example, this could include agencies obtaining relevant operational, financial, or compliance-related data from reliable internal and external sources in a timely manner based on identified information requirements, and then using such data to make informed decisions and evaluate performance in achieving program objectives and addressing risks. Without also pursuing available debt data in SAM's registration database, as an additional layer of due diligence, to identify applicants with delinquent federal debt during underwriting and compliance reviews, EXIM is potentially forgoing practical opportunities to use such data when determining applicants' program eligibility and to adopt leading practices for managing repayment fraud risks across EXIM's financing programs. In particular, such available SAM data can provide opportunities to verify independently the applicants' self-certification of delinquent federal debts they owe and assess whether the applicants may have misrepresented their delinquent federal debt status on their applications, which is a fraud risk in the application process; detect potential delinquent federal debts that are not apparent in credit reports; and make informed eligibility decisions during preauthorization CRTI reviews and assess repayment fraud risk during postauthorization CRTI reviews. <4. Conclusions> EXIM assumes the credit and country risks that the private sector is unable or unwilling to accept, including the risk of losses due to fraud. EXIM's financing products face various fraud risks, and EXIM has begun to take steps to consider these fraud risks as part of a full fraud risk assessment, as we recommended in July 2018. However, because it remains unclear whether EXIM's actions fully respond to the recommendations of our July 2018 report, we will continue to monitor EXIM's progress in fully assessing its fraud risks. EXIM also employs procedures to detect delinquent federal debt owed by EXIM applicants and participants. However, EXIM is missing opportunities to use readily available SAM data to identify applicants or participants that may misrepresent their delinquent federal debt status and pose a repayment fraud risk to EXIM financing programs. Applicants or participants with delinquent federal debt could be one of many repayment fraud risks that could indicate an increased risk of nonrepayment and incentives to commit fraud against EXIM. EXIM officials believe searching SAM is a time-consuming process that would yield few results. However, manually searching SAM's online registration database to determine whether an applicant or participant may have a Debt Subject to Offset flag may not be any more time-consuming than what EXIM currently performs through its preauthorization or postauthorization CRTI reviews. Nevertheless, we demonstrate in this report the practicality of using such data and illustrate results from multiple approaches, such as batch matching, without incurring any external costs charged by GSA. By assessing the practicality of searching SAM data, EXIM may determine that this source of data provides an additional tool for combating fraud. Implementing these antifraud activities could further help EXIM verify program eligibility, identify repayment fraud risk, and provide EXIM with reasonable assurance that it is effectively and efficiently carrying out its mission of supporting U.S. jobs and the export of U.S. goods. <5.
Recommendations for Executive Action> We are making the following two recommendations to EXIM: EXIM's chief operating officer should direct EXIM's Credit Review and Compliance Division to assess and document the practicality of incorporating into its preauthorization CRTI reviews searches of data elements in SAM that indicate delinquent federal debts owed by applicants, and, if practical, implement relevant approaches such as manual searches or batch matching. (Recommendation 1) EXIM's chief operating officer should direct EXIM's Credit Review and Compliance Division to assess and document the practicality of incorporating into its postauthorization CRTI reviews searches of data elements in SAM that indicate delinquent federal debts owed by applicants and participants, and, if practical, implement relevant approaches such as manual searches or batch matching. (Recommendation 2) <6. Agency Comments and Our Evaluation> We provided a draft of this report to EXIM for review and comment. In its written comments, reproduced in appendix II, EXIM concurred with our recommendations and stated that it will move forward to implement them. EXIM also provided technical comments, which we incorporated as appropriate. In its written comments, EXIM noted a number of points it referred to as key concerns. These points do not disagree with our findings, conclusions, or recommendations. Specifically, EXIM stated that the 44 cases we reviewed involved transactions that were approved between 2002 and 2012 and that it will continue to work with the Department of Justice to collect restitution payments. Additionally, EXIM stated that it is in full compliance with 31 U.S.C. 3720B and the related provisions of OMB Circular A-129 guidance regarding restrictions on doing business with delinquent federal debtors. However, assessing EXIM's compliance with 31 U.S.C. 3720B or OMB Circular A-129 was outside the scope of this report. Finally, for the purpose of implementing our recommendations, EXIM requested the data pertaining to the U.S.-based companies that we found to have a delinquent federal debt indicator in SAM. To identify those companies, we used (1) an extract of data that EXIM provided to us, and (2) GSA SAM data, which EXIM told us it can and has already obtained directly from GSA. We will provide EXIM with a copy of the EXIM data it requested. However, we believe EXIM will be better positioned to assess the practicality of checking the SAM delinquent federal debt flag by continuing to obtain the SAM data directly from GSA. We are sending copies of this report to the appropriate congressional committees, the president and board chairman of EXIM, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-6722 or bagdoyans@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Summary of GAO Review of 44 Cases Adjudicated from Calendar Years 2012 through 2017 The table below summarizes the information we reviewed during our review of the 44 Export-Import Bank of the United States (EXIM)-associated cases of alleged fraud that we were able to identify and determine were adjudicated from calendar years 2012 through 2017.
Such information includes financing product types, dates adjudicated, fraud schemes, fraud risk factors involved, and the amount of restitution owed and paid to EXIM. As mentioned earlier, the fraud risks we identified in these 44 cases related to one or more of the following four fraud risk factors: (1) opportunities to falsify self-reported information on applications or transaction documents, (2) financial pressures that potentially incentivized participants or employees to commit fraud, (3) opportunities to circumvent or take advantage of EXIM or lender controls, or (4) opportunities to circumvent the intent of EXIM's financing programs by diverting loan proceeds and other EXIM financing for personal use or benefit instead of for the export of U.S. goods. Appendix II: Comments from the Export-Import Bank of the United States Appendix III: GAO Contact and Staff Acknowledgments <7. GAO Contact> <8. Staff Acknowledgments> In addition to the contact named above, Jonathon Oldmixon (Assistant Director), Flavio Martinez (Analyst in Charge), Mason Calhoun, Marcus Corbin, Anthony Costulas, Adam Cowles, David Dornisch, Heather Dunahoo, Paulissa Earl, Colin Fallon, Dennis Fauber, Jennifer Felder, Dragan Matic, Maria McMullen, Christopher H. Schmitt, Albert Sim, Sabrina Streagle, and Steve Westley made key contributions to this report. Why GAO Did This Study
As the export credit agency of the United States, EXIM's mission is to help support U.S. jobs by facilitating the export of U.S. goods and services through direct loans, loan guarantees, working capital guarantees, and credit insurance. In September 2018, the total outstanding and undisbursed amount of these products and unrecovered default claims was about $60.5 billion, according to EXIM.
The Export-Import Bank Reform Reauthorization Act of 2015 included a provision for GAO to review EXIM's antifraud controls. This report (1) describes key antifraud controls EXIM says it has for mitigating fraud risks identified by GAO, and describes EXIM's efforts to perform a fraud risk assessment that considers these fraud risks; and (2) identifies EXIM's procedures to detect delinquent federal debt owed by applicants and participants, and assesses additional opportunities to use readily available data to do so. GAO analyzed 44 EXIM-associated court cases of fraud adjudicated from calendar years 2012 through 2017, examined EXIM transaction data, and interviewed EXIM and GSA officials. GAO also analyzed data identifying delinquent federal debt as well as EXIM's procedures for doing so.
What GAO Found
The Export-Import Bank of the United States (EXIM) reported having antifraud controls in place for mitigating the fraud risks that GAO identified and communicated to EXIM officials. GAO reviewed 44 EXIM-associated court cases involving fraud and identified fraud risks involving the four fraud risk factors illustrated in the figure below. GAO communicated these fraud risks to EXIM officials, and they provided examples of antifraud controls they use to help mitigate these fraud risks for their major financing products. In February 2019, EXIM also provided documentation reflecting its efforts to conduct a fraud risk assessment that considered various fraud risks affecting its major financing product lines, including fraud risks GAO identified during this review.
EXIM has procedures to identify applicants and participants with delinquent federal debt, such as obtaining applicants' credit reports that may indicate these debts when they apply to EXIM's financing programs. However, EXIM is missing additional opportunities to use readily available data containing delinquent federal debt indicators from the General Services Administration's (GSA) System for Award Management (SAM) to detect applicants and participants that may have delinquent federal debt. Federal law states that applicants who are delinquent on federal nontax debts may not receive federal direct loans, loan guarantees, or loan insurance until the delinquent debt is satisfactorily resolved. Using data from SAM, GAO found that, from calendar years 2014 through 2016, EXIM authorized transactions that had an aggregate authorization value of about $1.7 billion and were associated with 32 U.S.-based companies that had a delinquent federal debt indicator in SAM in the same month EXIM authorized these transactions . While these results alone do not mean EXIM should have suspended these transactions, they do indicate that there is a practical opportunity to use SAM data to help determine applicants' eligibility. Without assessing the practicality of pursuing such readily available data, EXIM is potentially forgoing opportunities to perform additional due diligence that would help inform its decisions about applicants' and participants' program eligibility and fraud risks.
What GAO Recommends
GAO is recommending that EXIM assess the practicality of using available SAM data and data-analytical approaches to detect applicants and participants with potential delinquent federal debt. EXIM concurred with GAO's recommendations.
GAO-19-609 <1. Background> <1.1. Initial USAID Reform Efforts> In response to Executive Order 13781, USAID established the Transformation Task Team (T3) in June 2017 to plan and lead the agency's reform efforts. As noted in a previous GAO report, USAID launched several internal reform efforts and participated in a joint State-USAID redesign process during mid-2017, which resulted in a joint reform plan. USAID also developed a supplemental reform plan that focused on issues internal to USAID. State and USAID submitted these plans to OMB in September 2017. In January 2018, USAID suspended its participation in the joint State-USAID redesign process and continued to plan and implement its own internal reforms. According to USAID, its reform efforts are intended to support its bilateral partners to become more self-reliant and capable of leading their own development, with the ultimate goal of ending the need for foreign assistance. To achieve this goal, USAID identified five objectives, referred to as desired outcomes, as the basis for its reform efforts. The five objectives are: (1) establish metrics and approaches to help host country recipients of assistance become more self-reliant; (2) restructure bureaus and offices to strengthen the organization's core capabilities; (3) advance national security interests; (4) improve human capital processes; and (5) maximize taxpayer investments in foreign assistance. According to USAID officials, OMB generally approved the USAID reform plans and associated projects by March 2018. Figure 1 shows the key events in the initial phases of USAID's reform efforts up to the point OMB provided this approval. <1.2. Key Practices for Agency Reform Efforts> In developing our June 2018 report to assist Congress, OMB, and agencies in assessing agency reform plans, we reviewed our prior work on key practices for organizational transformations; collaboration; government streamlining and efficiency; fragmentation, overlap, and duplication; and high risk and other long-standing agency management challenges. The resulting report includes 58 key questions to aid in assessing reform efforts. (See app. II for a complete list of the 58 key questions.) The questions are organized into four broad categories and 12 subcategories, as shown in table 1. These subcategories encompass the key practices that we used to assess USAID's reform efforts. For the purposes of this report, we determined that the subcategory of Workforce Reduction Strategies was not applicable to our assessment because USAID is not undertaking workforce reductions as part of its reform effort. <2. USAID Has Completed 19 Reform Projects, Is Implementing 12, and Is Planning One Other as of July 2019> USAID's reform efforts consist of a total of 32 reform projects: 31 projects being implemented by USAID's Transformation Task Team (T3) and an additional Human Resources Transformation project that predates USAID's other reform efforts. As shown in table 2, as of July 2019, USAID has completed 19 projects and is implementing 12 others, all of which USAID intends to complete by 2021. The task team also has one project still in the planning phase. In order to develop and implement the 32 reform projects, USAID has identified approximately $33 million in estimated costs associated with its reforms up through April 2019.
According to USAID, this total includes about $3 million to develop the T3 reform efforts in fiscal year 2018 and approximately $6 million to implement its reform efforts over a period of 2 years, which USAID assumes will cover fiscal years 2019 and 2020. In addition, USAID estimated that, as of April 2019, it has expended about $24 million in fiscal year 2017 2019 funds for human resource efforts that are associated with its ongoing Human Resources Transformation project. <3. Reform Efforts Generally Addressed Nearly All Key Practices, but Gaps Exist Related to Performance Measures and Strategic Workforce Planning> <3.1. USAID Generally Addressed Nine Key Practices for Planning and Implementing Agency Reforms> As shown in table 3, USAID s reform efforts generally addressed nine of the key practices that we previously identified as critical to the success of agency reforms, and its reform efforts partially addressed two others. <3.1.1. Determining the Appropriate Role of the Federal Government> USAID determined the appropriate role of the federal government by considering the private sector and governments ability to manage responsibility for and invest their own resources into foreign development and humanitarian assistance programs. Our prior work shows it is important for agencies engaged in reforms to reexamine the role of the federal government in carrying out specific missions and programs, policies, and activities by reviewing their continued relevance and determining whether the federal government is best suited to provide that service or if it can be provided by some other level of government or sector more efficiently or effectively. In line with the USAID Administrator s vision of ending the need for foreign assistance, USAID has developed several projects under its Journey to Self-Reliance objective to increase bilateral partner countries ability to plan, finance, and implement solutions to solve their own development challenges. Beginning in mid-2017, USAID launched a process to identify a set of third-party metrics for assessing a country s level of self-reliance. In June 2018, USAID announced the identification of 17 metrics to capture a country s overall commitment and capacity for self-reliance. The publicly available metrics cover areas such as open and accountable governance; inclusive development; economic policy; and the relative capacities of the government. Starting in fiscal year 2019, USAID produced 136 country roadmaps, or tools for measuring each low- and middle-income country s overall level of self-reliance through its performance on the 17 metrics. USAID is using the country roadmaps as a tool to inform strategic decision-making and resource allocation processes, better focus USAID s investments, and indicate when a recipient country should be considered for a strategic transition to a new partnership model with the U.S. government. For example, USAID identified Albania as a country to pilot this concept, which envisions a new partnership model for a country exhibiting an advanced level of self-reliance and the development of a strategy and plan for how to shift to this new model over time. In addition, USAID s Journey to Self-Reliance efforts include a project to expand its engagement with the private sector. 
According to a USAID document, donor agencies are unable to fulfill their goals for sustainable development on their own; in contrast, the private sector has the scale and resources to address the complexity of challenges that developing countries face in becoming self-reliant. In December 2018, USAID released a new Private Sector Engagement Policy intended to increase and deepen the collaboration of USAID staff and its partners with the private sector across all areas of the agency s work. <3.1.2. Involving Employees and Key Stakeholders> USAID involved its employees and key stakeholders in its internal reform efforts. Our prior work has shown that it is important for agencies to directly and continuously involve not only their employees but also key stakeholders in the development of major reforms. USAID has involved its employees in its reform efforts through a variety of means. For example, since 2017, USAID reform leaders have conducted town-hall style meetings with employees in Washington, D.C., and in the field. USAID reform leaders have also briefed senior management, bureau- and office-level leadership, and mission directors about reform efforts. In addition, they have communicated reform updates in the agency s internal newsletter and have informed employees of reform projects through multiple venues, such as web-based seminars and agency notices. USAID has also involved key stakeholders, including Congress and State, in its reform efforts. The Administrator has testified before Congress, and USAID officials have briefed Congress about the status of the reform efforts. USAID also submitted reorganization proposals to congressional committees for review and approval. Moreover, USAID engaged with State officials at the senior and working levels on several of its reform projects, including its self-reliance metrics, strategic transitions, and workforce flexibility and mobility projects. However, T3 officials noted that its engagement with State has been hindered by leadership challenges at State, including the lack of a single official or entity at State with responsibility for coordinating with USAID on reform efforts. In our prior work, we found a lapse in State s leadership focus on reform efforts, and we recommended that State establish a dedicated team to manage the implementation of all reform projects that the Secretary of State decides to pursue. <3.1.3. Using Data and Evidence> USAID s T3 used various sources of evidence and data to design its reform plans, including recommendations made by external organizations and employee feedback. Our prior work has shown that agencies are better equipped to address management and performance challenges when managers effectively use data and evidence, such as from program evaluations and performance data that provide information on how well a program or agency is achieving its goals. USAID developed its reform projects based on research and recommendations from various sources, including GAO, the USAID Office of Inspector General, USAID s Advisory Committee on Voluntary Foreign Aid, think tanks, and coalitions of organizations focused on international development. For example, USAID s reform proposal to merge and restructure its Offices of U.S. Foreign Disaster Assistance and Food for Peace into the Bureau for Humanitarian Assistance stems, in part, from the results of an in-depth, external study that USAID commissioned in 2016, which entailed significant consultations with internal and external stakeholders as well as data analysis. 
As another example, USAID s Explore Delivery of Human Resources Operations project was based, in part, on two GAO reports recommending steps to improve the collection of contract data. In May 2017, State launched a listening tour intended to gather ideas and feedback from State and USAID employees on the joint State-USAID redesign process. As a key component of this outreach effort, State hired a contractor to design and administer a confidential, online listening survey, which was sent to State and USAID employees. The listening survey identified pain points, recommendations, and themes that informed USAID s reform plans. For example, USAID s projects aimed at reorganizing its structure address a listening tour theme regarding the need to better align its bureau and office functions with USAID s core mission. In another example, some of USAID s human resource reform projects address another listening tour theme related to the need to support USAID employees in focusing more of their attention on achieving strategic priorities and less time on inefficient and burdensome administrative tasks. <3.1.4. Addressing Fragmentation, Overlap, and Duplication> According to USAID, it sought to reduce or better manage fragmentation, overlap, and duplication through multiple reform efforts, including its restructuring projects, its consolidated framework for private sector engagement, and efforts aimed at redefining and rationalizing roles and responsibilities in areas such as countering violent extremism and civilian- military coordination. In our prior work, we have identified actions that agencies could take to achieve greater efficiency or effectiveness by reducing or better managing programmatic fragmentation, overlap, and duplication. In July and August 2018, USAID sent to various congressional committees for approval a series of initiatives to restructure its bureaus and offices to streamline operations and gain efficiencies. USAID included a proposal to restructure the Office of the Administrator by adding two associate administrators. According to a USAID document, this change would allow the administrator to more effectively manage the complexity of USAID s work and reduce the number of entities directly reporting to the administrator from 27 to 11. One of the new associate administrators would manage USAID s relief, response and resilience functions, and the other would manage the agency s strategy, management, and operations. The congressional committees had not approved all of these proposals as of June 2019, according to USAID. As of June 2019, according to USAID, the congressional committees had approved five of the seven reorganized bureaus proposed by USAID: the Bureau for Humanitarian Assistance; the Bureau for Resilience and Food Security; the Bureau for Conflict Prevention and Stabilization; the Bureau for Development, Democracy, and Innovation; and the Bureau for Asia. Two other proposed bureaus had not yet received approval from all of the committees: the Bureau for Management and the Bureau for Policy, Resources, and Performance. Figure 2 shows USAID s proposed changes to its headquarters organizational structure. According to USAID documents, reorganizing these bureaus is in part intended to reduce fragmentation, overlap, and duplication, as well as to make the agency more functionally aligned and field-focused. 
For example, USAID states that the Bureau for Humanitarian Assistance will reduce duplication and fragmentation by unifying humanitarian assistance and eliminating the distinction between food and non-food emergency response, eliminating confusion in the field, and providing beneficiaries and partners with one cohesive USAID platform and voice on humanitarian assistance. As another example, USAID states that the Bureau for Policy, Resources, and Performance would consolidate USAID s policy, budget, and performance functions, which are currently divided among five bureaus and offices. <3.1.5. Addressing High Risk Areas and Long-Standing Management Challenges> USAID s reform efforts address several high risk and long-standing management challenges, including a project to specifically address external audit findings and implement auditors recommendations. Our prior work noted that reforms improving the effectiveness and responsiveness of the federal government often require addressing long- standing weaknesses in how some federal programs and agencies operate. For example, agency reforms provide an opportunity to address the high risk areas and government-wide challenges that we have called attention to and that are vulnerable to fraud, waste, abuse, and mismanagement, or are in need of transformation. USAID has undertaken multiple projects to address high risk areas and long-standing challenges. USAID T3 s Addressing the Audit Backlog project was specifically designed to review, enhance, and revise USAID s management of audit engagements and recommendations by eliminating the agency s backlog of unresolved audit recommendations, developing and implementing practices that would strengthen current programs, and reducing the potential for a future backlog. In this way, USAID intends to save taxpayer dollars by preventing and responding to fraud, mismanagement, wasteful practices, and other challenges identified in the audits. USAID reported that it had eliminated the backlog of unresolved audit recommendations as of May 2018. As of early April 2019, USAID had implemented 75 of GAO s 86 recommendations from fiscal years 2015 through 2018. In addition, several other reform projects address high risk areas and long-standing management challenges identified by the USAID Office of Inspector General (OIG). For example, USAID s Working in Non- Permissive Environments project addresses challenges USAID faces working in insecure, inaccessible, or unstable environments. USAID OIG identified developing strategies to work effectively in non-permissive and contingency environments, as one of the five top management challenges for USAID in fiscal year 2017. <3.1.6. Leadership Focus and Attention> USAID s leadership has demonstrated focus on and attention to the planning and conduct of USAID s reform efforts. Our prior work shows that a dedicated team of high-performing leaders within the agency should lead organizational transformations, such as agency reforms. USAID has demonstrated leadership at various levels to manage and guide the agency s reform efforts. For example, USAID s Administrator first outlined his vision of USAID s mission as being focused on ending the need for foreign assistance in August 2017, and USAID s reform efforts are aimed at operationalizing the Administrator s vision to end the need for foreign assistance. 
USAID s Administrator has had visible and continuous involvement in USAID s reform efforts, including through informing various congressional committees, on multiple occasions, of ongoing developments with USAID s reform process. USAID has designated leaders who are responsible for the day-to-day management of USAID s reform efforts. In June 2017, USAID s Acting Administrator established the Transformation Task Team (T3) to lead the agency s response to Executive Order 13781 and the subsequent guidance from OMB. T3 is led by a Coordinator who concurrently serves as the Assistant to the Administrator in USAID s Bureau for Policy, Planning, and Learning. The Coordinator told us that he meets with the USAID Administrator on a regular basis to report the status of USAID s projects. T3 also includes seven deputy coordinators who are accountable for the progress of all of the projects within a desired outcome as well as 24 project managers who lead project implementation. The T3 Coordinator indicated that the size of his team will decrease over time as it hands over management of USAID s reform projects to bureau-level leaders. USAID also assigned Senior Leader Champions to each of its reform projects. The champions provide strategic guidance and act as the representational face and voice of the project to Congress and the agency. Further, USAID also established a Transformation Advisory Council made up of senior leaders of USAID who have provided strategic guidance to USAID s reform efforts since October 2017. The council is chaired by the T3 Coordinator and made up of Senior Leader Champions, mission director liaisons, T3 leadership, and other standing members. The Transformation Advisory Council meets to discuss the progress of reform projects, ensure cross-project coordination, and to resolve any duplication or dependencies. <3.1.7. Managing and Monitoring> USAID has developed and maintained a system for managing and monitoring its reform process. We have previously reported that organizational transformations must be carefully and closely managed by developing an implementation plan with key milestones and deliverables to track and communicate implementation progress, among other actions. In May 2018, USAID T3 issued a task order for a contractor to help ensure that USAID has the capacity to manage the planning and implementation of USAID s reform efforts. The contractor is responsible for providing project and performance management support. Such support included tracking USAID s reform projects, providing summaries and executive reports on the progress of USAID reform projects, and also knowledge management, including the retention of key documents and information related to project and performance management. The contractor established a data tracking system that contains project end dates and deliverables to track the progress of reform implementation. The system notes which projects are on schedule, delayed, or complete. The contractor has also generated periodic executive reports that outline next steps for implementation reform and provide updates organized by USAID s five reform objectives. USAID T3 has developed guidance for transferring responsibility for project implementation to the appropriate bureaus and offices. The guidance details who in the bureau will be responsible and accountable for the project, resources that will be needed to initiate and complete handover of the project, and the future end state of the projects, among other items. 
As of July 2019, USAID had completed bureau handover plans for 24 T3 reform projects. USAID has demonstrated transparency over its reform efforts through publicizing reform-related information on its website, including fact sheets on its projects. USAID has also publicly released several of its reform deliverables. For example, USAID made its Journey to Self-Reliance portal available on its external website. Through the portal, viewers have access to USAID s Fiscal Year 2019 Country Roadmaps and can download a wide range of supporting resources on the Journey to Self- Reliance effort and the methodology that underpins this effort. <3.1.8. Employee Engagement and Employee Performance Management> USAID s reform efforts generally addressed two interrelated subcategories of strategic workforce planning by instituting policies to manage employee engagement and to improve employee performance management. These policy initiatives were part of USAID s broader effort to create a human resource services system that, according to USAID documents, will support a modern workforce in carrying out USAID s mission. Our prior work has found that increased levels of employee engagement generally defined as the sense of purpose and commitment employees feel toward their employer and its mission can lead to better organizational performance and can sustain or increase levels of employee engagement and morale, even as employees weather reorganizations and other difficult external circumstances. Our prior work also found that performance management systems which are used to plan work and set individual employee performance expectations, monitor performance, develop capacities to perform and to rate and incentivize individual performance can help the organization manage employees on a daily basis and provide supervisors and employees with the tools they need to improve performance. USAID developed and began implementing its Human Resources Transformation project prior to the start of the current reform effort led by T3. This project includes objectives and initiatives to both promote employee engagement issues and establish a performance management system during the 5-year transformation. USAID created a project management office to plan and carry out between three and five initiatives associated with each of the Human Resources Transformation project s objectives and a performance monitoring plan to track the progress of each initiative. As noted in figure 3, the three Human Resources Transformation objectives and the associated intermediate results called for by the project address both employee engagement and employee performance management issues. For example, Transformation Objective 3, Agency Culture and Workplace Enhanced, promotes employee engagement by calling for an agency workplace enhanced by a stronger focus on the culture of accountability with a workforce reflecting the diversity of America s population. The project is also using Federal Employee Viewpoint Survey (FEVS) data to periodically gauge employees feedback and level of engagement on the reform efforts. 
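A minimal sketch of how such periodic FEVS tracking could be summarized appears below; the question labels and response rates are hypothetical placeholders, and only the general idea of measuring movement from a survey baseline toward a target (described in the monitoring plan discussed next) is drawn from the report.

```python
# Hypothetical FEVS tracking against a baseline and a target; question labels and
# response rates are illustrative, not USAID's actual survey results.
baseline_2016 = {"HR service quality": 0.10, "HR timeliness": 0.18, "HR communication": 0.26}
latest_survey = {"HR service quality": 0.22, "HR timeliness": 0.31, "HR communication": 0.40}
target_2021 = 0.74

def share_of_gap_closed(current: float, baseline: float, target: float) -> float:
    """Fraction of the baseline-to-target gap closed so far (0.0 = no progress, 1.0 = target met)."""
    return (current - baseline) / (target - baseline)

for question, baseline in baseline_2016.items():
    current = latest_survey[question]
    gain = (current - baseline) * 100
    progress = share_of_gap_closed(current, baseline, target_2021)
    print(f"{question}: +{gain:.0f} percentage points, {progress:.0%} of the way to the 2021 target")
```

Summarizing survey results this way yields an outcome-oriented view of whether engagement is actually improving, rather than a count of completed deliverables.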
Moreover, USAID noted in its April 2019 Human Resources Transformation performance monitoring plan that USAID intends to measure the effectiveness of its efforts to improve employee engagement by assessing the extent to which those efforts increase employees positive response rates to human resources service- and delivery-related questions over the generally low baseline rates set by the FEVS 2016 survey response (ranging from 10 percent to 26 percent positive response rates). The monitoring plan noted that USAID expects to increase the positive response rates to these questions on the FEVS to upwards of 74 percent by 2021. Furthermore, one of the intermediate results associated with Transformation Objective 2, Agency Workforce Prepared for Today and the Future, includes an effort to establish and uphold a performance management system in areas such as provision of feedback, professional development, and career advancement. T3 also initiated six projects associated with its Empower People to Lead objective that incorporate some of the Human Resources Transformation project efforts to improve employee engagement and implement a performance management system. For example, T3 s project on Managing Human Capital Talent is developing new automated tools to transition the paper-based Foreign Service and Civil Service performance management and evaluation processes into online evaluation systems administered electronically. As of July 2019, these tools include an automated Foreign Service assignment tool and a Civil Service performance management system and automated tool. However, USAID delayed its expected completion date for these Foreign Service and Civil Service tools from the end of December 2018 to March 2019 and August 2019, respectively. Further, T3 s Leveraging Foreign Service National Talent project expects changes in job satisfaction- related survey scores, over time, will help USAID measure the success of a reform project aimed at empowering the agency s Foreign Service Nationals workforce. <3.2. USAID Partially Addressed Two Key Reform Practices> <3.2.1. USAID Established Goals but Generally Did Not Establish Outcome-Oriented Performance Measures to Gauge the Effectiveness of Efforts> Our prior work indicates that agency reforms should clearly identify what an agency is trying to achieve by establishing outcome-oriented performance measures that enable the agency to assess the extent to which projects are achieving progress toward reform goals. Moreover, T3 guidance states that, as responsibilities for project implementation are transferred to bureau- and office-level units, project-level managers should develop performance indicators to measure progress. While USAID has established high-level goals associated with its reform efforts, such as ending the need for foreign assistance, it has established outcome-oriented performance measures for only four of its reform efforts. Table 4 below provides examples of outcome-oriented performance measures for those four reform projects. USAID has not established outcome-oriented performance measures that would enable it to gauge the effectiveness of the remaining reform efforts. For example, USAID s five reform objectives (1) Journey to Self-Reliance, (2) Strengthen Core Capabilities, (3) Advance National Security, (4) Empower People to Lead, and (5) Respect Taxpayer Investments are not tied to outcome-oriented performance measures. 
In explaining why they had not developed outcome-oriented performance measures for all projects, USAID T3 officials indicated that thus far they have focused their efforts on establishing outputs (e.g., products and services) for the reform projects. Establishing outcome-oriented performance measures for its reform projects would enhance USAID s ability to assess the effectiveness of its reform efforts. <3.2.2. USAID Is Developing a Strategic Workforce Plan but Lacks the Planning Tools to Justify How Work Force Adjustments Will Help Achieve Its Objectives> USAID documents and officials demonstrate that the agency is developing an agency-wide strategic workforce plan in support of its ongoing reform efforts, but the plan and its associated workforce planning tools were not ready to implement as of July 2019. Strategic workforce planning is an essential activity that an agency needs to conduct to ensure that its human capital program aligns with its current and emerging mission and programmatic goals, and that the agency is able to meet its future needs. Our prior work also indicates the importance of preceding any staff realignments or downsizing with strategic workforce planning so that changed staff levels do not inadvertently result in skills gaps or other adverse effects that could increase use of overtime and contracting. USAID has taken a number of steps since 2017 to develop an agency- wide strategic workforce plan both prior to and during the current reform effort, including developing staff realignment plans as part of its process for standing up the proposed new bureau structures. However, USAID has not yet developed or implemented the data collection and measurement tools that it has identified as necessary to gauge current workforce capabilities, assess staffing needs arising from the proposed reorganization, and identify ways to close gaps arising from changes in workforce requirements. USAID documents note that such tools could allow USAID to achieve its goal of hiring the right talent, at the right time, for the right duration. USAID is using both the Human Resource Transformation project and two of T3 s projects to develop a strategic workforce plan and associated tools: USAID developed and began implementing the Human Resources Transformation project prior to the start of the current reform effort with the expectation that by 2020 the agency would have the organizational structure and workforce characteristics that support achievement of USAID s mission. This new structure would include an optimally sized workforce with an effective mix of all USAID employee types created through the use of a new workforce planning model. Project documents note, however, that developing this planning model in turn would require developing a Workforce Planning Tool to define workforce baselines and existing assets, identify future workforce needs, assess gaps, and build capacity where needed. In June 2016, USAID s 2016 2021 Human Resource Transformation Strategy and Action Plan stated that developing this model would be difficult but nevertheless estimated that implementing this effort would require no more than 2 years. However, USAID officials noted that the Human Resources Transformation efforts did not fully begin until 2018. T3 is implementing two projects associated with its objective titled Empower People to Lead. 
First, T3 s Manage Human Capital Talent project instituted an Employee Portal to provide all direct-hire employees access to their human resources data in one centralized online location. According to USAID documents, this project is also developing for management an automated assignment, performance management, and workforce planning tools, including separate automated planning, performance, and assignment tools for its Civil Service and Foreign Service personnel. The agency originally intended to implement these tools by the end of calendar year 2018. USAID s April 2019 performance monitoring plan indicates that the tools particularly the workforce planning model that USAID describes as a human-capital data analytics system to automate various standardized and ad hoc reports and access previously unconnected personnel data sources will not be available before the end of fiscal year 2019. Second, T3 s Workforce Flexibility and Mobility project is focused on implementing a demonstration project, the Adaptive Personnel Project, to replace non-career, program- funded positions with an excepted-service management system. The Adaptive Personnel Project is to be launched as a pilot project in two USAID bureaus in fiscal year 2020. As of April 2019, USAID documents and USAID and employee union officials noted that the strategic workforce plan has not yet been completed. Moreover, the April 2019 Human Resources Performance Monitoring Plan notes that the workforce planning tool needed to gauge current capabilities and close gaps is not yet deployed and in use due to competing programmatic and budgetary priorities. In addition, USAID s T3 project data tracking system indicates that the agency has delayed the implementation of the projects needed to establish baselines and create pilot projects until late 2019 or later in order to focus on broader strategic workforce planning objectives, such as the Strategic Workforce plan and Adaptive Personnel Project. The lack of a strategic workforce plan may limit USAID s efforts to estimate how its proposed reorganization will affect future staffing needs. For example, USAID officials indicated in 2018 that the proposed reorganization of its headquarters bureaus was intended to be staff neutral. Its congressional notification pertaining to this reorganization projected no net increase in its total combined headquarters workforce level of 3,262 employees. Nevertheless, in its Fiscal Year 2020 Congressional Budget Justification, USAID identified a need for 40 additional Civil Service positions to refocus Washington bureaus and offices toward being effective service providers to the field consistent with the vision of ending the need of foreign assistance. USAID requested $7.2 million to fund those positions in the restructured bureaus. Without a strategic workforce plan, USAID cannot determine whether its current or planned workforce requirements align with its reform and reorganization objectives. <4. Conclusions> USAID is entrusted with managing billions of dollars in foreign assistance funding, and USAID leadership recognizes that reforming its internal operations and programming is integral to achieving its mission. In developing and implementing its reform efforts, USAID addressed many key practices that are critical to ensuring a successful agency reform or reorganization, such as using data and evidence and providing leadership focus and attention. Specifically, USAID s reform efforts generally addressed nine of the 11 key practices we assessed. 
However, taking additional steps in two areas could further improve its reform efforts. First, while it established goals and desired outcomes for its reform efforts, it has not yet generally established outcome-oriented performance measures necessary to assess the effectiveness and success of these efforts. Second, while USAID has been developing a strategic workforce plan since 2017, it has yet to complete this plan, which includes developing the associated workforce planning tools to identify the staff needed to meet existing and emergent program demands associated with its transformation goals. Addressing these gaps could help USAID better position itself to make long-term and sustainable improvements in its efficiency and effectiveness. <5. Recommendations for Executive Action> We are making the following two recommendations to USAID: The Administrator of USAID should establish outcome-oriented performance measures to assess the effectiveness of USAID s reform projects. (Recommendation 1) The Administrator of USAID should ensure that the agency completes a strategic workforce plan necessary to support its reform efforts. (Recommendation 2) <6. Agency Comments> We provided a draft of this report to USAID, State, and OMB for review and comment. We received comments from USAID, which are reprinted in appendix IV. USAID concurred with our recommendations. We also received technical comments from USAID and State, which we incorporated in our report as appropriate. We are sending copies of this report to the appropriate congressional committees, the Administrator of USAID, the Secretary of State, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6881 or BairJ@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology We performed our work under the authority of the Comptroller General to conduct work to assist Congress with its oversight responsibilities. This report (1) examines the status of the U.S. Agency for International Development s (USAID) reform efforts and (2) assesses the extent to which USAID has addressed key practices and considerations critical to the successful planning and implementation of agency reform efforts. The scope of our review was limited to USAID s internal reform efforts and did not include government-wide or interagency reform proposals, such as those referenced in the Office of Management and Budget s Delivering Government Solutions in the 21st Century report. For both objectives, we reviewed USAID s reform plans, proposals, and related documents and interviewed officials involved in USAID s reform efforts. We interviewed USAID officials on the USAID Transformation Task Team, including the task team Coordinator and Deputy Coordinators. We also interviewed USAID representatives from two USAID employee unions: the American Federation of Government Employees and the American Foreign Service Association. In addition, we interviewed officials from the Department of State and the Office of Management and Budget. To determine the status of USAID s reform efforts, we also reviewed USAID reform plans, reports, briefings, and project factsheets. 
We also interviewed USAID officials responsible for the planning and implementation of the agency s reform projects. To determine the total number of USAID reform projects, we included all USAID reform projects identified by USAID as of July 2019. To provide the estimated costs associated with USAID s reform efforts for contextual purposes, we obtained data from USAID on the costs of: 1) developing T3 reform efforts, including T3 s operational costs, 2) implementing T3 reform efforts, and 3) its Human Resource Transformation project contract data. We reviewed supporting documentation, and interviewed cognizant USAID officials about the completeness and accuracy of the data. We did not independently assess the data used to estimate the costs associated with its reform efforts. We determined it was beyond the scope of this review to perform a full cost-benefit analysis to assess the potential financial impact of USAID s reform efforts using the cost estimates provided by USAID. To determine the extent to which USAID has addressed key practices for planning and implementing its reform efforts, we assessed USAID s reform efforts against key practices identified in our June 2018 report, which are organized by 12 subcategories of change management practices. The subcategories are based on 58 key questions for consideration in assessing reform efforts. We did not apply criteria from the Workforce Reduction Strategies subcategory of our June 2018 report. We deemed those criteria not applicable to USAID s reform efforts because USAID officials stated their proposals regarding workforce reductions were overtaken by events when congressional appropriations for fiscal years 2018 and 2019 maintained USAID staffing at the levels associated with its workforce as of December 2017. For the other 11 subcategories included in our assessment, we determined which key questions of each subcategory were most relevant USAID s reform efforts and applied those key questions to our assessment. We categorized USAID reform-related actions into two separate categories: (1) those that generally addressed the subcategory and (2) actions that partially addressed the subcategory. We determined that USAID s reform efforts had generally addressed a practice if we did not identify significant gaps in its coverage of the actions associated with this subcategory. We determined that USAID s reform efforts had partially addressed a practice if we identified significant gaps in its coverage of the actions associated with this subcategory. We would have determined that USAID had not addressed a practice if it had not substantively addressed any of the key elements in the subcategory. However, we found that USAID at least partially addressed all of the practices. We defined significant gaps as the areas we identified, based on our analysis of the key questions of each subcategory, that were both relevant to USAID as an agency and important for the success of the reform efforts. Each of two analysts made an independent qualitative judgment as to whether or not USAID had generally, partially, or had not addressed those criteria. The two analysts then reviewed and reconciled any differences in the data used to reach each determination, and their results were subject to supervisory review. The analysts determinations were then reviewed by other GAO stakeholders with experience in this topic, and any concerns raised were resolved through discussion to reach the final determinations. 
We conducted this performance audit from February 2018 to September 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Key Questions for Assessing Agency Reform Efforts We developed key questions based on our prior work on key practices that can help assess agency reform efforts. The 58 questions are organized into four broad categories and 12 subcategories, as shown in table 5. Appendix III: U.S. Agency for International Development (USAID) Headquarters Structure before Implementation of Proposed Organizational Reforms As of June 2019, the U.S. Agency for International Development (USAID) headquarters was organized as shown in figure 4. Appendix IV: Comments from the U.S. Agency for International Development Appendix V: GAO Contact and Staff Acknowledgments <7. GAO Contact> <8. Staff Acknowledgments> In addition to the contact named above, Thomas Costa (Assistant Director), B. Patrick Hickey (Analyst in Charge), Joshua Akery, Peter Beck, David Dayton, Martin de Alteriis, Emily Gupta, Christopher Keblitis, Steven Putansu, Sarah Veale, and Alexander Welsh made key contributions to this report. | Why GAO Did This Study
In March 2017, the President issued an executive order to federal agencies intended to improve the efficiency, effectiveness, and accountability of the executive branch. The order required the Director of the Office of Management and Budget (OMB) to develop a plan to reorganize and streamline the government. In April 2017, OMB issued additional guidance to agencies on implementing the order. In response, USAID launched several efforts to reform its organizational structure, workforce, programs, and processes with the ultimate goal of ending the need for foreign assistance by helping partner countries become more self-reliant. GAO's prior work has shown that successful agency reforms depend on following key practices for organizational transformation, such as establishing goals and outcomes and involving key stakeholders.
This report examines (1) the status of USAID's reform efforts and (2) the extent to which USAID has addressed key practices in planning and implementing those efforts. GAO reviewed USAID reform plans, proposals, and related documents and met with officials involved in its reform efforts. GAO also assessed USAID's planning and implementation of its reform efforts against 11 key practices identified in GAO's June 2018 report, Government Reorganization: Key Questions to Assess Agency Reform Efforts (GAO-18-427).
What GAO Found
The reform efforts of the U.S. Agency for International Development (USAID) consist of a total of 32 reform projects—31 projects being implemented by USAID's Transformation Task Team and an additional Human Resources Transformation project that predates the 31 projects. As of July 2019, USAID has completed 19 reform projects and is implementing 12 additional projects, which it intends to complete by mid-2021. The task team has one additional project in the planning phase.
In planning and implementing these efforts, USAID has generally addressed nine of 11 key practices for organizational transformation and partially addressed two. For example, USAID generally addressed the key practice of involving employees and key stakeholders, such as the Department of State and Congress, through a variety of mechanisms, including briefings and town halls. USAID also used data and evidence to guide its reform efforts by integrating employee and external input into its reform plans. Moreover, USAID addressed fragmentation, overlap, and duplication by planning a restructuring effort to streamline operations and achieve efficiencies. Further, it generally addressed leadership focus and attention by designating a reform coordinator and establishing a dedicated team responsible for managing and planning USAID's reform efforts.
However, while USAID established goals for its reform efforts, it established outcome-oriented performance measures for only four of its 32 projects. Establishing such measures would improve its ability to assess the results of the changes it is making. In addition, while USAID is developing a strategic workforce plan, it has yet to develop the tools needed to identify and meet staffing needs arising from the reforms in order to fully assess its workforce. Completing a strategic workforce plan with these tools could help USAID ensure it has the workforce needed to meet existing and emergent program demands. Addressing these gaps could help USAID make long-term improvements in its efficiency and effectiveness.
What GAO Recommends
USAID should (1) establish outcome-oriented performance measures to assess the effectiveness of its reform efforts and (2) complete a strategic workforce plan necessary to support its reform efforts. USAID concurred with the recommendations.
gao_GAO-19-679T | gao_GAO-19-679T_0 | <1. Background> VA s mission is to promote the health, welfare, and dignity of all veterans in recognition of their service to the nation by ensuring that they receive medical care, benefits, social support, and lasting memorials. In carrying out this mission, the department manages one of the largest health care delivery systems in the United States that provides enrolled veterans with a full range of services. These services may include primary care; mental health care; and outpatient, inpatient, and residential treatment. The Veterans Health Administration (VHA), one of the department s three major components, is responsible for overseeing the provision of health care at all VA medical facilities. Information technology (IT) is widely used and critically important to supporting the department in delivering health care to veterans. As such, VA operates and maintains an IT infrastructure that is intended to provide the backbone necessary to meet the day-to-day operational needs of its medical centers and other critical systems supporting the department s mission. The infrastructure is to provide for data storage, transmission, and communications requirements necessary to ensure the delivery of reliable, available, and responsive support to all VA staff offices and administration customers, as well as veterans. The Office of Information and Technology (OIT) is responsible for managing the majority of VA s IT- related functions. The office provides strategy and technical direction, guidance, and policy related to how IT resources are to be acquired and managed for the department. <1.1. VistA s Role at VA> VA provides health care services to approximately 9 million veterans and their families and relies on its health information system VistA to do so. VistA has been essential to the department s ability to deliver health care to veterans. It was developed based on the collaboration between staff in the VA medical facilities and VHA IT personnel. Specifically, clinicians and IT personnel at the various VA medical facilities collaborated to define the system s requirements and, in certain cases, carried out its development and implementation. As a result of these efforts, the system has been in operation since the early 1980s. VistA supports a complex set of clinical and administrative capabilities. It is comprised of an architecture that ties together servers and personal computer workstations with various applications within VA facilities and the supporting infrastructure, such as data centers, storage, and messaging technologies. The core system and database code are programmed in the MUMPS programming language. Among other things, VistA contains an EHR for each patient and supports clinics and medical centers. In addition, the system provides functionality beyond the EHR and exchanges information with many other applications and interfaces. For example, the system also provides the functionality of a time and attendance program, asset management system, library, and billing system, among other things. Users interact with VistA through a number of interfaces that connect stored health data. These interfaces enable the system to communicate (send or exchange data) with other VA systems, as well as with other federal agencies (e.g., DOD), health information exchange networks, and COTS products. 
According to OIT officials, applications either interface with VistA directly through a messaging protocol or extract data from the system via a reporting mechanism. The Computerized Patient Record System is a graphical user interface to VistA that runs on workstations, laptops, and tablets and enables the department to support clinical workflows. Specifically, the Computerized Patient Record System enables the department to create and update an individual EHR for each VA patient. Among other things, clinicians can order lab tests, medications, diets, radiology tests, and procedures; record a patient s allergies or adverse reactions to medications; request and track consults; enter progress notes, diagnoses, and treatments for each encounter; and enter discharge summaries. According to VHA officials, there are also more than 100 COTS products that interface with VistA. In addition to these commercial products, medical equipment or devices at local facilities may also require interfaces to the system, and these vary on a site-by-site basis. <1.2. VA Has about 130 Different Versions of VistA> Over the last several decades, VistA has evolved into a technically complex system that supports health care delivery at more than 1,500 locations, including VA Medical Centers, outpatient clinics, community living centers, and VA vet centers. Customization of the system by local facilities has resulted in about 130 clinical versions of VistA referred to as instances. According to the department, no two VistA instances are identical. Further, each instance is comprised of over 27,400 routines (executable modules of code), which are logically grouped into products or modules. VistA products or modules can also be comprised of one or more software applications that support health care functions, such as providing care coordination and mental health services. The department reported that there are approximately 140 to 200 products or modules that comprise the system. The 130 clinical instances of VistA are operated from four regional VA data centers. Users interact with the system through the Computerized Patient Record System. Aggregated clinical data from every instance of the system are located on servers hosted at VA s National Data Center. Over time, VA has identified the need for enhancements and modifications to VistA in order to ensure that the system keeps up with current technology and health care delivery. However, according to the department, the system has become difficult and costly to maintain. This is a result of, for example, being programmed in MUMPS, a language for which there is a dwindling supply of qualified software developers. It is also due to years of decentralized customization of the system by staff members who were permitted to develop and implement applications at the local level. <1.3. OIT and VHA Share Responsibilities for VistA> OIT and VHA serve as the technical and functional leaders, respectively, for the department s health care delivery and, together, they have worked to develop and maintain VistA for decades. Specifically, OIT is responsible for managing the majority of VA s IT-related functions. The office provides strategy and technical direction, guidance, and policy related to how IT resources are to be acquired and managed for the department. According to the department, OIT s mission is to collaborate with its business partners (such as VHA) and provide a seamless, unified veteran experience through the delivery of state-of-the-art technology. 
The Assistant Secretary for Information and Technology/Chief Information Officer (CIO) serves as the head of OIT and is responsible for providing leadership for the department s IT activities. The CIO also advises the Secretary regarding the execution of VA s IT systems appropriation, consistent with the Federal Information Technology Acquisition Reform Act. For fiscal year 2019, the department has been appropriated $4.1 billion for IT. According to VA s budget documentation, about $1.2 billion of this amount is intended to support IT staffing and associated costs for approximately 8,100 full-time employees. VHA provides information and expertise to OIT to support the department s health-related information systems. For example, VHA officials help identify clinical and business needs used to inform IT requirements development. The Under Secretary for Health is the head of VHA and is supported by the Principal Deputy Under Secretary for Health, four Deputy Under Secretaries for Health, and nine Assistant Deputy Under Secretaries for Health. <1.4. VA Has Begun to Acquire a New EHR System> After nearly 2 decades of pursuing multiple efforts to modernize VistA, in June 2017, the former VA Secretary announced that the department planned to acquire the same EHR system that DOD is acquiring Cerner Millennium. According to the department, it has chosen to acquire this product because Cerner Millennium should allow VA s and DOD s patient data to reside in one system, thus, potentially reducing or eliminating the need for manual and electronic exchange and reconciliation of data between two separate systems. Accordingly, the department awarded an indefinite delivery, indefinite quantity contract to Cerner Corporation in May 2018 for a maximum amount of $10 billion over 10 years. Cerner is to replace the 130 instances of VistA with a standard COTS system to be implemented across VA. This new system is to support a broad range of health care functions including acute care, clinical decision support, dental care, and emergency medicine. When implemented, the new system will be expected to become the authoritative source of clinical data to support improved health, patient safety, and quality of care provided by VA. The Electronic Health Record Modernization (EHRM) program is responsible for managing the Cerner contract implementation. For fiscal year 2019, the program was appropriated about $1.1 billion for planning and managing the transition from VistA to Cerner. Further, the department has estimated that an additional $6.1 billion in funding, above the Cerner contract amount, will be needed to fund additional project management support supplied by outside contractors, government labor costs, and infrastructure improvements over the 10- year contract period. VA plans to deploy the new EHR system at three initial operating capability sites within 18 months of October 1, 2018, with a phased implementation of the remaining sites over the next decade. Each VA medical facility is expected to continue using VistA until the new system has been deployed. The three initial deployment sites, located in the Pacific Northwest, are the Mann-Grandstaff, American Lake, and Seattle VA Medical Centers and related clinical facilities that operate the same instances of VistA. These are the first locations where the system is expected to go live. 
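For context, the budget figures above can be combined into a rough order-of-magnitude tally. The short sketch below, in Python, is illustrative only: the variable names are ours, the amounts are the approximations reported in this statement, and it assumes the $10 billion contract ceiling and the separately estimated $6.1 billion in additional funding are additive over the 10-year period. It is not a VA or GAO cost estimate.

    # Illustrative tally of reported figures, in billions of dollars.
    cerner_contract_ceiling = 10.0       # maximum value of the Cerner contract over 10 years
    estimated_additional_funding = 6.1   # VA estimate for project management support, government labor, and infrastructure
    print(cerner_contract_ceiling + estimated_additional_funding)  # about 16.1 billion over the contract period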
The task order to deploy the Cerner system at the three initial sites provides a detailed description of the steps Cerner needs to take in order to reach initial operating capability at the Mann-Grandstaff site in March 2020, and at the Seattle and American Lake sites in April 2020. According to the schedule, the initial operating capability sites are expected to be operational by July 2020. <2. VA Has Undertaken Efforts to Define VistA, but Additional Work Remains> In order to maintain internal control activities over an IT system and its related infrastructure, organizations should be able to define physical and performance characteristics of the system, including descriptions of the components and the interfaces. Further, consistent with GAO s Cost Estimating and Assessment Guide, a comprehensive system definition should identify customization and the environment in which the system operates. While defining a complex IT system can be challenging, having an adequate understanding of its characteristics will better position the organization to comprehensively project and account for costs over the life of a system or program as well as identify specific technical and program risks. Definition of VistA remains important because VA plans to continue using the system during the department s decade-long transition to the Cerner system. VA maintains multiple documents and a database that describe parts of VistA, including various components and interfaces. However, despite these existing sources, OIT officials acknowledged that there is no comprehensive definition of the VistA system. Consequently, VA has completed a number of efforts to better define VistA and understand the environment in which it operates and additional work is planned in the future. Specifically, VA has documented descriptions of the system, including the components that comprise it. These descriptions are documented in multiple sources: the VA Monograph, VA Systems Inventory, and VA Document Library. The VA Monograph is a document maintained by OIT that provides an overview of VistA and non-VistA applications used by VHA. According to VHA officials, the VA Monograph is the primary document that describes the components of the system. The Monograph describes VistA in terms of modules. For modules identified, including VistA modules, information such as the associated business functions, VA Systems Inventory identification number, and a link to the VA Document Library for additional technical information are provided. The VA Systems Inventory is a database maintained by OIT that identifies current IT systems at the department, including systems and interfaces related to VistA. For systems identified, the database includes information such as the system name, the system status (i.e., active, in development, or inactive), and related system interfaces. The VA Document Library is an online resource for accessing documentation (i.e., user guides and installation manuals) on the department s nationally released software applications, including VistA. VA has taken additional steps to further define the system. For example, EHRM program officials recognized the need to further understand the customization of VistA components at the various medical facilities and have conducted analyses to do so. 
These analyses include: Variance analysis: As part of its VistA Evolution program, which has focused on standardizing a core set of VistA functionality, the department implemented a process to compare the instances of VistA installed at sites to the Enterprise Standard version. The results of this analysis allowed the department to assess the criticality of each variance, which is expected to help with VA s transition to the Cerner system. Module analysis: EHRM program subject matter experts undertook an analysis that involved reviewing and assessing capabilities provided by VistA modules. This analysis enabled department officials to determine whether the capability provided by a VistA module could be provided by the Cerner system, or whether another COTS solution would be required to support this function going forward. Visual mapping: EHRM program officials also directed an analysis that involved developing a notional visual mapping of VA s health care applications, components, and supporting systems within the health delivery environment. The results of this analysis provided a description of the current state of one instance of VistA and the VA health environment, which is intended to inform the department of possible opportunities for business process and IT improvements as it proceeds with the Cerner acquisition. Nevertheless, even with these analyses, VA has not yet fully defined VistA, including, for example, identifying performance characteristics of the system and describing the environment in which it operates. The department s three sources that describe VistA and the additional analyses undertaken do not provide insight into site specific customizations of the system. For example, the VA Monograph does not include information on module customization at local facilities. In addition, according to OIT officials, the systems inventory does not reflect differences among the 130 different instances of VistA and does not take into consideration regional and local customizations of related components. Further, the visual mapping analysis noted that there was not full insight of the intertwined structure of data and applications or the various local customizations of VistA. EHRM program officials stated that they have not been able to fully define VistA and understand all local customizations due to the decentralization of the development of the system and its evolution over more than 30 years. They explained that VistA s complexity is partly due to the various instances of the system, compounded by local customizations, which have resulted in differences in VistA instances operating at various facilities. According to EHRM program documentation, Cerner s contract calls for the company to conduct comprehensive assessments to capture the current state of technical and clinical operations at specific facilities, as well as identify site-specific requirements where the Cerner system is planned to be deployed. As of June 2019, Cerner had completed site assessments for the three initial operating capability sites in the Pacific Northwest and had planned additional assessments at future deployment sites. The initial site assessments included, among other things, an assessment of the unique VistA instances and the environment in which the system operates. The continuation of planned site assessments should provide a thorough understanding of the 130 VistA versions, help the department better define VistA, and position it for transitioning from VistA to Cerner s COTS solution. <3. 
VA Identified Total VistA Costs of about $2.3 Billion between 2015 and 2017, but Could Not Sufficiently Demonstrate the Reliability of All Data and Omitted Other Costs> When using public funds, an agency must employ effective management practices in order to let legislators, management, and the public know the costs of programs and whether they are achieving their goals. To make those evaluations for a program or for a system as large and complex as VistA, a complete understanding of the system and reliable cost information is required. By following a methodology and utilizing reliable data, an agency can ensure that all costs are fully accounted for, which in turn, better informs management decisions, establishes a cost baseline, and enhances understanding of a system s performance and return on investment. Fundamental characteristics of reliable costs are that they should be accurate (unbiased, not overly conservative or optimistic), well- documented (supportable with source data, clearly detailed calculations, and explanations for choosing a particular calculation method), credible (identifying any uncertainty or biases surrounding data or related assumptions), and comprehensive (costs are neither omitted nor double counted). Identification of VistA s costs remains important because VA plans to continue using the system during the department s transition to the Cerner system over the next decade. VA identified costs for VistA and its related activities adding up to approximately $913.7 million, $664.3 million, and $711.1 million in fiscal years 2015, 2016, and 2017, respectively for a total of about $2.3 billion over the 3 years. However, the department could not sufficiently demonstrate the reliability of certain costs that were identified. In addition, VA identified other categories of VistA-related costs, but omitted these costs from the total. <3.1. VA Did Not Sufficiently Demonstrate the Reliability of Data for All VistA Costs> Of the $2.3 billion total costs for VistA, VA demonstrated that only approximately $1 billion of these costs were reliable. Specifically, OIT officials identified VistA-related costs within seven categories. The officials were able to sufficiently explain why these categories were included in the development and sustainment costs for VistA and how they were documented by the department; the officials also presented detailed source data for our examination. As a result of our review, we determined that the cost data for these seven categories were accurate, well-documented, credible, and comprehensive and, thus, sufficiently reliable. Table 1 provides a summary of the program costs identified for VistA by OIT and VHA for fiscal years 2015 through 2017 that we determined to be reliable. As shown in the table, VA identified costs for the following seven categories for fiscal years 2015 through 2017: VistA Evolution The VistA Evolution program costs were associated with VistA strategy, system design, product development, and program management. These costs totaled approximately $549.6 million. Interoperability The Interoperability program focused on sharing electronic health data between VA and non-VA facilities, including private sector providers and DOD. For example, interoperability costs were associated with architecture, strategy, the Interagency Program Office, product development, and program management. These VistA-related costs totaled approximately $140.2 million. 
Virtual Lifetime Electronic Record (VLER) Health This program focused on streamlining the transition of electronic medical information between VA and DOD. These VistA-related costs were associated with product development and program management and totaled approximately $81.2 million. Contracts Contract costs for VistA Evolution included VHA s obligations associated with workload management, change management, clinical requirements, and clinical interoperability. These VistA-related costs totaled approximately $202.8 million. Intergovernmental personnel acts Intergovernmental personnel acts are agreements for the temporary assignment of personnel between the federal, state, and local governments; colleges and universities; Indian tribal governments; federally funded research and development centers; and other eligible organizations. These costs accounted for VHA s need to use outside experts from approved entities for limited periods of time to work on VistA Evolution assignments. The total VistA-related costs were approximately $2.4 million. Memorandums of understanding According to VHA, memorandums of understanding are agreements used by the administration to obtain the services of personnel between VA entities for VistA-related activities. These agreements accounted for approximately $2.3 million. Pay Costs in this category included salaries for VHA staff who worked on VistA-related projects as well as travel, training, and supply costs associated with employment. These costs totaled approximately $34.1 million. However, VA was not able to sufficiently demonstrate the reliability of approximately $1.3 billion in costs related to VistA. Specifically, OIT officials identified the additional legacy VistA costs that generally fell into three categories: Legacy VistA: Infrastructure, hosting, and system sustainment Legacy VistA costs are generally related to the maintenance of fully operational items, such as VistA Imaging and Fileman two key components related to VistA s operation. The costs also included obligations for costs related to hosting health data in both VA and non-VA facilities. The OIT officials and subject matter experts estimated these total costs to be approximately $343 million during fiscal years 2015 through 2017. However, we were not able to determine the reliability of these costs because, for example, source data were not well documented; changes in the cost information provided to us during our review indicated that the cost data may not be credible; and subject matter experts were unclear about how to separate VistA costs from non- VistA costs. Related software Related software costs are associated with the software supporting, or closely integrated with, VistA that were identified by EHRM officials, yet not tracked directly for one of the VistA-related programs. Both OIT and VHA identified software licensing costs as VistA-related obligations. The EHRM program reported these costs to be approximately $389 million in total during fiscal years 2015 through 2017. However, we were not able to determine the reliability of the costs in this category for a variety of reasons, including that source data were not well documented. In addition, VA officials were not clear regarding how the total amounts in each category should be divided between OIT and VHA. Given this confusion, we were not able to determine if the costs were fully accurate or credible. OIT personnel (pay and administrative) According to EHRM officials, OIT does not track labor costs by program. 
Instead, the department provided estimations of the amount of salaries paid to OIT government staff working on activities such as VistA Evolution, program management, and overall support of VistA and related applications. OIT personnel costs were estimated by the EHRM program office to be approximately $544 million total during fiscal years 2015 through 2017. However, we were not able to determine the reliability of costs in this category because assumptions made for estimating the personnel and salary costs were not well documented and could not be verified. <3.2. VA Omitted Certain Costs from the Total Cost of VistA> In addition, VA omitted certain VistA costs from the total costs identified for fiscal years 2015, 2016, and 2017. Specifically, VA omitted the following costs: Additional hosting OIT officials stated that additional costs related to hosting health data by an outside vendor, as well as hosting backup VistA instances at each of the medical center sites, should also be included in the total costs for VistA; however, VA omitted these costs from the total for fiscal years 2015 through 2017. Specifically, according to the officials, calculating costs for these hosting activities requires subject matter experts to identify equipment, space, utilities, and maintenance costs for resources allocated specifically for VistA. However, the department has not yet developed a methodology to calculate the costs. The officials said they were working on identifying a reliable approach for calculating these costs in the future. Data standardization and testing OIT officials stated that additional costs related to work on clinical terminology mapping and functional testing were not included in the total costs for VistA for fiscal years 2015 through 2017. This work related to mapping existing clinical data to national standards and making updates to VistA or the Joint Legacy Viewer and included mapping data and building test scripts and reports. OIT officials noted that this work had been critical to the VistA Evolution program, but they did not provide actual cost data in this category. The lack of sufficiently reliable and comprehensive costs indicates that the department is not positioned to accurately report the annual costs to develop and sustain VistA. This is due in part to VA not following a well- documented methodology that describes how the department determined the total costs for the system. In lieu of a methodology, OIT officials said that leadership and staff from the program took efforts to identify and track the cost components and contracts associated with the system. However, they noted that costs associated with VistA were not all clearly labeled as VistA in an IT system and it was necessary to estimate other costs. The officials were also unable to verify how VistA-related costs were separated from other department costs in all areas and subject matter experts were not consistently familiar with the estimation methods employed and how VistA was defined for the purposes of calculating costs. Further, VA officials noted that they were still working on the best approach to identifying and calculating omitted costs. Without documenting the methodology for what costs are to be included and how they were identified and calculated, VA s total does not accurately reflect the development and sustainment costs for VistA. 
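The cost figures discussed in this section can be reconciled arithmetically. The following is a minimal sketch in Python using the approximate amounts reported above, in millions of dollars; the variable and category names are ours, the sketch is illustrative only, and the roughly half-million-dollar difference between the category tally and the fiscal year tally reflects rounding of the reported estimates.

    # Amounts are in millions of dollars, as reported in this statement.
    reliable = {
        "VistA Evolution": 549.6,
        "Interoperability": 140.2,
        "VLER Health": 81.2,
        "Contracts": 202.8,
        "Intergovernmental personnel acts": 2.4,
        "Memorandums of understanding": 2.3,
        "Pay": 34.1,
    }
    not_demonstrated_reliable = {
        "Legacy VistA infrastructure, hosting, and sustainment": 343.0,
        "Related software": 389.0,
        "OIT personnel (pay and administrative)": 544.0,
    }
    by_fiscal_year = {"FY2015": 913.7, "FY2016": 664.3, "FY2017": 711.1}

    print(sum(reliable.values()))                   # about 1,012.6, i.e., roughly $1.0 billion
    print(sum(not_demonstrated_reliable.values()))  # 1,276.0, i.e., roughly $1.3 billion
    print(sum(by_fiscal_year.values()))             # about 2,289.1, i.e., roughly $2.3 billion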
As a result, the department, legislators, and the public do not have the comprehensive, reliable information needed to understand how much it actually cost to develop and maintain the system. Further, VA does not have the reliable information needed to make critical management decisions for sustaining the many versions of VistA over the next 10 years until the Cerner system is fully deployed. <3.3. Implementation of GAO s Recommendation Could Help Ensure VA Reliably Reports VistA Costs> In our report, we are making a recommendation for VA to improve its reporting of VistA s costs. Specifically, we are recommending that the department develop and implement a methodology for reliably identifying and reporting the total costs of VistA. The methodology should include steps to identify the definition of VistA and what is to be included in its sustainment activities, as well as ensure that comprehensive costs are corroborated by reliable data. In written comments on a draft of the report, the department agreed with the recommendation and stated that it will provide the actions it plans to take to address this recommendation within 180 days. In conclusion, although VA is not likely to be positioned to retire VistA for at least another 10 years, the department lacks the comprehensive and reliable cost information needed to make critical management decisions for sustaining the system. As the department continues to work toward acquiring a new electronic health record, it will be important for VA to take actions to address our recommendation for improving the reporting of VistA costs. Doing so is essential to helping ensure that decisions related to the current system are informed by reliable cost information and that there is an accurate basis for reporting on the return on its investment for replacing VistA. Chair Lee, Ranking Member Banks, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have. <4. GAO Contact and Staff Acknowledgments> If you or your staffs have any questions about this testimony, please contact Carol C. Harris, Director, Information Technology Management Issues, at (202) 512-4456 or harriscc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony statement. GAO staff who made key contributions to this testimony are Mark Bird (Assistant Director), Rebecca Eyler, Jacqueline Mai, Monica Perez-Nelson, Scott Pettis, Jennifer Stavros-Turner (Analyst in Charge), and Charles Youman. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Why GAO Did This Study
VA provides health care services to approximately 9 million veterans and their families and relies on its health information system—VistA—to do so. However, the system is more than 30 years old, is costly to maintain, and does not fully support exchanging health data with DOD and private health care providers. Over nearly 2 decades, VA has pursued multiple efforts to modernize the system. In June 2017, the department announced plans to acquire the same system—the Cerner system—that the Department of Defense is implementing. VA plans to continue using VistA during the department's decade-long transition to the Cerner system.
GAO was asked to summarize its report that is being released today, which discusses, among other things, (1) the extent to which VA has defined VistA and (2) the department's annual costs to develop and sustain the system.
In preparing the report on which this testimony is based, GAO analyzed documentation that defines aspects of VistA and identifies components to be replaced; and evaluated the reliability of cost data, including funding obligations associated with the development and sustainment of VistA for fiscal years 2015, 2016, and 2017.
What GAO Found
The Department of Veterans Affairs (VA) has various documents and a database that describe parts of the Veterans Health Information Systems and Technology Architecture (VistA); however, the department does not have a comprehensive definition for the system. For example, VA has identified components that comprise VistA, identified interfaces related to the system, and collected system user guides and installation manuals. VA has also conducted analyses to better understand customization of VistA components at various medical facilities. Nevertheless, the existing information and analyses do not provide a thorough understanding of the local customizations reflected in about 130 versions of VistA that support health care delivery at more than 1,500 sites. Program officials stated that they have not been able to fully define VistA due to the decentralization of the development of the system for more than 30 years. Cerner's contract to provide a new electronic health record system to VA calls for the company to conduct comprehensive assessments to identify site-specific requirements where its system is planned to be deployed. Three site assessments have been completed and additional assessments are planned. If these assessments provide a thorough understanding of the 130 VistA versions, the department should be able to define VistA and be better positioned to transition to the new system.
VA identified costs for VistA and its related activities adding up to approximately $913.7 million, $664.3 million, and $711.1 million in fiscal years 2015, 2016, and 2017, respectively—for a total of about $2.3 billion over the 3 years. However, of the $2.3 billion, the department was only able to demonstrate that approximately $1 billion of these costs were sufficiently reliable.
In addition, the department omitted VistA-related costs from the total. The lack of a sufficiently reliable and comprehensive total cost for VistA is due in part to not following a well-documented methodology that describes how the department determined the costs for the system. As a result of incomplete cost data and data that could not be determined to be sufficiently reliable, the department, legislators, and the public do not have a complete understanding of how much it has cost to develop and maintain VistA. Further, VA lacks the information needed to make decisions on sustaining the many versions of the system.
What GAO Recommends
In its report being issued today, GAO is recommending that VA develop and implement a methodology for reliably identifying and reporting the total costs of VistA. The department agreed with the recommendation.
gao_GAO-19-448 | gao_GAO-19-448_0 | <1. Background> <1.1. State, USAID, and UNRWA Fund Education Assistance in the West Bank and Gaza.> <1.1.1. State> Two State entities play key roles in education assistance in the West Bank and Gaza State s Bureau of Population, Refugees, and Migration (State/PRM) and State s U.S. Consulate General in Jerusalem (State/ConGen). State/PRM has an important role in funding and overseeing education assistance provided by UNRWA in the West Bank and Gaza. State contributes funds to and manages the institutional relationship with UNRWA on behalf of the U.S. government, while recognizing UNRWA s independence and commitment to upholding humanitarian principles, including neutrality. This relationship is guided by the U.S.-UNRWA Framework for Cooperation, annually negotiated between State/PRM and UNRWA. The framework includes UNRWA s commitment to meet the condition on U.S. contributions to UNRWA that U.S. funds do not support terrorism, pursuant to section 301(c) of the Foreign Assistance Act of 1961, as amended. The framework also sets forth the activities used to evaluate UNRWA s conformance with this condition. According to State/PRM officials, some educational materials fit into the framework s section involving broader U.S. priorities for UNRWA s education sector. For example, continuing support for mutually identified special projects such as UNRWA s Human Rights, Conflict Resolution, and Tolerance education program in all of UNRWA s five fields of operation fit into the latter category. UNRWA s five fields of operations are the West Bank (including East Jerusalem), Gaza, Jordan, Lebanon, and Syria. The framework also defines U.S. priorities for UNRWA s education sector. The frameworks for fiscal years 2016 and 2017 state, The United States is particularly interested in ongoing curriculum review process, which enables UNRWA s educators to use consistent criteria in analyzing and enriching local textbooks, in order to promote UN values and principles in UNRWA classrooms. The Secretary of State is required under Section 7048(d) of the Department of State, Foreign Operations, and Related Programs Appropriations Acts for fiscal years 2015 and 2016 to submit a report in writing to the Committees on Appropriations not less than, and for fiscal year 2016 no later than, 45 days after enactment. Section 7048(d) of the Department of State, Foreign Operations, and Related Programs Appropriations Act, 2017 states that this report must be submitted prior to initial obligation of funds. This report is to cover seven topics. One of the required topics in the report is whether UNRWA is taking steps to ensure that the content of all educational materials currently taught in UNRWA-administered schools and summer camps is consistent with the values of human rights, dignity, and tolerance, and does not induce incitement. State/ConGen also has a key role in funding and overseeing U.S. educational assistance. State/ConGen is responsible for the U.S. bilateral relationship with the Palestinian Authority, including efforts to combat incitement to violence and address problematic content in textbooks. In addition, according to the Consulate General s Education Statement of Purpose, State/ConGen funds and implements education projects to improve the quality of education to equip Palestinians with the skills to grow their economy and build a democratic, secular, politically moderate, and outward-focused Palestinian civil society as a driver for peace. 
USAID funds education projects that support Palestinian Authority- administered schools, teacher and administrator training in the West Bank, and scholarships. USAID did not identify or address potentially problematic content in Palestinian Authority textbooks between fiscal years 2015 and 2017 because, according to USAID and State officials, reviewing textbooks is outside the scope of the work of USAID s partners, including nongovernmental organizations, that implement projects in the West Bank and Gaza. USAID officials told us that they defer discussion of any potentially problematic content in textbooks to State as a bilateral policy issue. UNRWA is to provide humanitarian assistance to Palestine refugees in accordance with its mandate provided by the UN General Assembly. UNRWA provides education, health care, social services, microfinance, and emergency assistance to Palestine refugees; infrastructure and camp improvement within Palestine refugee camps; and protection. When UNRWA began operations in 1950, it was responding to the needs of about 860,000 Palestine refugees. UNRWA reports that over 5 million Palestine refugees are registered with UNRWA in the West Bank, Gaza, Jordan, Lebanon, and Syria and are currently eligible for its services. UNRWA administers its education system of more than 700 schools across its five fields of operation, educating approximately 526,000 children, according to UNRWA officials. This includes 370 schools in the West Bank and Gaza for grades 1 through 9 (and grade 10 in two East Jerusalem schools) serving over 300,000 children. UNRWA uses the curricula and textbooks of host governments. In keeping with this practice, UNRWA schools in the West Bank and Gaza use the Palestinian Authority curriculum and textbooks. This practice helps to ensure that UNRWA students can continue their education at government secondary schools and universities and can take national exams. According to UNRWA officials, using the host country curricula is also in line with good practice affirmed by other UN agencies, such as United Nations High Commissioner for Refugees. The Palestinian Authority provides all textbooks used in UNRWA and Palestinian Authority schools in the West Bank and Gaza except for English language textbooks. Figure 1 shows an UNRWA girls school in Shufat refugee camp, located in East Jerusalem. Prior to the release of the first set of Palestinian Authority textbooks developed by the Palestinian Authority in 2000, schools in Gaza used Egyptian textbooks, and schools in the West Bank used Jordanian textbooks. The Palestinian Authority developed its first curriculum in the mid-1990s in cooperation with the United Nations Educational, Scientific and Cultural Organization. Since then, the Palestinian Authority has developed multi-year strategies to improve its educational system, including by modernizing its curriculum and improving its textbooks. The Palestinian Authority worked to implement its early strategies but could not fully do so because responding to other events took priority, according to Palestinian Authority documents. These events included the second Palestinian Intifada (uprising) that began in 2000, the government of Israel s subsequent tightening of security, the rise of Hamas to power in the Palestinian government in 2006, and the resulting delays in donor funding. After donors resumed their support, the Palestinian Authority developed an education strategy for 2008 through 2012. 
This strategy s stated goals include improving the quality of education by reviewing the curriculum and revising textbooks, among other things. Beginning in 2013 the Palestinian Authority undertook a multi-year effort to revise its curriculum and issue new textbooks to provide students with skills such as problem solving and analysis. As a result, the Palestinian Authority Ministry of Education and Higher Education issued new pilot textbooks for grades 1 through 4 in 2016 and 2017. The Palestinian Authority issued textbooks for the first semester of these grades in summer 2016 and textbooks for the second semester later in the year with the start of that semester. The Palestinian Authority issued the final textbooks for grades 1 through 4 and new pilot textbooks for grades 5 through 10 in 2017. As of August 2017, Palestinian Authority public schools and UNRWA schools in the West Bank and Gaza use these textbooks, according to State and UNRWA officials. Figure 2 shows examples of the pilot textbooks for grades 1 through 3. <2. The U.S. Government Funded an Estimated $243 Million for Education Assistance in the West Bank and Gaza for Fiscal Years 2015 through 2017, and UNRWA Purchased English Language Textbooks with Contributions from Donor Countries, including the United States> The U.S. government provided an estimated $243 million for education assistance in the West Bank and Gaza State provided an estimated $193 million, and USAID provided about $50 million for fiscal years 2015 through 2017, according to State and USAID data and UNRWA- provided information. Of State s estimated $193 million contributions to education assistance in the West Bank and Gaza, UNRWA estimated that about $187 million went to its education assistance. State provided the remaining approximately $6 million to non-UNRWA education programs. UNRWA reported expending about $877 million for education in the West Bank and Gaza for fiscal years 2015 through 2017, including contributions from the United States and other donors. According to UNRWA officials, UNRWA used some of these funds to purchase English language textbooks that were used in UNRWA schools in the West Bank and Gaza. State, UNRWA, and USAID funds were not used to purchase or produce other textbooks used in the West Bank or Gaza, according to officials from these agencies. <2.1. State Funded an Estimated $193 Million for Education Assistance in the West Bank and Gaza for UNRWA and Non- UNRWA Projects for Fiscal Years 2015 through 2017> Of the estimated $243 million that the United States provided for education assistance in the West Bank and Gaza for fiscal years 2015 through 2017, State funded an estimated $193 million for UNRWA and non-UNRWA projects, according to State and UNRWA information. For UNRWA, State contributed an estimated $187 million for education in the West Bank and Gaza for fiscal years 2015 through 2017, out of a total contribution to UNRWA of about $1 billion for that timeframe. U.S. contributions support UNRWA s core programs of education, health, relief and social services, microfinance, and infrastructure and camp improvement across its five fields of operation. State does not earmark the majority of its contributions to UNRWA s program budget by either program area or field of operation. Rather, State contributes funds to UNRWA s program budget, which UNRWA pools with contributions from other donors to provide general support to UNRWA s core programs, according to State and UNRWA officials. 
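As a simple cross-check, the rounded funding estimates reported above are internally consistent. The sketch below, in Python with amounts in millions of dollars, is illustrative only; the variable names are ours, and it is not an agency calculation.

    # Rounded estimates for fiscal years 2015 through 2017, in millions of dollars.
    state_total = 193    # State funding for education assistance in the West Bank and Gaza
    usaid_total = 50     # USAID obligations for active education projects
    print(state_total + usaid_total)   # about 243, the estimated U.S. total

    state_to_unrwa_education = 187     # State contributions attributed by UNRWA to its education assistance
    state_non_unrwa = 6                # State funding for non-UNRWA education programs
    print(state_to_unrwa_education + state_non_unrwa)  # about 193, State's estimated total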
State earmarks a small portion of its contributions to the program budget to support special projects of mutual priority to State and UNRWA, according to State officials. For each fiscal year from 2015 through 2017, State earmarked funds for the Human Rights, Conflict Resolution, and Tolerance project, an agency-wide, education-related project implemented in all five of UNRWA s fields of operations, including in the West Bank and Gaza. UNRWA officials stated that UNRWA aims to support teachers in integrating human rights, conflict resolution, and tolerance into the regular curriculum. As part of its education reform, UNRWA developed a Human Rights, Conflict Resolution, and Tolerance Policy and Teacher Toolkit to further strengthen human rights education in UNRWA. According to UNRWA officials, UNRWA has built on international best practices to better integrate human rights education in all UNRWA schools. The United States exclusively funds the Human Rights, Conflict Resolution, and Tolerance project activities, according to State officials. UNRWA estimated expending about $0.3 million on the Human Rights, Conflict Resolution, and Tolerance project in the West Bank and Gaza for fiscal years 2015 through 2017. In addition to State s funding for UNRWA, State s U.S. Consulate General in Jerusalem (ConGen) officials said that State/ConGen provided about $6 million in funding for three non-UNRWA education programs focused on youth in grades 1 through 10 in the West Bank and Gaza for fiscal years 2015 through 2017. These three education programs include (1) a program that provides secondary school students in the West Bank and Gaza an opportunity to study at American high schools and live with American host families; (2) an afterschool English language program that targets academically gifted and economically disadvantaged high school students; and (3) a 2-week summer camp program for at-risk Palestinian youth ages 8 through 14 residing in refugee camps and other marginalized areas throughout the West Bank, Gaza, and Jerusalem. <2.2. USAID Obligated about $50 Million for Education Projects Active in the West Bank and Gaza for Fiscal Years 2015 through 2017, and Did Not Fund Textbooks> Of the estimated $243 million that the United States provided for education assistance in the West Bank and Gaza for fiscal years 2015 through 2017, USAID obligated about $50 million for active non- construction education projects for this timeframe, and it did not fund textbooks, according to USAID officials. USAID funds supported six education projects, of which four were scholarship projects. Two projects the School Support Program and the Leadership and Teacher Development program provided support directly to Palestinian Authority public schools in the West Bank. The School Support Program offers assistance to 50 schools, including infrastructure rehabilitation of schools, in-kind assistance (e.g., science lab equipment and school supplies), extracurricular activities (sports, arts and music, career counseling, and psychosocial support), and leadership and teacher development for the school administration. The Leadership and Teacher Development program supports teacher, principal, and supervisor training to make teaching and learning practices more learner-centered, in addition to the introduction of information technology in education (e.g., internet connectivity, equipment, teaching of coding), classroom assessment and testing methods, and administrative reform at the school, district, and central levels. <2.3. 
UNRWA Reported Expending about $877 Million for Education in the West Bank and Gaza for Fiscal Years 2015 through 2017 and Purchased English Textbooks with Funds That Consist of Contributions from Donor Countries, including the United States> According to UNRWA-provided information, UNRWA expended about $877 million on education for fiscal years 2015 through 2017 in the West Bank and Gaza with funds from the United States and other donors. These funds were expended for UNRWA's education program, including the purchase of English language textbooks and other educational materials. <2.3.1. Education Program> Of the approximately $877 million UNRWA reported expending on education, it expended about $671 million for education in Gaza and $206 million for education in the West Bank. UNRWA's expenditures for Gaza are significantly higher because, as of June 30, 2017, UNRWA operated 275 schools in Gaza serving approximately 270,000 students compared to 95 schools in the West Bank serving approximately 48,000 students. UNRWA's largest reported expenditure within the education sector in fiscal years 2015 and 2016 was personnel-related expenditures, which represented about 85 percent of all education expenditures, according to UNRWA. <2.3.2. English Language Textbooks and Other Educational Materials> For fiscal years 2015 through 2017, including estimated expenditures for 2017, UNRWA reported that it expended about $2 million on educational materials, including about $1 million on English language textbooks, for UNRWA schools in the West Bank and Gaza. Of the approximately $1 million expended on English language textbooks, UNRWA estimates that U.S. contributions totaled about $587,369, with about $28,763 for the West Bank and about $558,606 for Gaza. Educational materials made up less than one percent of UNRWA's reported education expenditures in the West Bank and Gaza, in part because UNRWA does not purchase or fund textbooks for use in its schools in the West Bank and Gaza, with the exception of English language textbooks. The Palestinian Authority provides UNRWA with textbooks for all but one academic subject (English) as an in-kind contribution, according to UNRWA officials. As such, U.S. funds do not contribute to the textbooks that are published by the Palestinian Authority, according to UNRWA information. However, to purchase English language textbooks used in Gaza, UNRWA sent payment from its program budget, which includes commingled donor funds, directly to the Palestinian Authority Ministry of Education and Higher Education, which subsequently paid a private publisher. According to information provided by UNRWA, doing so lowered the per-unit cost through bulk ordering. According to UNRWA, UNRWA staff work on complementary teaching materials (educational materials that UNRWA develops to use alongside host government textbooks) as part of their regular course of work. They also work on student summer learning materials based on the textbooks. Therefore, the expenditures for these materials cannot be disaggregated from staff wages and salaries and are not included in UNRWA's expenditures for educational materials. <3.
UNRWA and State Have Taken Actions to Identify and Address Potentially Problematic Textbook Content> UNRWA has reviewed Palestinian Authority textbooks for the first semester of grades 1 through 10 to identify content it deemed not aligned with UN values and has developed complementary teaching materials to address this content when considered necessary. However, UNRWA did not train teachers on the materials or distribute materials to classrooms; as a result, these materials were not used in UNRWA classrooms. Since at least 2015, State has used several means to identify and address Palestinian Authority textbook content it deemed problematic, including examining nongovernmental organizations allegations about problematic Palestinian Authority textbook content, engaging with Palestinian Authority officials, and monitoring UNRWA s efforts. <3.1. UNRWA Reported Taking Steps to Identify Textbook Content Not Aligned with UN Values and Efforts to Address Such Content Are Ongoing> UNRWA reported that it had reviewed 111 textbooks used in its West Bank and Gaza schools during three sessions since 2016 to identify content it deemed not aligned with UN values. UNRWA reported that it had developed specific complementary teaching materials for any page identified to address this content following each of the reviews. In addition, UNRWA reported that it had trained some field-level education staff but had not trained teachers on the materials or distributed materials to classrooms for several reasons including staff refusal to attend training and workshops. <3.1.1. Actions UNRWA Reported Taking to Identify Content Not Aligned with UN Values in Textbooks> UNRWA reported that it reviewed the Palestinian Authority and English language textbooks in part based on the values contained in its Framework for the Analysis and Quality Implementation of the Curriculum (Curriculum Framework), through which UNRWA aims to ensure that the curricula taught in its schools reflect UN values, such as neutrality, tolerance, equality, and nondiscrimination, and human rights with regard to race, gender, language, and religion. However, UNRWA explained that, given the urgency of reviewing any newly issued textbooks, it developed a rapid review process. Appendix II provides an overview of the Curriculum Framework and rapid review processes. UNRWA reported conducting three rapid reviews of all newly released Palestinian Authority textbooks since 2016, in each case using the rapid review criteria as a guide: beginning in October 2016, for textbooks for the first semester of grades 1 through 4; beginning in January 2017, for textbooks for the second semester of grades 1 through 4; and beginning in August 2017, for all textbooks used in UNRWA schools for the first semester of grades 1 through 10. UNRWA slightly revised the criteria used over the course of its three rapid reviews. UNRWA officials noted that for the first rapid review, they reviewed textbooks to determine if the textbooks were aligned with UN values and the UN commitment to neutrality. For the second rapid review, UNRWA developed three criteria: (1) neutrality/bias, (2) gender, and (3) aggressiveness. For the third rapid review, UNRWA renamed the criterion of aggressiveness to age-appropriateness to better reflect the types of issues it was intended to capture. The criteria for the third rapid review are 1. neutrality/bias: taking sides or engaging in controversies of a political, racial, religious, or ideological nature 2. gender: gender stereotypes 3. 
age-appropriateness (formerly aggressiveness): content that is violent, frightening, or inappropriate for the student's age. Appendixes II and III provide more detail on UNRWA's textbook reviews. In fall 2017, UNRWA reported to donors that, based on its rapid review criteria, its August 2017 review identified issues on 3.1 percent of the pages in the 75 textbooks for the first semester of grades 1 through 10 used during the 2017-2018 school year. In particular, UNRWA identified 203 issues covering a total of 229 pages (out of a total of 7,498 pages reviewed), the majority of which it identified as related to neutrality/bias. According to UNRWA-provided information, UNRWA found no cases of incitement to violence in the Palestinian Authority grades 1 through 10 textbooks during the August 2017 rapid review. More than half of the neutrality/bias issues it found were related to one of the following three categories: maps, Jerusalem, and cities (for example, regional maps that exclude Israel and refer to Israeli cities as Palestinian). Additional details about the issues UNRWA identified and the complementary teaching materials it developed have been omitted from this report because the information is classified. In addition to issues UNRWA identified using the three rapid review criteria, it identified positive attributes in the textbooks newly issued by the Palestinian Authority, such as promoting active learning, life skills, gender equality, higher-order thinking, and problem-solving skills, according to UNRWA officials. <3.1.2. Actions UNRWA Reported Taking to Address Content It Deems Not Aligned with UN Values in Textbooks but Did Not Complete> For the content that UNRWA identified as not aligned with UN values during all three rapid reviews, UNRWA officials reported that they developed specific complementary teaching materials for any page with issues identified, such as alternate photos, examples, and guidance for teachers, as needed, to use with the textbooks in UNRWA schools. UNRWA also developed training guides and presentations to support training on the complementary teaching materials for each of the reviews. According to UNRWA, it developed these materials to ensure that the lessons taught in UNRWA schools adhere to UN core values, such as neutrality. In addition, UNRWA officials reported that they trained some field-level education officials but were not able to train teachers or distribute materials to classrooms. UNRWA officials told us that UNRWA did not change the content of Palestinian Authority textbooks and that it does not have the authority or mandate to do so. UNRWA developed complementary teaching materials to address the following issues it identified, among others, during its rapid review of pilot textbooks for the second semester of grades 1 through 4. Details about these complementary teaching materials were omitted because the information is classified. For details about the issues UNRWA addressed, see appendix IV. UNRWA officials told us that, as of April 2018, they had reviewed all textbooks for the second semester of grades 1 through 10. UNRWA did not train teachers or complete distributing complementary teaching materials after its first rapid review for several reasons. In a January 2017 briefing note to the United States and other donors, UNRWA reported that it had completed training for professional support staff on the complementary teaching materials for the pilot textbooks for the first semester of grades 1 through 4.
However, UNRWA officials told us that UNRWA was not able to deliver the training for school staff, including principals or teachers, or disseminate these materials to classrooms before the end of the first semester of the 2016-2017 school year. They noted that this was due to collective employment actions between August 2016 and January 2017, including staff walkouts and a refusal to attend training and workshops, that were unrelated to the curriculum reform and having to complete the school exam period immediately following the resolution of these collective employment actions. For similar reasons, UNRWA was unable to distribute materials or train teachers after the second rapid review of pilot textbooks for the second semester of the 2016-2017 school year. UNRWA reported to the United States and other donors in March 2017 that it anticipated completing training on the complementary teaching materials for all professional support staff and teachers by the end of that month in the West Bank and by the end of the following month in Gaza, according to State/PRM officials. However, UNRWA officials told us that UNRWA halted the training following a Palestinian Authority announcement of suspension of ties with UNRWA in response to UNRWA s use of complementary teaching materials, and the UNRWA staff union reactions. UNRWA then determined that these materials would be outdated because the Palestinian Authority planned to issue revised textbooks in August 2017, before the start of the new school year. UNRWA s efforts to train teachers and issue complementary teaching materials as a result of the third rapid review were ongoing as of December 2017. As of that date, UNRWA officials told us that UNRWA had finalized the complementary teaching materials for the final textbooks for the first semester of grades 1 through 4 and pilot textbooks for grades 5 through 10, as well as the English Language textbooks for the first semester of grades 1 through 10, all of which are being used during the 2017-2018 school year. UNRWA officials told us that UNRWA has developed training materials for the final textbooks for first semester grades 1 through 4 and pilot textbooks for grades 5 through 10 and planned to begin training of all relevant professional support staff, who will, in turn, train teachers using a cascaded training model. In addition, UNRWA officials reported sharing the complementary teaching materials in PDF format with field education staff in the West Bank and Gaza for distribution to all teachers. However, in commenting on a draft report, UNRWA officials told us in April 2018 that they did not disseminate the training or the complementary teaching materials for the third rapid review for various reasons. For example, some UNRWA staff opposed the use of these materials in classrooms while other staff boycotted the training. In addition, UNRWA faced deteriorating operational and political environments during that time period, such as financial shortfalls, as well as an increased number of violent confrontations between Palestinians and Israeli Security forces in the West Bank and Gaza. According to UNRWA, these factors heightened sensitivities and risks associated with the training and curriculum enrichment materials. As a result, these materials were not used in UNRWA classrooms. <3.2. 
State Reported Taking Steps to Identify and Address Content Deemed Problematic> To promote appropriate content in Palestinian Authority textbooks, State/ConGen officials have examined nongovernmental organizations studies and allegations about potentially problematic Palestinian Authority textbook content and confirmed instances of problematic material since fiscal year 2015. State/ConGen officials told us that the studies they reviewed raised concerns with a range of content, and they will continue their reviews of these studies in the future. In examining Palestinian Authority textbooks, State/ConGen has found material that ignores Israeli narratives, includes militaristic and adversarial imagery, and preaches the values of resistance, according to State officials. Although according to State officials there has been a general agreement in these studies on the absence of anti-Semitic content or explicit incitement to violence in Palestinian Authority textbooks, State/ConGen nonetheless has confirmed instances of inappropriate language, content, and imagery based on the grade level of certain textbooks. State/ConGen also noted that the textbooks do not mention Israel or Judaism, and they continue to include regional maps that exclude Israel. In response to allegations that two textbooks in particular the National and Social Education (civics) textbooks for grades 3 and 4 contained problematic content, State/ConGen officials reported that they translated them into English and then analyzed two new pilot civics textbooks for grade 4 for the first and second semesters as well as previous versions of the same books and contracted for an external review of the textbooks. State/ConGen officials selected these textbooks for translation and analysis to examine a smaller subset of material reviewed in one independent study. State/ConGen officials told us in September 2017 that they had received the results of the external review and that these results informed their advocacy efforts and provided external perspective on additional material. To address incitement to violence, such as the inclusion of problematic content in textbooks, State/ConGen officials have engaged the highest levels of the Palestinian Authority officials, according to State officials. State/ConGen officials reported that, since 2015, they have encouraged Palestinian officials during these meetings to address incitement to violence in textbooks, and Palestinian officials have done so. Officials also noted that the Palestinian Authority President has publicly condemned incitement to violence and vowed to combat it. A case study of a particularly problematic lesson illustrates State/ConGen s role and approach. State/ConGen officials reported that a specific math problem using the number of Palestinian casualties in the First and Second Intifadas (uprisings) was clearly objectionable even if it did not demonstrate a call for violence against Israel. The Consulate and Consul General subsequently raised this concern with Palestinian officials, including the Minister of Education. To discuss the Palestinian Authority s ongoing textbook reform and address potential concerns, State/ConGen officials reported that they also convened a meeting in April 2017 of international donor groups and members of the international community that participate in the Palestinian-led Education Sector Working Group. 
A State official said that the group conducted a wide-ranging discussion about incitement to violence and agreed to discuss incitement bilaterally with the Palestinian Authority as appropriate. State/ConGen continued to raise the issue with the Palestinian Authority following the meeting. In accordance with State/PRM s role in monitoring UNRWA s efforts to identify and address potentially problematic content in Palestinian Authority textbooks, State/PRM reports that it engages regularly with UNRWA. It does so through reviews of UNRWA reports, site visits to UNRWA schools and classrooms when and where security permits, regular communication with UNRWA staff at UNRWA headquarters and in the field, and by attending UNRWA s briefings on the status of its textbook reviews. In addition, State/PRM officials aim to ensure that UNRWA takes adequate steps to ensure neutrality in UNRWA s operations. To do so, State/PRM meets regularly with UNRWA officials to ensure that UNRWA operates in a fully neutral way in line with UN standards across all sectors of operation, including education and content of textbooks. <4. State Submitted Required Reports to Congress, but One Contains Inaccurate Information and Reports Do Not Include Some Information That Could Be Useful for Congressional Oversight> State/PRM submitted annual reports to Congress in response to provisions in the annual appropriations acts for fiscal years 2015, 2016, and 2017; however, these reports have several limitations regarding educational assistance. First, we found that State/PRM s 2017 report inaccurately described certain UNRWA actions to address textbook content not aligned with UN values. Inaccurate information about UNRWA s actions could limit the transparency of State s and UNRWA s activities and the usefulness of State s reports as tools for congressional decision making and oversight. Second, while State s reports explain generally how UNRWA is taking steps to ensure that educational materials in UNRWA schools are consistent with certain values, we found that the reports did not include some information about UNRWA s textbook review that could be useful for congressional oversight. Specifically, State s reports did not specify whether the educational materials are consistent with the value of dignity or not inducing incitement. In addition, we found that in its 2017 report, State did not include information provided by UNRWA about the nature and extent of content that UNRWA identified in Palestinian Authority textbooks as not aligned with UN values. This information, while not required by law to be included in State s reports, could be useful to congressional decision- makers. <4.1. State s Reports Generally Explain UNRWA s Actions to Address Textbook Content Not Aligned with UN Values, but Its 2017 Report to Congress Inaccurately Described Certain Actions> State submitted reports to Congress each year in a timely manner in accordance with the requirements of the appropriations acts. In the annual appropriations acts for fiscal years 2015 through 2017, Congress required State to report on seven different topics, including whether UNRWA is taking steps to ensure that the content of all educational materials taught in UNRWA schools and summer camps is consistent with the values of human rights, dignity, and tolerance, and does not induce incitement. 
State s reports explain that UNRWA applied its Curriculum Framework in reviewing textbook content and that the Curriculum Framework will help ensure all materials used in UNRWA classrooms reflect UN values and principles. These UN values address issues related to neutrality, human rights, tolerance, and non-discrimination. These values are aligned with the ones that are included in the laws, according to State officials. However, we found that State s 2017 report to Congress inaccurately described some of UNRWA s actions to address content that is not aligned with UN values. State correctly reported that UNRWA completed several actions related to its second rapid review, including that UNRWA reviewed 18 new Palestinian Authority pilot textbooks, with a particular focus on the issues of neutrality and bias, gender, and aggressiveness. However, State reported that UNRWA trained teachers on the application of the complementary teaching materials they developed and disseminated the materials to classrooms, actions that UNRWA officials told us they did not complete. State/PRM officials stated they became aware that UNRWA s classroom training and dissemination of complementary teaching materials had been delayed in June 2017, after the school year ended and after submitting the report to Congress in May 2017. State/PRM officials stated that, based on conversations they had with UNRWA during tense discussions between UNRWA and the Palestinian Authority in March and April 2017, they believed UNRWA would train teachers and disseminate complementary teaching materials after the tensions dissipated. These officials said they did not provide the congressional report to UNRWA for it to review because it is considered an internal U.S. government document. While State/PRM officials stated they verified facts related to other aspects of the reporting requirement, they did not verify the implementation of training and dissemination of complementary teaching materials because they believed this information to be current given ongoing dialogue with UNRWA. In addition, State/PRM officials told us that they were not aware of the inaccuracy in their report to Congress until we brought it to their attention, although they were aware that the trainings had not been implemented in June 2017. In November 2017 about 6 months after the 2016-2017 school year ended State/PRM officials told us that their understanding remained that UNRWA had trained some education staff on the application of the complementary teaching materials, though not all teachers, and that UNRWA had disseminated the materials to some education staff and schools, though not to all classrooms. From State s perspective, the statement in its report to Congress about UNRWA training teachers and disseminating complementary teaching material was partly accurate. However, UNRWA officials confirmed that they did not disseminate the training or the complementary teaching materials related to the second rapid review to any school staff, including principals and teachers. In October 2017, State noted that it has taken, or plans to take, action to address the accuracy of reporting in the future. First, subsequent to learning that the training had been halted in June 2017, State/PRM officials reiterated to UNRWA the need to keep them informed in a timely manner when the situation in the field shifts with regard to textbooks and other issues. 
State/PRM officials also said that they would likely avoid misreporting facts in the future by taking additional actions, such as including specific dates of the actions taken in their reports and verifying key facts with UNRWA. Further, they said they plan to address the issue of inaccuracy in the fiscal year 2018 report, if needed. Standards for Internal Control in the Federal Government states that management should use quality information to achieve the entity's objectives. Incomplete and inaccurate information about UNRWA's actions could limit the transparency of UNRWA's activities and the usefulness of State's reports as tools for congressional decision making and oversight. <4.2. State's Reports Do Not Include Some Information That Could Be Useful for Congressional Oversight> Our analysis also showed that State's required reports did not include some information that could be useful for congressional oversight of whether UNRWA is taking steps to ensure that the content of all educational materials currently taught in UNRWA schools and summer camps is consistent with the values of human rights, dignity, and tolerance, and does not induce incitement. In particular, our analysis showed that while State's reports partly explain how certain educational materials are consistent with two elements included in the law (human rights and tolerance), they do not address the other two elements (dignity and not inducing incitement). In addition, State's reports do not include details about the nature and extent of content UNRWA identified in Palestinian Authority textbooks as not aligned with UN values. State's reports for all 3 years partly explain how certain educational materials are consistent with the values of human rights and tolerance but do not specifically say whether the Palestinian Authority textbooks are consistent with these values. In particular, the reports discuss the U.S.-funded Human Rights, Conflict Resolution, and Tolerance project and accompanying teacher toolkit. The toolkit aims to ensure that teachers have the skills and resources to implement human rights education across UNRWA classrooms. The reports note that in Gaza, UNRWA students use a dedicated human rights curriculum anchored in the Universal Declaration of Human Rights. While the Human Rights, Conflict Resolution, and Tolerance project is relevant to the congressional reporting requirement, it is supplemental to the Palestinian Authority textbooks, the core educational materials used in UNRWA's schools. State's reports do not discuss whether these Palestinian Authority textbooks are consistent with the values of human rights and tolerance. Moreover, none of State's reports for these 3 years explicitly state that the UN values UNRWA applied in reviewing textbooks encompass the value of dignity or not inducing incitement. State/PRM officials said that these topics are addressed implicitly, in that the value of dignity is encompassed by the concepts of human rights and non-discrimination, which are among the elements encapsulated by the UN values applied as part of the Curriculum Framework. State/PRM officials further assert that reporting to Congress on UNRWA's application of UN values via the Curriculum Framework necessarily encompasses the concept of non-inducement of incitement. In State's view, materials reviewed through the lens of UN values and principles as defined by the UN imply that such review takes into consideration whether the materials include incitement to violence.
However, State did not include language about dignity or not inducing incitement explicitly in its reports to Congress. Regarding the nature and extent of content UNRWA identified in Palestinian Authority textbooks as not aligned with UN values, State's May 2017 report to Congress did not include details provided by UNRWA about UNRWA's reviews of Palestinian Authority textbooks that, while not required by law to be included in State's reports, could be helpful for congressional oversight. The May 2017 report states that UNRWA reviewed pilot textbooks for the first and second semesters of grades 1 through 4 and identified a limited amount of problematic content in the Palestinian Authority materials. However, State's report did not cite the percentage of all pages UNRWA deemed as including content not aligned with UN values, the percentage of issues UNRWA identified in relation to each of the three rapid review criteria, or examples of such content (e.g., frightening pictures that it considered inappropriate for children), which UNRWA had reported to State/PRM and other donors at least 2 months earlier. We have previously reported that agencies should consider the differing information needs of various users to ensure that performance information will be useful in decision making. Standards for Internal Control in the Federal Government states that information should be communicated in a way that is useful to internal and external users. Less thorough information in State's annual reporting could limit its usefulness as a tool for congressional oversight. In addition, the lack of certain relevant information may limit Congress's ability to fully assess the nature and extent of material that may not be aligned with UN values in Palestinian Authority textbooks. <5. Conclusions> The United States has funded education for Palestinian children for decades, including an estimated $243 million for fiscal years 2015 through 2017. State funds education projects to improve the quality of education to equip Palestinians with the skills to grow their economy and build a democratic, secular, politically moderate, and outward-focused Palestinian civil society as a driver for peace, according to the Consulate General's Education Statement of Purpose. Congress remains interested in the role UNRWA plays in educating children under its purview, requiring State to report on steps UNRWA is taking to ensure that the content of all educational materials currently taught in UNRWA-administered schools is consistent with the values of human rights, dignity, and tolerance, and that those materials do not induce incitement. State's 2017 report inaccurately describes certain UNRWA actions to address content not aligned with UN values. In addition, State's reports to Congress did not specify whether the educational materials used in UNRWA schools are consistent with the value of dignity or not inducing incitement. Although State's reports generally discuss whether UNRWA is taking certain steps, the lack of certain relevant information in State's reports could limit their usefulness as a tool for congressional decision making and oversight. Accurate and complete information would help Congress more fully understand and assess the nature and extent of content in textbooks that is not aligned with UN values, as well as UNRWA's actions to address this content. <6.
Recommendations for Executive Action> We are making the following four recommendations that could further enhance State's annual reports to Congress:
The Secretary of State should direct the Assistant Secretary for Population, Refugees, and Migration to establish a process to ensure that State's reporting to Congress on the actions UNRWA has taken is accurate. (Recommendation 1)
The Secretary of State should direct the Assistant Secretary for Population, Refugees, and Migration to provide information in its reports to Congress that could be useful for congressional oversight, including information that:
discusses whether Palestinian Authority textbooks used in UNRWA schools are found to be consistent by UNRWA with the values of human rights and tolerance. (Recommendation 2)
explicitly states whether the UN values UNRWA applied as part of the Curriculum Framework encompass dignity and do not induce incitement. (Recommendation 3)
describes the nature and extent of textbook content that UNRWA identified as not aligned with UN values, including in the English language textbooks purchased by UNRWA. (Recommendation 4)
<7. Agency Comments and Our Evaluation> We provided a draft of our April 2018 classified report to State and USAID for comment. We also provided UNRWA with relevant information for comment. In response, State and UNRWA provided written comments on the classified report. We have reprinted State's updated written comments in appendix V and UNRWA's original written comments in appendix VI. All three also provided technical comments, which we incorporated as appropriate throughout our report. In its written comments on this report, State noted that it has implemented all four of our recommendations contained in the classified report we issued in April 2018. To ensure the accuracy of information in its reports, State has developed standard operating procedures for drafting and verifying the information contained in its annual report to Congress on UNRWA, including clearly sourcing all information contained in the report and seeking written verification from UNRWA on any information previously obtained via oral communication. State implemented our recommendation that it discuss whether Palestinian Authority textbooks used in UNRWA schools are found to be consistent by UNRWA with the values of human rights and tolerance. State included additional qualitative details from UNRWA on its evaluation of the Palestinian Authority materials, and the degree to which UNRWA assesses that these materials are consistent with human rights and tolerance. State implemented the recommendation to explicitly state in its reports to Congress whether the UN values UNRWA applied as part of the Curriculum Framework encompass dignity and do not induce incitement. In addition, State implemented the recommendation to describe the nature and extent of textbook content that UNRWA identified as not aligned with UN values, including in the English language textbooks purchased by UNRWA. State provided additional qualitative and quantitative details from UNRWA's evaluation of Palestinian Authority textbooks in its fiscal year 2018 report based on information provided by UNRWA.
In its written comments, UNRWA said, among other things, that while using the curricula and textbooks of host nations, UNRWA s education program strives to realize the potential of all its Palestine refugee students, to help them develop into confident, innovative, questioning, thoughtful, tolerant and open-minded critical thinkers, who uphold human values and tolerance, and contribute positively to the development of their society and the global community. In addition, UNRWA noted that it appreciates our understanding of the role of the Curriculum Framework and how UNRWA takes specific measures to rapidly review newly issued textbooks, including the large number of new textbooks released by the Palestinian Authority Ministry of Education and Higher Education throughout 2016 and 2017. UNRWA also commented that while it does not have authority to determine or alter national curricula, UNRWA is committed to taking all measures within its control to ensure that the delivery of its educational services is fully aligned with the values of the United Nations. UNRWA did not comment on our recommendations, since they were not directed to UNRWA. We are sending copies of this product to the appropriate congressional committees, as well as the Secretary of State, the Administrator of USAID, the Commissioner-General of UNRWA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9601 or melitot@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. Appendix I: Objectives, Scope, and Methodology This report examines (1) the amount of funding Department of State (State) and U.S. Agency for International Development (USAID) provided for education assistance to the West Bank and Gaza for fiscal years 2015 through 2017 and how it was used; (2) how the United Nations Relief and Works Agency for Palestine Refugees in the Near East (UNRWA) and State have identified and addressed potentially problematic content in educational materials used by schools in the West Bank and Gaza; and (3) whether State has submitted required annual reports to Congress including information on whether UNRWA is taking steps to ensure that the content of all educational materials currently taught in UNRWA- administered schools is consistent with the values of human rights, dignity, and tolerance, and do not induce incitement. To determine which U.S. government agencies provide education assistance for the West Bank and Gaza, we reviewed documents and conducted interviews with State, USAID, and the Overseas Private Investment Corporation (OPIC). We initially conducted an interview with OPIC because it was included in a previous report we issued on a similar topic. We excluded OPIC from the scope of this engagement because they did not provide relevant education assistance to the West Bank or Gaza between fiscal years 2015 and 2017. We focused our review on State and USAID because these agencies provided education assistance to the West Bank and Gaza during this timeframe. For this review, we refer to State and USAID when we refer to the U.S. government. 
To examine the amounts of funding State and USAID provided for education assistance to the West Bank and Gaza and how it was used for fiscal years 2015 through 2017, we took the following steps. We examined actual funding where it was available and estimated funding where it was not. We obtained and analyzed financial data from State and USAID and expenditure data from UNRWA for education assistance to the West Bank and Gaza in fiscal years 2015 through 2017. We used these data to describe how much and for what types of activities State contributed funds to UNRWA. We also obtained and analyzed expenditure and contributions data from State and obligations data from USAID to describe non-UNRWA education programs that they administered in the West Bank and Gaza. We reported the amount of funds UNRWA expended in general on education in the West Bank and Gaza, including the amounts that UNRWA expended on educational materials and specifically on textbooks. We define educational materials to primarily include curriculum, textbooks, select videos and web-based tools, and any complementary teaching materials, including those developed by UNRWA that aim to supplement, replace, or mitigate materials that UNRWA deems not aligned with UN values. We exclude posters, library books, educational technology, education administration materials, extracurricular materials, handouts and worksheets, and teacher training materials, with limited exceptions, from materials produced by UNRWA and used to mitigate material that UNRWA deemed not aligned with UN values or to supplement the curriculum. According to UNRWA officials, the financial information they provided pertains to educational materials, including textbooks, complementary teaching materials, and costs related to an interactive learning portal in Gaza and UNRWA TV. Finally, we reported the amount of funds USAID obligated for education programs in the West Bank and Gaza during this timeframe. To analyze these data, we reviewed State-UNRWA contribution agreements, State reports on UNRWA emergency appeals expenditures, and USAID award documents. We examined the two types of funding that State contributed to UNRWA program budget funding and emergency appeals. We also examined the three ways in which UNRWA expends that funding through program budget expenditures, emergency appeals expenditures, and special project expenditures. We supplemented these data by interviewing State, UNRWA, and USAID officials about funding. While the majority of UNRWA data are actual expenditures, some UNRWA data are estimates. According to UNRWA officials, they estimated all UNRWA expenditure data for fiscal year 2017 because, as of December 2017, when we finished collecting data, UNRWA s 2017 fiscal year was ongoing. In addition, UNRWA estimated its education expenditures provided by the United States because U.S. contributions to UNRWA are generally not earmarked. Rather, UNRWA s core budget, its program budget, pools funding from all UNRWA donors. For this reason, we reported all UNRWA expenditure data on education assistance based on information UNRWA officials provided us. To make these estimates, UNRWA officials informed us that they calculated U.S. funding as a proportion of all UNRWA funding, and applied that proportion to their educational expenditures. 
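To illustrate the pro rata method UNRWA officials described, the short sketch below shows how a U.S. share can be estimated by applying the U.S. proportion of total donor contributions to an expenditure line. This is our illustrative sketch only, not UNRWA's accounting system, and the contribution figures in the example are hypothetical placeholders rather than UNRWA data.

# Illustrative sketch (not UNRWA's actual system) of the pro rata estimate:
# estimated U.S. share = (U.S. contributions / total donor contributions)
#                        * expenditure line (e.g., education or textbooks).

def estimate_us_share(us_contributions: float,
                      total_contributions: float,
                      expenditure: float) -> float:
    """Apply the U.S. proportion of total donor funding to an expenditure line."""
    if total_contributions <= 0:
        raise ValueError("total donor contributions must be positive")
    return (us_contributions / total_contributions) * expenditure

if __name__ == "__main__":
    # Hypothetical placeholder figures: if the United States provided $300 million
    # of a $1,200 million program budget (25 percent), the estimated U.S. share of
    # a $1.0 million textbook expenditure would be $250,000.
    share = estimate_us_share(us_contributions=300_000_000,
                              total_contributions=1_200_000_000,
                              expenditure=1_000_000)
    print(f"Estimated U.S. share: ${share:,.0f}")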
Data on State's contributions to UNRWA and USAID's funding to education programs in the West Bank and Gaza active between fiscal years 2015 and 2017 are obligations; according to State, all funds disbursed to UNRWA were through contributions. Data on State's funding for non-UNRWA education programs are expenditures. For the purposes of this report, we use the U.S. fiscal year (October 1 through September 30) for all State and USAID contributions data, while we use UNRWA's fiscal year (January 1 through December 31) for all UNRWA expenditure data. In addition, State and USAID awarded several grants for additional years not included in our scope. For example, USAID's first obligation to the Leadership and Teacher Development program occurred in fiscal year 2011 and the latest obligation to that program occurred in fiscal year 2017. As a result, the data presented in this report may include additional contributions of funds beyond what State and USAID obligated for fiscal years 2015 through 2017. To determine the reliability of the obligations and expenditure data, we requested information from State, UNRWA, and USAID officials regarding the processes they used to collect and verify data, and we checked the data for reasonableness and completeness. When we found discrepancies or missing data fields, we worked with relevant agency officials to correct the discrepancies and missing fields. We compared State's contribution data with UNRWA's expenditure data to ensure consistency. We discussed UNRWA's financial data for educational expenditures with knowledgeable officials, reviewed audited financial statements for confirmation, and reviewed vouchers they provided. However, we did not independently audit their financial data. To ensure completeness of the data, we reviewed initial grant documents or contribution agreements and all associated amendments for the (1) six education projects USAID funded in the West Bank and Gaza, and (2) annual UNRWA contributions State made between fiscal years 2015 and 2017. We discussed UNRWA's procedures for estimating the proportion of U.S. funds that went to educational expenditures with knowledgeable officials. Based on our initial assessments of the data, we determined that the State and USAID funding data we collected were sufficiently reliable for the purposes of this report. In addition, we determined that the actual expenditure data we collected from UNRWA were sufficiently reliable for our purposes, and that the estimated expenditures it provided were reasonable for the purposes of this review. To examine how UNRWA and State have identified and addressed potentially problematic content in educational materials used by schools in the West Bank and Gaza, we reviewed the policies and procedures that UNRWA and State have established and implemented. We focused on actions agencies took in response to the (1) pilot textbooks for grades 1 through 4 that the Palestinian Authority issued in 2016 and that UNRWA used during the 2016-2017 school year; (2) final textbooks for grades 1 through 4, and pilot textbooks for grades 5 through 10, that the Palestinian Authority issued in 2017 and that were used during the first semester of the 2017-2018 school year; and (3) English language textbooks that UNRWA and the Palestinian Authority purchased for grades 1 through 10, published in 2011 through 2014, and used during the 2017-2018 school year.
According to UNRWA officials, these textbooks do not include the second semester Palestinian Authority textbooks for the 2017-2018 school year (released in late 2017) and the second semester English language textbooks, and therefore do not cover all the textbooks used in UNRWA and Palestinian Authority schools for grades 1 through 10. We examined how UNRWA and State have implemented their policies and procedures. We reviewed State's cables and agencies' policy documents and reports and met with officials from State, UNRWA, and USAID in Washington, D.C., and overseas. In addition, we interviewed international donors overseas and officials from the government of Israel, the Palestinian Authority, and the Jerusalem municipality. We only interviewed official government entities and public international organizations and did not meet with non-governmental interest groups. We followed up with relevant officials on multiple occasions to assess the progress of textbook reviews and the status of implementation of other policies and procedures. We interviewed UNRWA officials about the methods they used to conduct the rapid reviews of textbook content and reviewed documents they provided that outline their procedures. While the methods and procedures described seemed generally reasonable, we did not independently review UNRWA's underlying documents to fully assess the reliability of the rapid review results it reported because UNRWA is an international organization. Moreover, it was beyond the scope of our review to examine the underlying documents and textbooks themselves, most of which are written in Arabic. There can be a number of challenges to analyzing and coding content as UNRWA did in its rapid reviews, such as the need for those performing the review to exercise judgment, and while the overall process officials outlined generally appeared reasonable, we cannot comment on the extent to which it successfully overcame all of the potential challenges. We are presenting the results of the textbook reviews, attributed to UNRWA, to help support our finding that the agency has developed procedures to review textbooks and that it found some concerns in its recent reviews. In addition, we are providing details about these reviews for context because the State Department summarized the results of the first two reviews in its May 2017 report to Congress, which we discuss in the third section of this report. This report is a public version of a classified report that we issued in April 2018. The Department of State deemed some of the information in our April 2018 report to be classified, and that information must be protected from loss, compromise, or inadvertent disclosure. Therefore, this report omits classified information about neutrality/bias, gender issues, and other textbook content identified in English language textbooks by UNRWA as not aligned with UN values. Although the information provided in this report is more limited, the report addresses the same objectives as the classified report and uses the same methodology. To examine whether State has submitted required annual reports to congressional committees, including information on whether UNRWA is taking steps to ensure that the content of all educational materials currently taught is consistent with the UN values of human rights, dignity, and tolerance, and does not induce incitement, we took the following steps.
We reviewed the legal requirements for State to report on the steps UNRWA is taking to ensure that the content of all educational materials currently taught is consistent with the UN values. These requirements are found in the annual appropriations acts; for fiscal year 2017, the requirement is located in Section 7048(d)(5) of the Consolidated Appropriations Act, 2017. We reviewed State's reports to Congress in 2015, 2016, and 2017, and compared data State reported regarding education assistance with data we gathered through meetings with State and UNRWA officials in Washington, D.C., and overseas. We also reviewed UNRWA documents. The performance audit upon which this report is based was conducted from January 2017 to April 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We subsequently worked with State from February 2019 to June 2019 to prepare this unclassified version of the original classified report for public release. This public version was also prepared in accordance with these standards. Appendix II: Overview of UN Relief and Works Agency for Palestine Refugees in the Near East's (UNRWA) Curriculum Framework Review and Rapid Review Processes UNRWA's Framework for Analysis and Quality Implementation of the Curriculum (Curriculum Framework) provides the overarching structure for the review and enrichment of educational materials used in UNRWA schools in all of its fields of operation, including the West Bank and Gaza. The Curriculum Framework, developed as part of UNRWA's education reform process, aims to ensure that the curricula taught in its schools support the development of skills and competencies that are considered important for individual development in the 21st century. In addition, the Curriculum Framework aims to ensure that the delivery of the host country's curriculum reflects UN values, such as neutrality, tolerance, equality, and nondiscrimination, and human rights with regard to race, gender, language, and religion, as well as the development of respect for a child's own cultural identity, language, and values in line with UN values. According to UNRWA officials, neutrality is one of the four humanitarian principles formally adopted by the UN General Assembly and endorsed by UNRWA and is a core obligation and value of UN staff as spelled out in the UN's regulatory framework. According to UN humanitarian principles, the concept of neutrality means that, irrespective of their personal beliefs and opinions, humanitarian actors must not take sides in hostilities or engage in controversies of a political, racial, religious or ideological nature.
The Curriculum Framework includes 10 Curriculum Framework principles and five student competencies against which UNRWA reviews educational materials used in its schools:
Principle 1: Focuses on understanding and application and not just memorization
Principle 2: Is active, practical, and encourages independent thinking
Principle 3: Is relevant to students' lives and situation, particularly as
Principle 4: Provides a variety of teaching and learning approaches
Principle 5: Integrates learning and emphasizes connections to other
Principle 6: Is inclusive and provides learning opportunities for students of all abilities
Principle 7: Provides for students' personal development and well-being
Principle 8: Is free of biases (such as gender, disabilities, and ethnicity)
Principle 9: Enables students to value their Palestinian culture, heritage, and identity
Principle 10: Reflects UN values
Curriculum Framework Student Competencies:
1. Critical and creative thinking
3. Communication and literacy
UNRWA's Curriculum Framework includes tools to guide the analysis and review of host country textbooks and other learning material at the school and field levels, and remains the overarching framework for the review and enrichment of educational materials used in UNRWA schools agency-wide. However, given the urgency of reviewing any newly issued textbooks for use during the 2016-2017 school year, UNRWA developed a rapid review process. The rapid review process does not replace the Curriculum Framework process, as the Palestinian Authority textbooks reviewed through the rapid review process are also subject to the regular Curriculum Framework review process at the field and school levels, as follows: At the field level, field education staff are to use the Field-Level Analysis Tool of the Curriculum Framework to review textbooks against all five student competencies and 10 principles of the Curriculum Framework. At the school level, all UNRWA teachers and school principals in the West Bank and Gaza and UNRWA's other fields of operations are to use the School-Level Analysis Tool of the Curriculum Framework to review their own teaching programs and lessons, including curriculum materials they use, while considering their context and diversity of needs. The School-Level Analysis Tool focuses on the five Student Competencies and select Curriculum Framework Principles: (1) Principle 4 provides a variety of teaching and learning approaches; (2) Principle 6 is inclusive and provides learning opportunities for students of all abilities; (3) Principle 8 is free of biases (such as gender, disabilities, and ethnicity); (4) Principle 9 enables students to value their Palestinian culture, heritage, and identity; and (5) Principle 10 reflects UN values. The Curriculum Framework is a more comprehensive pedagogical review (one that relates more directly to the theory and practice of education) than the rapid review process, which focuses specifically on three rapid review criteria linked to the UN values in the Curriculum Framework. According to UNRWA documents, UNRWA employed a multi-stage rapid review process to identify textbook content not aligned with UN values, and its efforts to address this content were ongoing as of November 2017. Figure 3 summarizes UNRWA's process. Complementary teaching materials are educational materials that UNRWA developed to use alongside host government textbooks to ensure that the lessons taught in UNRWA schools adhere to UN core values, such as neutrality, according to UNRWA officials.
UNRWA's Agency Task Force is composed of the Chief of Staff and headquarters officials from the departments of Education and Legal Affairs, according to UNRWA officials. The cascade training model involves training groups of individuals who in turn train other individuals. UNRWA has established strategic support units in the fields that train educational specialists who then train school principals and teachers using a cascade model, according to UNRWA officials. Professional support staff include field-level strategic support unit staff, education specialists, and Chiefs of the Field Education Programs, according to UNRWA officials. Appendix III: 2016-2017 Rapid Review, as Reported by the United Nations Relief and Works Agency for Palestine Refugees in the Near East (UNRWA) UNRWA reported that it has reviewed newly issued Palestinian Authority textbooks during three rapid review sessions since 2016 to identify content it deems not aligned with UN values and that it has developed complementary teaching materials to specifically address this content for any page with issues identified. Throughout the 2016-2017 school year, UNRWA reported reviewing pilot textbooks newly issued by the Palestinian Authority for grades 1 through 4 in two separate reviews. In August 2017, UNRWA reported reviewing the final textbooks for grades 1 through 4 for the first semester, pilot textbooks for grades 5 through 10 for the first semester, and English language textbooks funded with contributions from donor countries, including the United States, for grades 1 through 10 for the first semester. For the August 2017 review, UNRWA reported reviewing 75 textbooks (7,498 pages) in aggregate. Table 1 provides details on the number of textbooks and number of pages UNRWA reported reviewing between 2016 and 2017 for the textbooks used in its schools in the West Bank and Gaza. Table 2 provides detail on the academic subjects for which UNRWA reported reviewing Palestinian Authority textbooks in 2016 and 2017. Table 2. Select Academic Subjects for Which the United Nations Relief and Works Agency for Palestine Refugees in the Near East (UNRWA) Reported Reviewing Content of Palestinian Authority and English Language Textbooks, 2016-2017 (table not reproduced). Legend: a check mark indicates that UNRWA reviewed the textbook for this subject; N/A indicates not applicable because Palestinian Authority and UNRWA schools do not use these textbooks for the grades listed. Appendix IV: Textbook Content Issues Identified by the United Nations Relief and Works Agency for Palestine Refugees in the Near East (UNRWA) During its August 2017 review of textbooks for grades 1 through 10 for the first semester, UNRWA identified 203 issues covering a total of 229 pages (out of a total of 7,498 pages reviewed), the majority of which it identified as related to neutrality/bias. Specific details about the percentage of pages with issues UNRWA identified in relation to each of the three rapid review criteria and subjects, as well as the types and percentages of neutrality/bias issues UNRWA reported finding, were omitted because the information is classified.
Of the 203 issues UNRWA identified in the textbooks for the first semester of grades 1 through 10 for the 2017-2018 school year, UNRWA officials reported that they identified the largest number of issues in social studies textbooks (105 issues), followed by Arabic grammar (30 issues), Islamic education (20 issues), mathematics (18 issues), science and life (15 issues), English language (14 issues), and vocational education (1 issue). The 14 issues that UNRWA identified in the English language textbooks purchased by UNRWA for the first semester of grades 1 through 10 cover a total of 22 pages out of 664 textbook pages (3.3 percent), according to UNRWA officials. Of the 14 issues, UNRWA officials identified 10 as neutrality/bias issues and 4 as gender issues. The neutrality/bias issues that UNRWA identified include issues related to maps, Jerusalem, and the Islamic religion. Details about the neutrality/bias and gender issues that UNRWA identified and the complementary teaching materials it developed were omitted because the information is classified. UNRWA officials identified four examples in the English language textbooks for the first semester of grades 1 through 10 that show a lack of gender balance in sports, hobbies, and professions. In response, they developed complementary classroom discussion questions to discuss gender bias with UNRWA students. Details about the gender issues that UNRWA identified and the complementary teaching materials UNRWA developed were omitted from this report because they included classified information. Appendix V: Comments from the State Department Appendix VI: Comments from the United Nations Relief and Works Agency for Palestine Refugees in the Near East (UNRWA) Appendix VII: GAO Contact and Staff Acknowledgments <8. GAO Contact> Thomas Melito at (202) 512-9601 or melitot@gao.gov. <9. Staff Acknowledgments> In addition to the contact named above, Cheryl Goodman (Assistant Director), Jaime Allentuck (Analyst in Charge), Ashley Alley, Martin de Alteriis, and Lynn Cothern made key contributions to this report. Other contributors to this report include Neil Doherty, Mark Dowling, Aldo Salerno, and Mona Sehgal. Why GAO Did This Study
The U.S. government has funded education assistance to Palestinians. The State Department oversees U.S. contributions to UNRWA, and USAID provides assistance to Palestinian Authority schools. UNRWA generally administers schools for Palestine refugees. The Palestinian Authority generally administers schools for non-refugee Palestinians who live in the WBG. During the 2016-2017 school year, it issued new pilot textbooks for grades 1 through 4 for use in both its and UNRWA's schools. GAO was asked to review issues related to U.S. education assistance to the WBG.
This report examines (1) the funding the U.S. government provided for education assistance to the WBG for fiscal years 2015 through 2017, (2) how UNRWA and State have identified and addressed potentially problematic content in textbooks, and (3) whether State has submitted required annual reports to Congress including information on educational materials used in UNRWA schools. To address these objectives, GAO reviewed documents and interviewed U.S. government, UNRWA, and Palestinian Authority officials. For this report, GAO refers to potentially problematic content as that which State defined as inappropriate and that UNRWA defined as not aligned with UN values.
What GAO Found
The U.S. government funded an estimated $243 million for education assistance in the West Bank and Gaza (WBG) for fiscal years 2015 through 2017, including an estimated $193 million from the Department of State (State) and about $50 million from the U.S. Agency for International Development (USAID). Of State's contribution of approximately $193 million, the United Nations Relief and Works Agency for Palestine Refugees in the Near East (UNRWA) estimated that about $187 million was provided for its education assistance. State provided the remaining approximately $6 million for non-UNRWA education projects. UNRWA purchased English language textbooks used in UNRWA schools with funds that consist of contributions from donor countries, including the United States. The U.S. government and UNRWA did not fund textbooks published by the Palestinian Authority because the Palestinian Authority provided these textbooks free of charge, according to agency officials.
UNRWA and State have taken steps to identify and address potentially problematic content of textbooks used in UNRWA schools, such as maps that exclude Israel. UNRWA reviewed textbooks, including English language textbooks, and took actions to address content it deemed as not aligned with UN values. For example, UNRWA created complementary teaching materials, such as alternate photos, examples, and guidance for teachers to use with the textbooks in UNRWA schools. However, due to financial shortfalls and other constraints, UNRWA officials told GAO that UNRWA did not train teachers or distribute the complementary teaching materials to classrooms. As a result, these materials were not used in UNRWA classrooms. To address textbook content deemed problematic, State examined nongovernmental organizations' studies, encouraged Palestinian Authority officials to address the issue, and monitored UNRWA's efforts.
The annual appropriations acts for fiscal years 2015 through 2017 require State to report to Congress on several topics, including steps UNRWA has taken to ensure that the content of all educational materials taught in UNRWA schools is consistent with the values of human rights, dignity, and tolerance, and do not induce incitement. Although State submitted its required reports to Congress on time, State included inaccurate information in the 2017 report and omitted potentially useful information in all three reports. In its 2017 report, State noted incorrectly that UNRWA had completed training teachers and distributed complementary teaching materials to address textbook content that UNRWA deemed as not complying with UN values. In all three of the reports, State omitted information concerning whether UNRWA found that any educational materials used in its schools do not comply with two of four elements, dignity and not inducing incitement. Standards for Internal Control in the Federal Government states that management should use quality information to achieve the entity's objectives and communicate it in a way that is useful to users. Without a fuller explanation, Congress may not have the information it needs to oversee efforts to identify and address potentially problematic textbook content.
What GAO Recommends
GAO made four recommendations in its April 2018 report that State improve its reports to Congress, including to ensure the information presented is accurate and to provide additional information on the textbook content UNRWA identified as not aligned with UN values. State implemented all of GAO's recommendations.
DOL's executive retirement plan filing statement includes: the name and address of the employer; the employer identification number (EIN) assigned by the IRS; a declaration that the employer maintains a plan or plans primarily for the purpose of providing deferred compensation for a select group of management or highly compensated employees; and a statement of the number of such plans and the number of employees in each plan. In addition, plan administrators are required to provide plan documents to DOL upon request. <1.3. The Internal Revenue Code and Tax Treatment of Executive Retirement Plans> The Internal Revenue Code (IRC) provides preferential tax treatment for workplace retirement plans that meet certain qualification requirements set out in the IRC. The structure of tax incentives and certain limits on qualified retirement plans are intended to balance encouraging employers to establish and maintain voluntary, tax-qualified pension plans with ensuring lower-income employees receive an equitable share of the tax-subsidized benefits. Although executives may benefit from tax deferral under an executive retirement plan, these plans are not eligible for the same preferential tax treatment afforded to qualified retirement plans under the IRC. For the executive to be eligible for the tax deferral, an executive retirement plan must be an unfunded and unsecured company promise to pay benefits in the future. Generally, for an executive retirement plan to be considered unfunded and unsecured, the executive's rights to receive plan distributions will be no greater than the rights of a general unsecured creditor in the event of company bankruptcy or insolvency. Companies are not permitted to fund (i.e., set aside assets for the exclusive benefit of participants that are separate from company assets and beyond the reach of creditors) executive retirement plans while maintaining the benefits of tax deferral for executives. However, companies are able to informally fund executive retirement plans by transferring amounts to a trust that remains part of the company's general assets, often referred to as a Rabbi Trust, to help keep its promise to pay benefits. Because executive retirement plans are unfunded, executives' benefits in these plans can be subject to credit risk of non-payment, such as in the event of a company bankruptcy, according to IRS officials. The IRC provides rules regarding deferring compensation in executive retirement plans, including restrictions on the timing of distributions, restrictions on payment acceleration, and restrictions on the timing of deferral elections. At the time of deferral, the amount of compensation deferred under the plan is generally excluded from executives' income for tax purposes and is not tax deductible for the company (see fig. 1). During the deferral period, because any assets associated with the executive retirement plan remain company assets (and subject to creditor claims), the company is subject to applicable taxes on any investment earnings attributable to the assets. Executives are subject to federal income taxes on their executive retirement plan distributions when they are received.
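The timing described above (and shown in fig. 1) can be illustrated with a short, simplified calculation. The sketch below is illustrative only: it assumes hypothetical flat rates (21 percent corporate, 37 percent individual), a single deferral, and a fixed annual return, and it ignores payroll taxes, deduction limits, and the other IRC rules discussed in this section.

```python
# Minimal sketch (illustrative only, not an official computation): the timing of
# tax on a single amount deferred under an executive retirement plan, using
# hypothetical flat rates and a hypothetical annual return.

def deferral_tax_timing(deferred_pay, years, annual_return,
                        corporate_rate, individual_rate):
    # Year of deferral: the amount is generally excluded from the executive's
    # income, and the company does not yet take a deduction.
    executive_tax_at_deferral = 0.0

    # Deferral period: associated assets remain company assets, so the company
    # pays tax on investment earnings; the balance grows at an after-tax return.
    balance = deferred_pay
    company_tax_on_earnings = 0.0
    for _ in range(years):
        earnings = balance * annual_return
        tax = earnings * corporate_rate
        company_tax_on_earnings += tax
        balance += earnings - tax

    # Distribution: the executive pays income tax on the distribution, and the
    # company may then take its deduction (up to statutory limits).
    executive_tax_at_distribution = balance * individual_rate
    return (executive_tax_at_deferral,
            round(company_tax_on_earnings, 2),
            round(executive_tax_at_distribution, 2))

print(deferral_tax_timing(100_000, years=10, annual_return=0.05,
                          corporate_rate=0.21, individual_rate=0.37))
```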
However, if an executive retirement plan fails to meet the applicable requirements at any time during a taxable year, all of the compensation deferred, including investment earnings associated with the deferred compensation, is included in each executive s gross income for the taxable year to the extent it is vested, along with an additional 20 percent tax on the compensation to be included in gross income plus additional income tax. Companies must defer taking their tax deductions, up to statutory limits, for plan contributions they make until the executive is taxed on those benefits. <1.4. Additional Federal Regulatory Oversight> In addition to DOL s role under ERISA and IRS s role administering the IRC requirements related to executive retirement plans, other federal agencies may have roles related to executive retirement plans. For example, SEC requires public companies to provide an annual proxy statement that includes information on the amount and type of executive compensation including benefits from executive retirement plans paid to their Chief Executive Officer (CEO), Chief Financial Officer (CFO), and the next three most highly compensated executive officers. Other federal agencies that play a role with respect to qualified retirement plans, such as the PBGC, may monitor the status of executive retirement plans in certain circumstances, such as in bankruptcy proceedings involving a company with both an executive retirement plan and a qualified single-employer defined benefit plan (see table 1). <2. Most Large Public Companies Provide Their Top Executives with Executive Retirement Plans but the Federal Revenue Effects of these Plans Are Unknown> <2.1. Most Large Public Companies Provide Top Executives with Executive Retirement Plans> According to our analysis, more than 400 of the 500 largest U.S. public companies provided executive retirement plans to almost 2,300 top executives, totaling about $13 billion in accumulated plan benefits in 2017 (see fig. 2). Although DOL collects limited data on the prevalence of executive retirement plans, public companies subject to SEC reporting requirements for executive retirement plans must report the benefits provided to the Chief Executive Officer (CEO), Chief Financial Officer (CFO), and the next three most highly compensated executive officers. Industry experts we interviewed said that most large companies offer executive retirement plans to help executives and highly compensated employees save more for retirement because most executives have reached the contribution and income limits imposed on savings in qualified retirement plans. <2.2. Executive Retirement Plan Benefits Are Concentrated Among a Subset of Top Executives> Top executives at large public companies generally accumulated more executive retirement plan benefits than top executives at smaller companies. The most recent available data from 2017 show that the average accumulated plan benefit among the top five executives in large companies was about $5.7 million, about twice as much as their counterparts in smaller companies, where the average was about $2.8 million. The average and median accumulated plan benefits generally remained consistent for large and smaller companies from 2013 to 2017 (see fig. 3). In addition, our analysis showed that, among the top five executives at large public companies, accumulated plan benefits are concentrated among a subset of these top executives based on their job title, company contributions, and plan type. 
The average accumulated plan benefit among top executives in large companies was consistently greater than the median accumulated plan benefit from 2013 to 2017 (see fig. 3). For example, as of 2017, the average accumulated plan benefit among top executives was more than four times the median, indicating that a smaller subset of executives held plan benefits far greater than those of the majority of individual executives. <2.2.1. Total Accumulated Plan Benefits by Title> CEOs accumulated more executive retirement plan benefits than the next four highest compensated executives. As of 2017, the CEOs had accumulated, on average, about $14 million in executive retirement plan benefits. In contrast, CFOs had accumulated, on average, about $3 million, and the next three most highly compensated executive officers with other titles accumulated an average of about $3.4 million in plan benefits. Our analysis also showed that, for each of the three job title categories (CEO, CFO, and the next three most highly compensated executive officers), the average accumulated plan benefits were at least twice the median amount from 2013 to 2017 (see fig. 4). <2.2.2. Plans with Company Contributions> From 2013 to 2017, about 80 percent of large companies that offered an executive retirement plan made company contributions to the plan. As of 2017, the average accumulated plan benefit for top executives among companies providing company contributions was more than $6.5 million. This was more than twice the average of nearly $2 million for executives in the remaining roughly 20 percent of companies whose executive retirement plans did not include company contributions. Our analysis showed that plan benefits are also concentrated among a subset of executives, as the average amount of accumulated plan benefits for executives in plans that received company contributions was several times greater than the median from 2013 to 2017 (see fig. 5). <2.2.3. Executives with Defined Benefit Plans> The top five executives with defined benefit executive retirement plans generally accumulated more plan benefits than those with defined contribution executive retirement plans alone. As of 2017, about 30 percent of large companies that sponsored an executive retirement plan offered a defined benefit plan, as compared with about 70 percent that only offered a defined contribution plan. In 2017, the top five executives at large companies with a defined benefit plan had accumulated plan benefits of nearly $9 million on average, more than twice the average of about $4.4 million for top five executives with defined contribution executive retirement plans alone. Our analysis showed that plan benefits are concentrated among a subset of executives, as the average accumulated plan benefit for top five executives with a defined benefit plan was several times more than the median from 2013 to 2017 (see fig. 6). However, industry experts told us the number of companies offering defined benefit executive retirement plans has declined over time. <2.3. Executive Retirement Plans Can Offer Executives Tax, Savings, and Financial Planning Advantages> Executive retirement plans can help executives reduce their potential tax liability, increase retirement savings, and provide financial planning advantages through: (1) tax substitution of investment earnings, (2) additional company compensation for investment earnings, (3) additional company compensation for personal income taxes, and (4) allowable distributions during working years.
<2.3.1. Tax Substitution of Investment Earnings> Treasury officials and some industry experts told us that executives who participate in executive retirement plans may be able to reduce their potential federal tax liability on plan investment earnings and increase their savings because these plans substitute the executive's applicable individual tax rate on investment earnings with the company's corporate tax rate (see fig. 7). In an executive retirement plan, the company defers compensation for the executive, but investment earnings on associated assets during the deferral period are taxed to the company at the company's applicable corporate tax rate (see "Executive defers compensation" at the top of fig. 7). In contrast, the executive who chooses not to defer compensation and instead takes the current compensation (paying income taxes) and invests the balance will pay taxes on investment earnings at the individual tax rate (see "Executive does not defer compensation" at the bottom of fig. 7). The actual taxes paid under either scenario (deferring compensation or not) will depend on a number of factors, including the type of investments, if any, selected by the executive or the company, length of time invested, and applicable tax rates. For example, an executive who does not defer compensation and invests outside of the plan might select investments that are expected to produce long-term capital gains, which are taxed at lower individual rates than short-term capital gains. This same executive, if deferring compensation through the plan, might elect to invest in short-term bonds or investment earnings based on a market interest rate, which are taxed at a lower corporate tax rate inside the plan than outside. As another example, a company might invest deferred compensation in a tax-favored vehicle such as corporate-owned life insurance. According to Treasury officials and some industry experts, by participating in an executive retirement plan, executives may be able to effectively reduce their potential federal income tax liability during the deferral period because investment earnings on associated plan assets are taxed at the company's corporate rate that may be lower than the executive's individual tax rate. This tax substitution of investment earnings may allow the plan account to grow over time at a higher rate of investment return than if an executive invested in the same or similar assets outside the plan. Further, any such tax advantages may allow companies to reduce their total compensation costs. Conversely, Treasury officials told us the IRC may effectively disadvantage executive retirement plans to the extent the tax on an executive's investment earnings outside the plan is lower than the tax the company would pay if invested through the plan. In this circumstance, the tax disadvantage may increase the cost of companies' total compensation. However, our analysis of tax rates suggests that the corporate tax rate may be lower than the individual tax rate on several forms of investment income. In this case, the company may be able to achieve a higher after-tax rate of return on investments than the executive can, depending on the type of investment and amount of time invested. The lower the applicable corporate tax rate is relative to the applicable individual tax rate, the greater the tax benefit for the executive or the company.
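One simplified way to see the mechanism in figure 7 is to compare the two paths directly. The sketch below uses hypothetical flat rates and a constant return, treats all earnings as ordinary income each year, and ignores capital gains treatment and the other factors noted above; it illustrates the direction of the effect rather than estimating actual outcomes.

```python
# Minimal sketch, not a revenue estimate: compares the after-tax amount an
# executive ends up with when compensation is deferred through the plan
# versus taken as current pay and invested personally. All rates, returns,
# and horizons are hypothetical assumptions.

def defer_through_plan(pay, years, r, corp_rate, indiv_rate):
    # Earnings on associated company assets are taxed at the corporate rate,
    # so the notional balance compounds at the company's after-tax return.
    balance = pay * (1 + r * (1 - corp_rate)) ** years
    return balance * (1 - indiv_rate)        # executive taxed at distribution

def take_pay_and_invest(pay, years, r, indiv_rate):
    # Pay is taxed up front; annual earnings are taxed at the individual rate.
    balance = pay * (1 - indiv_rate)
    return balance * (1 + r * (1 - indiv_rate)) ** years

pay, years, r = 100_000, 20, 0.06
deferred = defer_through_plan(pay, years, r, corp_rate=0.21, indiv_rate=0.37)
not_deferred = take_pay_and_invest(pay, years, r, indiv_rate=0.37)
print(f"defer: {deferred:,.0f}  vs  no deferral: {not_deferred:,.0f}")
# With a corporate rate below the individual rate, deferral leaves more after
# tax, and the gap widens with longer horizons and higher returns.
```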
Treasury officials and some industry experts told us that, in this scenario, the potential tax advantage resulting from tax substitution of investment earnings is effectively a federal subsidy because the federal government receives less in tax revenue. And due to the effects of compounding, the tax advantage is also greater the longer the deferral period (and the higher the investment return). Treasury officials, and experts whose published work we reviewed and whom we interviewed, told us the potential effective federal tax subsidy for executive retirement plan investment earnings can be greater when companies have effective tax rates that are lower than statutory tax rates. This can occur, for example, when a company's losses from the current year or losses carried over from prior years offset all other company income, including any investment earnings associated with its executive retirement plan. In these instances, the federal government could effectively subsidize the plan investment earnings because it receives no taxes on those earnings until funds are distributed. <2.3.2. Additional Compensation for Investment Earnings> Companies also provide executives with additional executive retirement plan compensation that increases their overall savings by not passing along taxes paid on investment earnings during the deferral period, according to Treasury officials and some industry experts. In this scenario, a company's assets associated with the executive retirement plan are reduced for taxes it pays on investment earnings, but the executive's corresponding plan account balance is unaffected by tax because the company provides the executive with additional plan compensation in the same amount as the taxes the company pays. Unaffected by taxation on investment earnings, the account balance accumulates over time at a pre-tax investment rate of return, rather than at the company's potentially lower after-tax investment rate of return, until those funds are distributed to the executive. In this manner, the additional compensation provided by the company allows the account balance of an executive retirement plan to accumulate in the same way as in a qualified defined contribution retirement plan (e.g., a 401(k) plan). The additional compensation can result in a substantial benefit for an executive, and due to the effects of compounding, the benefit is greater the longer the deferral period (and the higher the investment return). <2.3.3. Additional Compensation for Personal Income Taxes> Lastly, industry experts said some companies provide additional executive retirement compensation to pay for the personal income taxes that executives expect to pay when plan benefits are distributed. This practice is known as a tax gross-up because the company increases the amount of gross, or pre-tax, executive retirement plan benefits to pay for the executive's anticipated income taxes at distribution. As a result, the executive effectively receives the total amount of the initial pre-tax benefit at distribution. For example, a company that wants an executive who is in the 37 percent income tax bracket to receive $1,000 from the plan on an after-tax basis would provide an additional $588 in plan compensation (for a total of $1,588) to cover the executive's anticipated taxes at distribution.
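The gross-up arithmetic in that example is simply the target after-tax amount divided by one minus the tax rate. A minimal sketch using the same illustrative figures (which the report rounds to $588 and $1,588):

```python
# Minimal sketch of the gross-up arithmetic: to deliver a target after-tax
# amount, the pre-tax benefit is grossed up by 1 / (1 - tax rate).

def gross_up(target_after_tax, tax_rate):
    total = target_after_tax / (1 - tax_rate)      # total pre-tax benefit
    return total, total - target_after_tax         # (total, additional amount)

total, additional = gross_up(1_000, 0.37)
print(f"additional: ${additional:,.2f}, total: ${total:,.2f}")
# additional: $587.30, total: $1,587.30 (about $588 and $1,588 after rounding)
```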
Treasury officials said that while tax gross-ups and other similar executive compensation practices provide an economic benefit to executives, companies' use of such practices to offset executives' tax burden is a corporate governance issue for shareholders to decide, and that tax law does not address their appropriateness. Some industry experts told us that it has become less common for public companies to offer tax gross-ups, mostly due to shareholder concerns about their appropriateness in light of required public disclosures. <2.3.4. Plan Distributions during Working Years> Executive retirement plans can also provide executives with financial planning benefits through allowable distributions during their working years. Treasury officials and industry experts said that while executive retirement plans are intended for retirement purposes, plans typically also allow executives to take distributions while still working. These distributions generally are allowed if they comply with applicable statutory requirements. Industry experts told us that executives can align distributions during their working years with income needs, such as to pay for a child's college expenses, or for specific goals, such as buying a home. Industry experts said that the ability to structure pre-retirement distributions can allow executives to smooth out their overall income over time to better coordinate use of other income sources during their working years and retirement, which they said can lead to overall tax savings. <2.4. Federal Revenue Effects of Executive Retirement Plans Are Unknown> Executive retirement plans can provide tax advantages that may have revenue effects for the federal government, but the extent of those effects currently is unknown. Treasury is responsible for providing economic analysis and revenue estimates of tax legislation for the executive branch, and Treasury officials said that the Congressional Joint Committee on Taxation prepares official revenue estimates of all tax legislation considered by the Congress. Treasury officials told us that while executive retirement plans do not receive the preferential tax treatment afforded to qualified retirement plans, these arrangements can result in tax advantages that may have revenue effects for the federal government. These officials explained that executive retirement plans are tax revenue neutral when corporate tax rates and individual tax rates (or taxes paid) are the same because the federal government would generally receive the same amount of taxes regardless of the executive's decision to defer compensation. Treasury officials also told us that executive retirement plans could have federal revenue effects to the extent corporate and individual tax rates (or taxes paid) diverge from each other. <3. Bankruptcies Reviewed Resulted in Various Expected Outcomes for Executive Retirement Plan Benefits> <3.1. Executive Retirement Plan Participants' Expected Benefit Losses and Recoveries Varied Across Company Bankruptcies Reviewed> Among the 38 Chapter 11 corporate bankruptcy cases we reviewed, 30 cases showed that participants in executive retirement plans expected to receive general unsecured creditor status when settling their plan benefit claims. As general unsecured creditors, executives in these plans are part of what is typically the last creditor class to be paid in bankruptcy, and only if funds remain after claims from all other creditors with payment priority have been paid in full (see fig. 8).
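Why recoveries on these claims vary so widely follows from the payment-priority structure just described. The sketch below is a highly simplified, hypothetical distribution waterfall; the creditor classes and dollar amounts are illustrative assumptions, not figures from the cases GAO reviewed, and real cases involve more classes, negotiated plans, and court approval.

```python
# Highly simplified sketch (not bankruptcy law advice): distributes an estate's
# value through creditor classes in payment-priority order. Deferred
# compensation claims treated as general unsecured claims are paid last and
# pro rata, which is why recoveries can range from near zero to 100 percent.

def waterfall(estate_value, classes):
    """classes: list of (name, total_claims) in payment-priority order."""
    remaining = estate_value
    results = {}
    for name, claims in classes:
        paid = min(claims, remaining)
        results[name] = paid / claims if claims else 1.0
        remaining -= paid
    return results

classes = [                      # hypothetical amounts, in millions
    ("secured", 400),
    ("administrative/priority", 50),
    ("general unsecured (incl. plan benefits)", 300),
]
for estate in (350, 500, 800):
    rec = waterfall(estate, classes)
    pct = rec["general unsecured (incl. plan benefits)"] * 100
    print(f"estate {estate}M -> unsecured recovery {pct:.0f}%")
# estate 350M -> 0%; 500M -> ~17%; 800M -> 100%
```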
Our review of bankruptcy cases showed that executives expected losses and recoveries varied among the 30 Chapter 11 cases we reviewed where all or some plan participants were expected to receive general unsecured creditor status for their plan benefit claims (see fig. 9). In 21 of the 30 cases, plan participants were expected to sustain losses of more than 75 percent of their plan benefit claims, and in 17 of these 21 cases, participants were estimated to lose 90 percent or more. However, the remaining nine cases showed that participants were expected to recover more than half of their plan benefit claims with six of those cases expecting a full recovery and one case expecting a 99 percent recovery. Companies generally file for bankruptcy when they do not have sufficient assets to pay off their debts. Bankruptcy and industry experts said that executive retirement plan participants as general unsecured creditors may expect to sustain a significant or even a total loss of their deferred compensation in a company bankruptcy. However, bankruptcy and industry experts noted that the level of losses or recoveries depends on the facts and circumstances of each case, including the type of bankruptcy the company filed. Our review of bankruptcy cases showed differences in expected benefit losses and recoveries based on whether the bankrupt company intended to continue to operate by filing a reorganization plan or sell all of its assets to pay creditors by filing a liquidation plan. Among the 30 Chapter 11 bankruptcy cases where participants in executive retirement plans were expected to receive general unsecured creditor status, 14 filed a reorganization plan and 16 filed a liquidation plan. <3.1.1. Reorganization> Among the bankruptcy cases we reviewed, executives were generally estimated to sustain less severe claims losses and recover more of their plan benefits if their company filed a reorganization plan to continue to operate and restructure its debts. In seven of 14 reorganization cases we reviewed, executive retirement plan participants were estimated to recover about 80 percent or more of their plan benefit claims, with participants in six of those cases expected to fully recover their benefits. In contrast, participants in the remaining seven of 14 cases were estimated to sustain benefit claims losses of about 20 percent or more, with participants in five cases expected to lose 90 percent or more. Industry experts told us plan participants are more likely to sustain fewer losses when their bankrupt company reorganizes because it has a plan to emerge from bankruptcy and pay its debts as it continues to operate. Bankruptcy and industry experts noted that in some reorganization cases, general unsecured creditors can receive full recoveries. <3.1.2. Liquidation> Executives were generally estimated to sustain greater plan benefit claim losses if their company filed a liquidation plan. In 15 of 16 liquidation cases we reviewed, executive retirement plan participants were estimated to sustain losses of nearly 50 percent or more of their plan benefit claims. Participants in the remaining case were expected to nearly fully recover their benefits. Industry experts told us that whether a company has a viable post-bankruptcy future affects its ability to fulfill its debt obligations, including paying promised plan benefits to executive retirement plan participants. 
Bankruptcy experts said the severity of plan benefit claims losses for participants is generally greater when a bankrupt company liquidates because it signals the end of a company and is a last resort after it has exhausted all other options to restructure its debts and continue to operate. <3.2. Executive Retirement Plan Benefits Were Expected to be Maintained in Some Bankruptcies Reviewed Where Participants Were Not Expected to Receive General Unsecured Creditor Status> Among the 38 Chapter 11 bankruptcy cases we reviewed, 11 involved the situation where all or some of the executive retirement plan participants were not expected to receive general unsecured creditor status for their benefit claims. Although the circumstances varied among these 11 cases, the expected outcome was that some of these participants' plan benefits, which were accrued at or around the time the company filed for bankruptcy, were expected to be preserved or paid. <3.2.1. Reorganization> Among the 11 cases we reviewed in which executive retirement plan benefits were expected to be maintained, eight occurred with a bankrupt company that filed a reorganization plan. In three of the eight cases, benefits for all plan participants were expected to be preserved; in five cases, participants were divided into different groups where some were expected to have their benefits preserved and others were not. Bankruptcy and industry experts said that paying plan benefit claims in a bankruptcy often depends on the financial health of the company and the value of the executive to the future of the company. These experts also said that not all executive retirement plan participants receive the same treatment for their claims. These experts added that a common scenario is to preserve in some manner the benefits for key executives who are retained, while giving executives who are not retained, or former executives no longer with the company, less favorable treatment as general unsecured creditors. Industry experts also told us that some executive retirement plan participants' benefits may be preserved, or the participants may be provided with more favorable treatment, because they are key executives who need to be retained to help ensure their company successfully reorganizes and emerges from bankruptcy. These experts explained that key executives may not be willing to risk staying on without assurances that accrued plan benefits will be preserved or made up in some manner. Bankruptcy and industry experts said that because key high-level executives can be integral to the success of a company reorganization, its major creditors are more likely to agree to preserve plan benefits for them because it will likely result in increased overall recoveries and greater benefits for their stake in the company. Lastly, bankruptcy and industry experts said that in order for bankrupt companies to retain key executives, they typically need to provide assurances that, in addition to executive retirement plan benefits, executives will receive other forms of compensation. Bankruptcy and industry experts noted that because various forms of executive compensation may be interchangeable to the executive, informal agreements may be arranged so that executive retirement plan benefit losses that may occur as a general unsecured creditor are made up through other forms of compensation. However, they told us these types of arrangements are not discernible from bankruptcy filings. <3.2.2. Liquidation> In three of the 11 cases we reviewed in which executive retirement plan benefits were expected to be preserved, the companies filed a Chapter 11 liquidation plan. Court filings indicated executive retirement plan participants in two of the three cases received distributions shortly before the company filed bankruptcy. In one case, the bankruptcy estate chose not to seek to recover those funds, despite restrictions on early distributions before a bankruptcy, in part because the costs to recover the monies outweighed the benefits. Bankruptcy and industry experts said that while there are restrictions and penalties for early distributions before a bankruptcy, the costs and time associated with suing to recover monies can discourage bankruptcy estates from pursuing legal action. <4. Opportunities Exist to Strengthen Agency Oversight Efforts to Protect Benefits and Prevent Ineligible Employees from Participating in Executive Retirement Plans> <4.1. IRS Provides Little Oversight of Companies with Executive Retirement Plans during a Restricted Period> IRS oversees executive retirement plans for compliance with the IRC during audits of companies that offer such plans. The Pension Protection Act of 2006 amended the IRC to provide that, during a restricted period, which includes bankruptcy, if a company that sponsors a qualified single-employer defined benefit plan sets aside or reserves assets in a trust for the purposes of paying nonqualified deferred compensation (which includes executive retirement plan compensation) to applicable covered employees (key executives), the key executives are required to include the amount of assets in their gross income for the taxable year. A restricted period is defined as: (1) any period in which the plan sponsor is a debtor in bankruptcy; (2) any period when the qualified single-employer defined benefit plan of the company is in at-risk status; or (3) the 12-month period that begins 6 months before the date the qualified single-employer defined benefit plan is terminated if, as of the termination date, the plan's assets are not sufficient to cover benefit liabilities. In general, a company's qualified single-employer defined benefit plan is in at-risk status if it is less than 80 percent funded. As part of its oversight effort, IRS officials said that its examiners can use IRS's Nonqualified Deferred Compensation Audit Techniques Guide (the guide) to audit these plans for compliance with the IRC, including the relevant provision, which was added by the Pension Protection Act of 2006. The guide describes the requirements in section 409A of the IRC related to deferred compensation set aside during a restricted period. While the guide is designed to provide guidance for IRS employees, the guide is publicly available and also useful for businesses and tax professionals who prepare returns. However, the guide does not instruct examiners or other users on how to determine compliance with the relevant provision. For example, the guide does not instruct examiners or other users to determine if the company has set aside assets such as by making contributions of funds to a Rabbi Trust to pay deferred compensation during bankruptcy. It also does not require examiners or other users to obtain data sufficient to determine whether there exists a restricted period with respect to the company's qualified single-employer defined benefit plan.
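The three-part restricted period definition above is mechanical enough to illustrate directly. The following minimal sketch encodes it under simplifying assumptions; the data fields, dates, and the 80 percent at-risk shorthand are illustrative, and the sketch is not IRS audit tooling and omits the detailed funding-status rules.

```python
# Minimal sketch (illustrative only) of the three-part "restricted period"
# definition described above; inputs are hypothetical data fields.
from datetime import date, timedelta

def in_restricted_period(as_of, in_bankruptcy, funded_pct,
                         termination_date=None, underfunded_at_termination=False):
    # (1) the plan sponsor is a debtor in bankruptcy
    if in_bankruptcy:
        return True
    # (2) the qualified single-employer defined benefit plan is in at-risk
    #     status (generally, less than 80 percent funded)
    if funded_pct < 0.80:
        return True
    # (3) the 12-month period beginning 6 months before an underfunded
    #     plan termination
    if termination_date and underfunded_at_termination:
        start = termination_date - timedelta(days=182)
        if start <= as_of <= start + timedelta(days=365):
            return True
    return False

print(in_restricted_period(date(2019, 6, 1), in_bankruptcy=False, funded_pct=0.75))  # True
```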
Lastly, it does not provide instructions regarding the type of data to collect or questions to ask to determine whether a company s defined benefit plan is in a restricted period. When asked if additional instructions were available to examiners on auditing companies with these plans for compliance with the relevant provision, IRS officials pointed us to sections of the Internal Revenue Manual (IRM), IRS s primary source of instructions to staff, and other internal training manuals. However, we found no specific instructions in these sources related to the relevant IRC provision or its oversight. IRS officials said examiners can also review SEC filings to determine whether there exists a restricted period with respect to a company s qualified single-employer defined benefit plan. However, SEC filing requirements do not apply to many privately-held companies, limiting the usefulness of this information source for IRS audit examiners for this purpose. IRS officials also said that Form 5500, Annual Return/Report of Employee Benefit Plan, and the attached schedules are available on the DOL website and that examiners can download and review these data during their examinations. For example, officials said information on the 5500 Form s Schedule SB, Single-Employer Defined Benefit Plan Actuarial Information, can be used to verify the income tax deduction for contributions to pension plans. Specifically, the schedule s Item 4 box, Part I Basic Information, will be marked if the plan is in at-risk status. The form, however, does not capture whether companies set aside assets for the purpose of paying deferred compensation or elicit information about a company s bankruptcy. Moreover, the IRM, the guide, and the IRS training manuals provide no instruction to examiners regarding how to review this information during audits of companies with executive retirement plans. IRS also may be able to use non-confidential information that PBGC collects to monitor the financial condition of companies that sponsor single-employer defined benefit plans. In its capacity to provide plan termination insurance, PBGC monitors single-employer defined benefit plans including companies financial condition and plans at-risk status through a variety of reporting requirements and initiatives. For example, because PBGC represents itself and the pension plan and participants as a creditor when companies (publicly and privately-held) sponsoring single-employer defined benefit plans file for bankruptcy, it is aware of such bankruptcy filings. PBGC also uses data that companies are required to report on Form 5500, describing the assets and liabilities of their single-employer defined benefit plans, to identify when a defined benefit plan is underfunded or in at-risk status. IRS may be able to use the timely, non-confidential information PBGC possesses to help IRS identify whether companies with single-employer defined benefit plans are setting aside assets for the purpose of paying deferred compensation under an executive retirement plan during a restricted period. Federal standards for internal control require federal agencies to obtain and use quality information and to communicate this information to internal and external parties that can help the agency achieve its objectives and address related risks. 
Without providing specific instruction to its examiners to collect and evaluate information that describes company actions relative to this requirement (which limits tax deferral for key executives for amounts deferred under an executive retirement plan and set aside by the company during a restricted period), IRS cannot sufficiently determine if companies are including these amounts in the executives' gross income as required by the IRC provision. Without taking steps to improve the sufficiency of its audit instructions to help strengthen its oversight, IRS cannot know if companies are reporting the correct amount of income for taxation for these key executives and if the correct amount of tax is being paid by the executives in these instances. IRS also may not be collecting additional taxes and interest due from key executives who participate in executive retirement plans. Absent improved IRS oversight in this area, companies may be failing to report assets set aside to pay deferred compensation to key executives while in a restricted period as income for these employees. To the extent some companies are failing to report this income, they may continue to do so at the cost of foregone federal tax revenues while lacking an important incentive from IRS to cease this practice. <4.2. Required DOL Reporting on Executive Retirement Plans Does Not Include Complete and Timely Data on Employee Participation> Another aspect of executive retirement plan oversight is ensuring that only eligible executives are allowed to participate, since these plans are excluded from most of ERISA's substantive protections. DOL requires companies to report on their executive retirement plans, but the reporting lacks important information that could allow the agency to identify plans that may be including ineligible employees. Currently, under its alternative reporting method regulation, DOL requires the administrator of the executive retirement plan, typically the sponsoring company, to submit a one-time single page filing statement within 120 days of the executive retirement plan being established to satisfy ERISA reporting requirements (see fig. 10). According to DOL officials, no other filings are required for executive retirement plans to comply with Part 1 of Title I of ERISA. The information provided in the filing statement does not describe the job title or salary of executives participating in the plan, the percentage of the company's workforce that is eligible to participate, or the actual percentage of employees who participate in the plan; nor does it compare the salaries of executives with rank-and-file workers. Because DOL only requires companies to submit the filing statement once, within 120 days of plan formation, the agency is not aware when participation in the plan changes over time or if plans are terminated. When asked if these additional data would be useful to the agency, one DOL official said that they could be used to increase oversight of executive retirement plans. For example, the official said if the filing statement included the percentage of the company's workforce that participated in such a plan, a high participation percentage could signal to DOL that the company might be permitting employees to participate in the plan who do not meet the select group requirements, and that such information could prompt a DOL audit. However, the DOL official said the agency would need to evaluate how the data would be used and the collection costs before determining the data's overall value.
The preamble to DOL's regulation states that the agency chose to require limited reporting because these plans are for executives who generally have access to information concerning their rights and obligations under the plan and do not need ERISA protections. Moreover, DOL officials said there is no statutory requirement specifically directing the agency to collect executive retirement plan data and no requirement for companies to file an amended filing statement to report substantive plan changes. However, ERISA authorized DOL to prescribe an alternative method of reporting, and the agency chose to require a limited one-time single page filing statement for executive retirement plans. DOL officials said the data currently collected can only be used for simple analysis or to facilitate the agency's ability to respond to requests from Congress, the media, or the public. This limited usefulness regarding eligibility is due to the age and limits of the original data submitted. However, officials told us there currently is no plan to place executive retirement plan reporting on DOL's regulatory project agenda. Federal standards for internal control state that agencies should (a) use quality information to achieve their objectives; (b) obtain data from reliable sources in a timely manner based on identified information requirements; and (c) process the data into quality information (information that is appropriate, current, complete, accurate, accessible, and timely) to support their internal control systems. Without reviewing or clarifying its reporting requirements to allow the agency to collect more useful information on executive retirement plans, DOL will continue to lack insight into the composition of these plans and, as a result, may be missing opportunities to ensure that companies with executive retirement plans are meeting the eligibility requirements for the plan. <4.3. Experts Have Indicated Companies are Often Unclear on How to Establish Executive Retirement Plan Eligibility> Many industry experts we spoke to said that eligibility requirements for executive retirement plans are not clearly defined and that companies are unclear on how to establish eligibility. DOL has acknowledged that, at least in one case, a company may have denied ERISA protections to rank-and-file employees by allowing them to participate in executive retirement plans. DOL officials also said the agency has issued guidance on the executive retirement plan provisions in ERISA. For example, DOL pointed us to Advisory Opinion 90-14A, which DOL officials said is the agency's most recent advisory opinion on provisions related to plan participant eligibility. The Advisory Opinion restates that executive retirement plans are excluded from most of ERISA's substantive protections and describes DOL's view that the term "primarily," as used in the statute, refers to the purpose of the plan (the benefits provided) rather than the participant composition of the plan (see fig. 11). The Advisory Opinion further states DOL's view that executive retirement plans that include employees who are not from a select group of management or highly compensated employees would fail to constitute a select group under ERISA, which would subject the plan to all of the requirements of Title I. Despite the information in the Advisory Opinion, several industry experts expressed the view that DOL's current policy lacks specific information on the factors companies should consider when establishing eligibility for participation in these plans.
Recent industry surveys we reviewed have suggested some companies may be extending employee eligibility to a relatively high percentage of their workforce in some cases, more than 30 percent and to relatively lower-paid or lower-ranked employees. For example, results from a recent survey of executive retirement plan sponsors suggested that just over 8 percent of respondents offer eligibility to between 20 to 30 percent of their workforce and just over 4 percent offer eligibility to more than 30 percent of their employees. Further, over 20 percent of respondents indicated that over 15 percent of their workforce was considered highly compensated employees and eligible to participate in an executive retirement plan. Industry experts pointed to court cases that they identified as contributing to the confusion regarding executive retirement plan eligibility, including cases that have suggested a limit on the percentage of employees who may participate in an executive retirement plan and still constitute a select group. Several industry experts suggested that DOL could help to address this issue in the future by providing a safe harbor that describes limits or thresholds companies could follow to establish eligibility. Two industry experts identified a range of possible information DOL could provide, such as a ceiling on the percentage of the company s workforce permitted to participate, job titles that could be eligible for participation, or a compensation threshold. Industry experts also suggested more detailed information on factors to consider for eligibility, rather than a one-size-fits-all design, would help to ensure the information would be flexible enough for a variety of companies to apply. We asked DOL officials about issuing clarifying information on the statutory requirements under ERISA for eligibility into these plans. DOL officials stated that the agency has the authority to do so but has no plans to issue guidance because it has not encountered eligibility problems during plan audits and enforcement actions. Rather, DOL officials said that in light of resource constraints, other high priority guidance projects, and the absence of systematic abuses involving these plans, it does not believe it advisable to shift resources from other projects to undertake a guidance project in this area. DOL officials said the agency no longer renders decisions on the status of select group eligibility for executive retirement plans in advisory opinions or in response to external inquiries because such determinations involve factual questions that are not well suited to an advisory opinion or informal participant assistance process. Federal standards for internal control require federal agencies to communicate quality information externally through reporting lines so that external parties can help the entity achieve its objectives and address related risks. By exploring ways it may be able to help reduce the incidence of ineligible employees participating in executive retirement plans, DOL could help ensure ineligible rank-and-file employees are not participating in these plans and are receiving the applicable protections under ERISA. One such way may be by providing information to companies on factors to consider when determining a select group to aid companies in establishing plan eligibility. A related issue that companies can face is dealing with eligibility decisions that turn out to be in error. 
DOL officials told us they have not issued any guidance on how companies are to correct eligibility errors found in executive retirement plans. Officials referred us to a 2015 amicus brief DOL filed in a particular case that described the department s views on how companies might consider addressing eligibility errors. The amicus brief suggests that the company could modify the plan to exclude the ineligible rank-and-file employees and award them the full vesting and other protections under ERISA while maintaining the plan s status under ERISA as an executive retirement plan for those executives who do qualify. However, the amicus brief states that DOL took no position on the form of equitable relief appropriate under ERISA to redress an employer s violation of vesting requirements by including rank-and-file employees in an executive retirement plan. The amicus brief also suggests that this approach would avoid providing a windfall gain to executives who properly could have been included in such a plan, because they possess sufficient bargaining power to protect their rights, and are not the intended beneficiaries of the substantive provisions under Parts 2, 3, and 4 of Title I of ERISA. When asked about this remedy, DOL officials said that funds from the executive retirement plan could be distributed to a qualified retirement plan for rank-and-file employees, with their benefits immediately fully vested and receiving ERISA protections. When we discussed the possible remedy described in the amicus brief with IRS officials, they said that while 409A regulations were being drafted, they were aware that applying strict distribution rules could have adverse tax consequences for rank-and-file employees participating in executive retirement plans. IRS officials said that removing these employees from these plans and awarding them full vesting of their benefits under Title I of ERISA could violate section 409A, raising concerns that the possible remedy noted in DOL s amicus brief may be inadequate for companies seeking a method to correct plan errors. Officials also said that there are certain exceptions under section 409A when accelerated payments may be permitted; however, IRS officials said there is no current exception permitting an accelerated payment to be made to a rank-and-file employee in order to correct a violation of Title I of ERISA. IRS officials said they are willing to work with DOL to promulgate new section 409A regulations to create an exception to the accelerated payment rule for plans that seek to remove ineligible rank-and-file employees from the plan and make distributions to an employee s qualified retirement plan in order to maintain the plan s ERISA exemption. However, IRS officials said that prescribing corrective action in these situations is under DOL s purview and that DOL first would need to further delineate the meaning of an executive retirement plan employee and then decide the proper approach for removing ineligible rank-and-file employees from a plan before any new regulations under section 409A could be considered. As mentioned above, federal standards for internal control require federal agencies to externally communicate necessary quality information to achieve their objectives. 
Without additional information from DOL on what companies can do to reduce the incidence of ineligible rank-and-file employees participating in these plans, some ineligible employees may continue to participate in some instances, potentially subjecting them to unexpected tax consequences, such as accelerated payment of their deferred compensation if they are removed from the plan. Further, without knowing how to properly remove ineligible rank-and-file employees when they are found participating in executive retirement plans, companies may be uncertain about how to re-establish an executive retirement plan's exemption from the substantive provisions of Title I of ERISA for otherwise eligible participants. <5. Conclusions> Although executive retirement plans are an important retirement savings vehicle for corporate executives and other highly compensated employees, little is known about certain key aspects of these arrangements. While some federal regulatory data exist on plans provided to the top five executives of publicly owned companies, information about the design, participation, and benefits provided under plans offered by privately owned companies or offered to employees beyond the top five executives is largely unknown, as is their net revenue effect on the federal government. In addition, IRS has neither taken steps nor collected adequate information to know whether companies under audit with a qualified single-employer defined benefit plan are setting aside assets for the purpose of paying benefits deferred under executive retirement plans while the companies are in at-risk status, a practice the law intended to discourage. Through effective oversight, IRS can help ensure that it is collecting the appropriate amount of income taxes as a result of this potential practice. Another important consideration with respect to executive retirement plans is their potential to permit ineligible rank-and-file employees to participate, thereby leaving such employees without the protections of ERISA. Little information is available at the federal level about who is included in executive retirement plans because companies provide minimal information to DOL, and only once, when they implement such a plan. By revisiting its reporting requirements, DOL can help ensure that only executives who can bear the risks inherent in these plans are permitted to participate. DOL has other opportunities to diminish this risk by providing assistance to companies, such as additional information describing plan eligibility, which could help companies reduce the incidence of rank-and-file employees participating in these plans. In addition, DOL can provide direction that companies can follow to remove rank-and-file employees found participating in these plans so that their benefits are protected, and it can coordinate with IRS so that these employees do not incur unexpected tax consequences that could result from erroneous inclusion in an executive retirement plan. <6. Recommendations for Executive Action> We are making a total of four recommendations, including one to IRS and three to DOL.
The IRS Commissioner should develop specific instructions within the Internal Revenue Manual, the Nonqualified Deferred Compensation Audit Techniques Guide, or other IRS training material to aid examiners in obtaining and evaluating information they can use to determine whether there exists a restricted period with respect to a company with a single- employer defined benefit plan and if a company with a single-employer defined benefit plan has, during a restricted period, set aside assets for the purpose of paying deferred compensation under an executive retirement plan. (Recommendation 1) The Secretary of Labor should review and determine whether its reporting requirements for executive retirement plans should be modified to provide additional information DOL could use to oversee whether these plans are meeting eligibility requirements. (Recommendation 2) The Secretary of Labor should explore actions the agency could take to help companies prevent the inclusion of rank-and-file employees in executive retirement plans and determine which, if any, actions should be implemented. (Recommendation 3) The Secretary of Labor should provide specific instructions for companies to follow to correct eligibility errors that occur when rank-and-file employees are found to be participating in executive retirement plans, and should coordinate with other federal agencies on these instructions, as appropriate. (Recommendation 4) <7. Agency Comments and Our Evaluation> We provided a draft of this report to DOL, IRS, PBGC, SEC, Treasury, and the United States Trustee Program within the Department of Justice for review and comment. DOL, IRS, PBGC, SEC, and Treasury provided technical comments, which we have incorporated where appropriate. IRS and DOL also provided formal comments, which are reproduced in appendices II and III, respectively. In response to our recommendation to develop specific instructions to aid IRS examiners in monitoring executive retirement plans for compliance with federal tax law, IRS stated that they would review and consider developing further specific instructions within the Internal Revenue Manual, the Nonqualified Deferred Compensation Audit Techniques Guide or other IRS training material to aid examiners. GAO continues to maintain that implementing this recommendation will help ensure that IRS is aware of when companies with at-risk single-employer defined benefit plans are reporting assets set aside to pay deferred compensation to key executives while in a restricted period as income for those employees. DOL stated that it does not have plans to issue guidance or regulations regarding executive retirement plans, citing, among other considerations, existing resource constraints and priority regulatory and guidance projects in development, and that it would not be advisable to shift resources from other projects. GAO continues to maintain that DOL s one-time single page alternative reporting for executive retirement plans lacks important information sufficient to help the agency identify whether companies may be including ineligible employees in its plan and DOL s current data on executive retirement plans has limited usefulness due to the age and limits of the original data submitted. DOL also stated that the agency has not encountered evidence of systematic abuses involving executive retirement plans or that ERISA s claims procedure rules and judicial remedies are inadequate to protect participants benefit rights. 
As we report, industry surveys indicate that some companies may be extending employee eligibility to high percentages of their workforce who are lower- paid and lower-ranked employees who may not be considered a part of a select group. Industry experts also told us that plan eligibility requirements for executive retirement plans are not clearly defined and that companies are unclear on how to establish eligibility, and they identified court cases that contribute to the confusion regarding plan eligibility. Additionally, the remedy DOL suggested in an amicus brief for companies to follow to correct eligibility errors in these plans could have unintended consequences for participants because, according to IRS officials, it could result in violations of federal tax law and additional tax for participants. Without implementing our recommendations, DOL will continue to be unable to ensure that only executives who can bear the risks inherent in these plans are participating. We urge DOL to develop instructions to correct eligibility errors, in coordination with other federal agencies, as needed, in a way that does not adversely affect rank-and-file employees participating in these plans. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees; the Secretaries of the Departments of the Treasury, Labor, and Justice; the Commissioner of the Internal Revenue Service; the Chairman of the Securities and Exchange Commission; and the Director of the Pension Benefit Guaranty Corporation. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or jeszeckc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix IV. Appendix I: Objective, Scope, and Methodology This report examines (1) what is known about the prevalence, key advantages, and revenue effects of executive retirement plans; (2) the potential outcomes of executive retirement plan benefits in company bankruptcy; and (3) how federal agency oversight protects benefits and prevents ineligible participation in executive retirement plans. <8. Overall Methodology> To address these objectives, we reviewed relevant federal laws, regulations, guidance, and other agency documents related to executive retirement plans. We reviewed relevant research on executive retirement plans, which we identified with the help of a GAO librarian, through stakeholder interviews, by reviewing sources cited in documents we obtained, and through limited internet searches driven by stakeholder and documentary evidence. This research included published research on the costs of executive retirement plans on the companies that offer them and the revenue effects on the federal government. We interviewed a non- generalizable sample of executive retirement plan experts representing different roles in the industry, including plan consultants, plan providers (including record keepers and insurers), attorneys, investment advisors, actuaries, proxy advisors, and researchers. 
We also interviewed an array of bankruptcy experts including those with experience in executive compensation to understand bankruptcy procedure and the treatment of executive retirement plans in company bankruptcy. We selected executive retirement plan and bankruptcy experts to interview based on a combination of published work, breadth and depth of experience, as well as peer referrals. We interviewed representatives from industry associations representing a diverse range of stakeholder groups, such as those that offer, provide services to, or conduct research on executive retirement plans. As part of this effort, we contacted the American Institute of Certified Public Accountants to discuss their perspective on the use of executive retirement plans but they declined to meet with us. We also interviewed agency officials from the Department of Labor s (DOL) Employee Benefits Security Administration (EBSA), Department of the Treasury s Office of Tax Policy, the Internal Revenue Service (IRS), the Securities and Exchange Commission, the Pension Benefit Guaranty Corporation (PBGC), and the United States Trustee Program within the Department of Justice. <9. Prevalence of Executive Retirement Plans> To understand the prevalence of executive retirement plans, we analyzed data provided by the Main Data Group (MDG), an executive compensation benchmarking and corporate governance analytics firm. MDG compiled the data provided from required SEC disclosures from filing years 2013 to 2017 (the most recent data available at the time of our analysis) for executive retirement plan benefits provided to top executives in Standard & Poor s (S&P) 500 and Russell 3000 companies as reported in the annual 10-K, proxy statement, and other documents. Companies listed in the S&P 500 are generally also listed in the Russell 3,000. The SEC generally requires public companies to disclose executive compensation information including executive retirement plan benefits provided to the Chief Executive Officer, Chief Financial Officer, and the next three most highly compensated executive officers. These data are principally found in the annual proxy statement within the Summary Compensation Table, Pension Benefits Table, and Nonqualified Deferred Compensation Table. The data include executive retirement plan benefits offered as a defined benefit plan and defined contribution plan. For a given year, the total accumulated value of executive retirement plans structured as a defined benefit provided to top executives are based on the present value of accumulated benefit and payments during the last fiscal year as reported in the Pension Benefits Table. For defined contribution plans, the total accumulated values are based on the aggregate balance at last fiscal year end and the aggregate withdrawals/distributions for the reporting period as disclosed in the Nonqualified Deferred Compensation Table. To determine the average level of plan benefits for top executives, we summed the total accumulated plan benefits for all top executives in a given year and divided them by the total number of executives. For the median, we sorted the total accumulated plan benefits for all executives in a given year and determined the midpoint. To assess the reliability of the data provided, we interviewed MDG officials regarding their data collection processes. 
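The average and median calculations described above are simple to reproduce in code. The following is a minimal sketch for illustration only; it is not the program used for GAO's analysis of the MDG data, and the benefit amounts and years shown are hypothetical placeholders.

```python
from statistics import median

# Hypothetical inputs: total accumulated plan benefits (defined benefit present
# value plus defined contribution aggregate balance) for each named executive,
# grouped by filing year. Dollar figures are placeholders, not reported values.
accumulated_benefits = {
    2016: [1_250_000, 480_000, 3_900_000, 760_000, 2_100_000],
    2017: [1_300_000, 515_000, 4_050_000, 800_000, 2_250_000],
}

for year, benefits in sorted(accumulated_benefits.items()):
    # Average: sum of all executives' accumulated benefits divided by the
    # number of executives with reported benefits in that year.
    average = sum(benefits) / len(benefits)
    # Median: midpoint of the sorted benefit amounts for that year.
    mid = median(benefits)
    print(f"{year}: average ${average:,.0f}, median ${mid:,.0f}")
```

In practice, the same calculation would be run over the full set of top executives in the S&P 500 and Russell 3000 companies for each filing year; the structure of the computation does not change with the size of the input.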
We also independently compared executive retirement plan data from a random sample of SEC filings obtained from Edgar (the SEC s public database for required disclosures) with data for the same companies as reported by MDG. We found the data to be sufficiently reliable for the purpose of describing the prevalence of executive retirement plans among companies subject to SEC s disclosure requirements. <10. Corporate Bankruptcy Case Reviews> To understand the expected outcomes for executive retirement plan benefits during company bankruptcy, we analyzed data collected from our non-generalizable review of a random sample of companies that offered an executive retirement plan and filed for bankruptcy during the period from October 17, 2005 the effective date for most of the provisions of the Bankruptcy Abuse Prevention and Consumer Protection Act of 2005 (2005 Bankruptcy Act) through November 30, 2017 the most recent at the time of our analysis. The 2005 Bankruptcy Act made significant changes to federal bankruptcy law, including provisions limiting executive compensation in corporate bankruptcy. Using the unique Employer Identification Number (EIN) the IRS assigns to companies, we matched corporate Chapter 7 and Chapter 11 bankruptcy cases with DOL s database of executive retirement plans to obtain lists of companies that filed for bankruptcy and offered at least one executive retirement plan. We obtained lists of corporate bankruptcy filings from New Generation Research Inc. s (NGR) online database. NGR is a provider of data on corporate bankruptcies and companies in financial distress. We obtained from DOL its comprehensive list of executive retirement plans as filed with the agency from July 1982 to August 2017. The NGR and DOL data are not exclusive to public or private companies. To assess the reliability of the NGR and DOL data, we corresponded with officials regarding their respective data collection processes and requirements. We found the data to be sufficiently reliable for our purposes. The results of our data matching produced 138 Chapter 7 cases and 594 Chapter 11 cases of companies that filed for bankruptcy and offered an executive retirement plan. We reviewed a random selection of 151 cases (30 Chapter 7 and 121 Chapter 11) from a total of 732 relevant bankruptcy cases. To review bankruptcy court cases, we developed a standardized protocol to review each identified case and data collection instrument to input the data. The protocol included step-by-step instructions for reviewers to follow, including prescribed court documents to review and data to be collected. We obtained feedback on our case review protocol and data collection instrument from two outside bankruptcy experts an attorney with expertise in the tax aspects of corporate bankruptcies and a bankruptcy law professor and former attorney who previously served as a federal bankruptcy judge and incorporated their technical feedback on the documents. We also worked with a GAO methodologist to pretest our case review protocols and data collection instruments on a review of a select sample cases from the matched list to ensure our review process could collect reliable data between different reviewers. To obtain bankruptcy case documents to review, we used court filings obtained from PACER exclusively and did not rely on other data sources. 
PACER is an electronic public access service provided by the Federal Judiciary that allows users to obtain case and docket information online from federal appellate, district, and bankruptcy courts. Case documents are available on PACER as they are filed or entered into the court s case system. Based on our case review protocol, we reviewed (where available), the court docket, case summary, bankruptcy petition, first day motions, management affidavit, schedule of assets and liabilities, statement of financial affairs, court-approved disclosure statement, court- approved plan (of reorganization or liquidation), and settlement agreements, among other documents with information relevant to executive retirement plans and their expected resolution in bankruptcy. We reviewed cases based on documents available in PACER between April and May 2018. Our review of 151 cases (30 Chapter 7 and 121 Chapter 11) from the matched lists resulted in 38 Chapter 11 cases where we identified executive retirement plan benefits in existence at or around the time of company bankruptcy and were able to determine the expected resolution of those benefits for employees as a result of the bankruptcy proceeding. As part of our review, we excluded cases if: (1) we were unable to confirm the presence of an executive retirement plan through review of court documents, (2) the case did not have a court-approved disclosure statement with estimated recovery percentages for various creditor classes in the case docket, or (3) if the case was open (i.e., not terminated) and had a reorganization or liquidation plan confirmed on or after May 2016, about 2 years from the start of our review. For the foregoing reasons, we were unable to identify expected outcomes in any of the Chapter 7 cases reviewed. For Chapter 11 cases, we were unable to ascertain actual outcome information for any of the cases we reviewed, but based the expected outcome of the executive retirement plan benefits on estimates provided in the court-approved disclosure statement, bankruptcy plan (reorganization or liquidation), or settlement agreement, which may differ from actual recoveries. To determine the expected resolution of executive retirement plan benefits, we reviewed case filings for evidence of specific treatment provided to employees with these claims. To the extent we did not find evidence of specific treatment for executive retirement plan benefits, we relied on estimated recovery information for the class of general unsecured creditors. Because the nature of bankruptcy proceedings depends on the facts and circumstances of each individual cause, the results of our analysis are not generalizable but provide illustrative examples of the potential outcomes of such cases. <11. Review of Selected Court Cases and Surveys on Plan Eligibility and Participation> We reviewed selected court cases related to employee eligibility in executive retirement plans as identified by DOL, industry experts, and other literature. We also reviewed executive retirement plan surveys produced by industry firms, including plan sponsor organizations, benefit consultancies, record keepers, and other plan providers. We also interviewed representatives from many of these organizations regarding the use of executive retirement plans and determined that their survey data generally accorded with these discussions. We found the data to be sufficiently reliable for our purposes. 
Appendix II: Comments from the Internal Revenue Service Appendix III: Comments from the Department of Labor Appendix IV: GAO Contact and Staff Acknowledgments <12. GAO Contact> <13. Staff Acknowledgments> In addition to the contact named above, the following individuals made important contributions to this report: Tamara Cross (Assistant Director), David Lin (Analyst-in-Charge), Ted Burik, Dan Powers, and David Reed. Also contributing to this report were James Bennett, Joanna Berry, Colenn Berracasa, Sherwin Chapman, Nina Daoud, Sarah Gilliland, Laura Hoffrey, Angie Jacobs, Kirsten Lauber, Ted Leslie, Avani Locke, Sheila R. McCoy, James R. McTigue Jr., Jeffrey Miller, Ed Nannenhorn, Oliver Richard, Marylynn Sergent, Frank Todisco, Walter Vance, Kathleen Van Gelder, and Adam Wendel.
Why GAO Did This Study
Some types of employers offer executive retirement plans to help select employees save for retirement. There are no statutory limits on the amount of compensation that executives can defer or benefits they can receive under these plans. However, employees in these plans do not receive the full statutory protections afforded to most other private sector employer-sponsored retirement plans, such as those related to vesting and fiduciary responsibility, among other things. These plans can provide advantages but they also have disadvantages because plan benefits are subject to financial risk, such as in a company bankruptcy. GAO was asked to review these plans.
This report examines, among other objectives, (1) the prevalence, key advantages, and revenue effects of executive retirement plans and (2) how federal oversight protects benefits and prevents ineligible participation. GAO analyzed industry-compiled Securities and Exchange Commission plan data for 2013 to 2017 (the most recent data available at the time of our analysis); reviewed relevant federal laws, regulations, and guidance; and interviewed officials from IRS and DOL, among others.
What GAO Found
Executive retirement plans allow select managers or highly compensated employees to save for retirement by deferring compensation and taxes. As of 2017, more than 400 of the large public companies in the Standard & Poor's 500 stock market index offered such plans to almost 2,300 of their top executives, totaling about $13 billion in accumulated benefit promises. Top executives at large public companies generally accumulated more plan benefits than top executives at the smaller public companies in the Russell 3000 stock market index. Advantages of these plans include their ability to help executives increase retirement savings and potentially reduce tax liability, but the plans come with risks as well. To receive tax deferral, federal law requires the deferred compensation to remain part of a company's assets and subject to creditor claims until executives receive distributions (see figure). Department of Treasury officials and industry experts said executive retirement plans can be tax-advantaged and may have revenue effects for the federal government; however, the revenue effects are currently unknown.
The Internal Revenue Service (IRS) oversees executive retirement plans for compliance with federal tax laws. For example, IRS must ensure that key executives are taxed on deferred compensation in certain cases where that compensation has been set aside, such as when a company that sponsors a qualified defined benefit retirement plan is in bankruptcy. However, IRS audit instructions lack sufficient information on what data to collect or questions to ask to help its auditors know if companies are complying with this requirement. As a result, IRS cannot ensure that companies are reporting this compensation as part of key executives' income for taxation. The Department of Labor (DOL) oversees these plans to ensure that only eligible employees participate in them since these plans are excluded from most of the federal substantive protections that cover retirement plans for rank-and-file employees. DOL requires companies to report the number of participants in the plan; however, the one-time single page filing does not collect information on the job title or salary of executives or the percentage of the company's workforce participating in these plans. Such key information could allow DOL to better identify plans that may be including ineligible employees. Without reviewing its reporting requirements to ensure adequate useful information, DOL may continue to lack insight into the make-up of these plans and will lack assurance that only select managers and highly compensated employees are participating.
What GAO Recommends
GAO is making four recommendations, including that IRS improve its instructions for auditing companies that offer these plans, and that DOL consider modifying reporting by companies to better describe participants in these plans. IRS and DOL neither agreed nor disagreed with our recommendations.
Each compact trust fund committee includes representatives from both the United States and the respective country, but the terms of the trust fund agreements require the United States to hold the majority of votes. The Director of Interior s Office of Insular Affairs serves as the chair of each committee. Trust fund committee responsibilities include overseeing fund operation, supervision, and management; investing and distributing the fund s resources; and concluding agreements with any other contributors and other organizations. As part of this oversight, the committees are to establish an investment and distribution policy. The committees are also to determine fiscal procedures to be used in implementing the trust fund agreements on the basis of the fiscal procedures used for compact grant administration, unless otherwise agreed by the parties to the agreement. The U.S. FSM and U.S. RMI trust fund agreements allow for the agreements to be amended in writing at any time, with mutual consent of the governments. However, the U.S. legislation implementing the amended compacts requires that any amendment, change, or termination of all, or any part, of the compact trust fund agreements shall not enter into force until incorporated into an act of Congress. <1.2.3. Compact Trust Fund Structure> The compact trust fund agreements state that no funds, other than specified trust fund administrative expenses, may be distributed from the funds before October 1, 2023. From fiscal year 2024 onward, the maximum allowed disbursement from each compact trust fund is the amount of the fiscal year 2023 annual grant assistance, as defined by the trust fund agreement, with full inflation adjustment. In addition, the trust fund committees may approve additional amounts for special needs. The RMI compact trust fund agreement excludes Kwajalein-related assistance, defined in section 211(b) of the RMI compact, from the calculation of the allowed disbursement. Although the compact trust fund agreements state the maximum allowable disbursement level, they do not establish or guarantee a minimum disbursement level. Each country s compact trust fund consists of three interrelated accounts: the A account, the B account, and the C account. The A account is the trust fund s corpus and contains the initial, and any additional, U.S. and FSM or RMI contributions; contributions from other countries; and investment earnings. No funds, other than specified trust fund administrative expenses, may be disbursed from the A account. The B account is the trust fund s disbursement account and becomes active in fiscal year 2023. All income earned in 2023 will be deposited in the B account for possible disbursement in 2024. Each subsequent year s investment income will similarly be deposited in the B account for possible disbursement the following year. If there is no investment income, no funds will be deposited in the B account for possible disbursement the following year. The C account is the trust fund s buffer account. Through 2022, any annual income exceeding 6 percent of the fund balance is deposited in the C account. The size of the C account is capped at three times the amount of the estimated annual grant assistance in 2023, including estimated inflation. From 2023 onward, if annual income from the A account is less than the previous year s disbursement, adjusted for inflation, the C account may be tapped to address the shortfall. 
After 2023, any funds in the B account in excess of the amount approved for disbursement the following fiscal year are to be used to replenish the C account as needed, up to the maximum size of the account. If there are no funds in the C account and no prior-year investment income in the B account, no funds will be available for disbursement to the countries the following year. Figure 2 shows the compact trust fund account structure and associated rules. According to the U.S. trust fund agreements with the FSM and the RMI, contributions from other donors are permitted. In May 2005, Taiwan and the RMI reached an agreement that Taiwan would contribute a total of $40 million to the RMI s compact trust fund A account between 2004 and 2023. A D account may also be established to hold any contributions by the FSM and the RMI governments of revenue or income from unanticipated sources. According to the trust fund agreements, the D account must be a separate account, not mixed with the rest of the trust fund. Only the RMI has a D account, governed in part by the agreement between Taiwan and the RMI. <1.2.4. Programs and Services Provided in Compact-Related Agreements> The amended compacts implementing legislation incorporates, by reference, related agreements extending programs and services to the FSM and RMI. The programs and services agreement with each country identifies the following programs and services as being available to each country: U.S. postal services, weather services, civil aviation, disaster preparedness and response, and telecommunications. Each programs and services agreement extends for 20 years from the compact s entry into force. The agreement with the FSM ends on June 24, 2024, and the agreement with the RMI ends on April 30, 2024. <1.2.5. Programs Authorized by U.S. Legislation> The amended compacts implementing legislation (Pub. L. No. 108-188) and other U.S. legislation authorize other U.S. grants, programs, and services for the FSM and RMI. Pub. L. No. 108-188 authorizes an annual supplemental education grant (SEG) for the FSM and RMI in fiscal years 2005 through 2023, to be awarded in place of grants formerly awarded to the countries under several U.S. education, health, and labor programs. The FSM and RMI are not eligible for the programs replaced by the SEG during these years. Unlike the compact sector grants, the amended compacts implementing legislation authorized the SEG but did not appropriate funds for it. Funding for the SEG is appropriated annually to the U.S. Department of Education (Education) and is transferred to Interior for disbursement. Other provisions of the amended compacts implementing legislation, as well as other U.S. law, make the FSM and RMI eligible for a number of additional programs. <2. The FSM and RMI Rely on U.S. Grants and Programs That End in 2023> As of fiscal year 2016, compact sector grants and the SEG, each of which end in 2023, supported a substantial portion of government expenditures in the FSM and RMI. Compact sector grants and the SEG supported about one-third of all FSM government expenditures. The four FSM states relied on these grants to a greater extent than did the FSM national government. In the RMI, compact sector grants and the SEG supported about one-quarter of all government expenditures. The expiration of the compacts programs and services agreements in 2024 would also require the FSM and RMI to bear additional costs to provide services currently provided by the United States under the agreements. <2.1. U.S. 
Compact Grants and Other Grants Provide Substantial Support to the FSM and RMI Budgets> <2.1.1. U.S. Grants Scheduled to End in 2023 Supported About One- Third of Total FSM Government Expenditures in Fiscal Year 2016> Compact sector grants, the SEG, and other U.S. grants supported almost half of FSM national and state government expenditures in fiscal year 2016. Compact sector and supplemental education grants that end in 2023 supported approximately one-third of total FSM national and state government expenditures in fiscal year 2016, while other U.S. grants supported an additional 15 percent of total FSM government expenditures (see fig. 3). In fiscal year 2016, compact sector and supplemental education grants that end in 2023 supported a larger proportion of FSM state governments expenditures than of the FSM national government s expenditures. Compact sector grants and the SEG supported 8 percent of national government expenditures but supported 50 percent or more of each state s government expenditures. Among the FSM states, Chuuk, which has both the largest population and the lowest per-capita income in the FSM, had the highest percentage of expenditures supported by U.S. grants. (See table 2 for a summary of FSM national and state government expenditures supported by compact sector grants and the SEG and by other U.S. grants.) <2.2. U.S. Grants Scheduled to End in 2023 Supported About One Quarter of RMI Government Expenditures in Fiscal Year 2016> Compact sector and supplemental education grants that end in 2023 supported approximately 25 percent of the RMI s $123.5 million in government expenditures in fiscal year 2016, while other U.S. grants supported an additional 8 percent. Kwajalein-related compact grants that do not end in 2023 supported an additional 3 percent (see fig. 4). <2.3. FSM and RMI Eligibility for Some U.S. Grants, Programs, and Services Will Change after 2023> FSM and RMI budgets would be further affected if the countries assumed responsibility for providing programs and services currently provided by the United States. The following describes the status after 2023 of U.S. grants, programs, and services in the FSM and RMI under current law: Compact sector grants are scheduled to end in 2023, but the RMI MUORA extends the time frame of Kwajalein-related compact grants for as long as the MUORA is in effect. The SEG and additional grants identified in the amended compacts implementing legislation are scheduled to end in 2023. Also, after fiscal year 2023, the FSM and RMI will no longer be eligible for some programs that the SEG replaced, including Head Start (early childhood education, health, and nutrition services for low-income children and their families). The compact-related programs and services agreements with each country will end in 2024. However, some U.S. agencies, such as the National Weather Service, Federal Aviation Administration, and U.S. Agency for International Development, may continue to provide programs and services similar to those provided in the agreement under other authorities. The FSM and RMI will generally remain eligible for other programs identified in the amended compacts implementing legislation. These programs include U.S. Department of Agriculture (USDA) Rural Utilities Service grant and loan programs and U.S. Department of Education Pell grants for higher education and grants under Part B of the Individuals with Disabilities Education Act for children with disabilities. 
The FSM and RMI will remain eligible for additional programs we identified that have been provided under other current U.S. laws. Examples of these programs include USDA housing assistance programs and multiple public health, medical, and disease control and prevention grants provided by the U.S. Department of Health and Human Services. See appendix I for more information about the status after 2023 of U.S. grants, programs, and services in the FSM and RMI under current law. <3. Compact Trust Funds Face Risks to Future Disbursements> Our May 2018 projections for the compact trust funds showed that after fiscal year 2023, the funds are unlikely to provide maximum annual disbursements and may provide no disbursements at all in some years. The risk of disbursements below the maximum and the risk of zero disbursements increase over time for both funds. Potential strategies we analyzed in our May 2018 report would reduce or eliminate the risk of the compact trust funds experiencing years of zero disbursement. However, all of the potential strategies would require the countries to exchange a near-term reduction in resources for more-predictable and more-sustainable disbursements in the longer term. <3.1. Projections Show Risks to Compact Trust Fund Disbursements> Our May 2018 projections for the FSM and RMI compact trust funds after 2023 indicated that, given their balance at the end of fiscal year 2017 and current compact trust fund rules (the baseline scenario), the funds will be unable to provide maximum disbursements (equal to the inflation-adjusted amount of annual grant assistance in 2023) in some years and unable to provide any disbursement at all in some years, with the likelihood of zero disbursement in a given year increasing over time. The compact trust funds' C account, designed as a buffer to protect disbursements from the B account in years when the funds do not earn enough to fund the disbursement, could be exhausted by a series of years with low or negative annual returns. Since current rules do not allow disbursements from the compact trust fund corpus (the A account), exhaustion of the C account would result in zero disbursement in years when fund returns are zero or negative. Thus, there may be no funds available to disburse even if the funds' A accounts have a balance. As a result of low or zero disbursements, the countries could face economic and fiscal shocks and significant challenges in planning programs and budgets. Since we published our May 2018 report, an additional year of compact trust fund performance data and updated estimates of future inflation have become available; however, the updated information does not alter the conclusions we presented in May 2018. The updated data and inflation estimates change our model's assumptions about the current compact trust fund balance, the size of future U.S. contributions to the FSM and RMI compact trust funds, annual grant assistance in fiscal year 2023, and the C account balance, each of which is a relevant variable for our analysis. However, the updated variables would result in only slight changes to our 2018 report's projections of future compact trust fund performance presented in this testimony and do not alter our broader conclusions about future risks to the compact trust funds. FSM compact trust fund projections.
In May 2018, our model projected that, given the baseline scenario and a 6 percent net return, the FSM compact trust fund will experience declining disbursements relative to the maximum allowable disbursements and an increasing chance of zero disbursements. (See app. I of GAO-18-415 for a full description of our methodology, and see app. V of GAO-18-415 for the baseline results with alternative net returns.) Projected disbursements. We projected that the FSM compact trust fund will, on average, be able to provide disbursements equal to 82 percent of the maximum allowable disbursement (the inflation-adjusted amount of 2023 annual grant assistance) in its first decade of disbursements. The likely average disbursement falls to 49 percent of the maximum in the next decade and falls further in subsequent decades. In addition, the amount available for disbursement may fluctuate substantially from year to year. Depending on the compact trust fund's performance in the previous year, disbursements may be higher or lower than the average amount if the balance in the C account is not sufficient to provide additional disbursements. Likelihood of providing zero disbursement. We projected a 41 percent likelihood that the FSM compact trust fund will be unable to disburse any funds in 1 or more years during the first decade of trust fund disbursements. This likelihood increases over time, rising to 92 percent in fiscal years 2054 through 2063. Figure 5 shows our May 2018 projections of the FSM compact trust fund's average disbursements as a percentage of maximum disbursement and the likelihood of 1 or more years of zero disbursement, given the baseline scenario and a 6 percent net return. We calculated the average disbursement as a percentage of the maximum allowable disbursement by averaging, over each 10-year period and over 10,000 simulated cases, the ratio of simulated disbursement to the maximum inflation-adjusted allowable disbursement in the given period. We calculated the likelihood of zero disbursement by counting cases with 1 or more years of zero disbursement in each of the given periods over 10,000 simulated cases. RMI compact trust fund projections. In May 2018, our model projected that, given the baseline scenario and a 6 percent net return, the RMI compact trust fund will experience declining disbursements relative to the maximum allowable disbursements and an increasing chance of zero disbursements. Projected disbursements. We projected that in its first decade of disbursements, the RMI compact trust fund will, on average, be able to provide disbursements nearly equal to the maximum allowable, that is, the inflation-adjusted amount of 2023 annual grant assistance as defined by the trust fund agreement. However, in each subsequent decade, the projected disbursements as a percentage of the maximum disbursements decline by about 10 percentage points. In addition, from year to year, the amount available to disburse may fluctuate substantially. Depending on the compact trust fund's performance in the previous year, disbursements may be higher or lower than the average amount if the balance in the C account is not sufficient to provide additional disbursements. Likelihood of providing zero disbursement. We projected a 15 percent likelihood that the RMI compact trust fund will be unable to disburse any funds in 1 or more years during the first decade of trust fund disbursements. This likelihood increases over time, rising to 56 percent in fiscal years 2054 through 2063.
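The account rules described in the background and the summary statistics described above can be illustrated with a simplified Monte Carlo sketch. This is not GAO's projection model: the starting corpus, disbursement ceiling, inflation rate, return volatility, and empty starting buffer below are hypothetical placeholders, and the A, B, and C account rules are reduced to their post-2023 essentials (prior-year income funds the B account, the C account covers shortfalls up to the inflation-adjusted ceiling, and income above the ceiling replenishes the C account up to its cap).

```python
import random
from statistics import mean

random.seed(12345)

# Illustrative placeholders only -- these are not GAO's model inputs.
START_BALANCE = 1_000.0          # A account (corpus) at the end of FY 2023, $ millions
MAX_DISB_2023 = 80.0             # stand-in for FY 2023 annual grant assistance, $ millions
INFLATION = 0.02                 # adjustment applied to the disbursement ceiling each year
MEAN_RETURN, STDEV = 0.06, 0.12  # 6 percent average net return, with volatility
C_CAP_MULTIPLE = 3               # C account capped at three times the grant amount
YEARS = list(range(2024, 2064))
SIMULATIONS = 10_000

def simulate_one_path():
    """Apply simplified A/B/C account rules along one random path of net returns.

    Returns {year: (payout, ceiling)} for fiscal years 2024 through 2063.
    """
    a_balance = START_BALANCE
    c_balance = 0.0                                   # buffer assumed empty at the start
    ceiling = MAX_DISB_2023
    prior_income = a_balance * random.gauss(MEAN_RETURN, STDEV)   # FY 2023 income
    results = {}
    for year in YEARS:
        ceiling *= 1 + INFLATION                      # inflation-adjusted maximum payout
        b_balance = max(prior_income, 0.0)            # B holds prior-year income, if any
        if prior_income < 0:
            a_balance = max(a_balance + prior_income, 0.0)   # losses reduce the corpus
        payout = min(b_balance, ceiling)
        from_c = min(ceiling - payout, c_balance)     # tap the buffer for any shortfall
        c_balance -= from_c
        payout += from_c
        leftover = b_balance - min(b_balance, ceiling)       # income above the ceiling
        c_balance = min(c_balance + leftover, C_CAP_MULTIPLE * ceiling)
        results[year] = (payout, ceiling)
        prior_income = a_balance * random.gauss(MEAN_RETURN, STDEV)
    return results

paths = [simulate_one_path() for _ in range(SIMULATIONS)]

for start in range(2024, 2064, 10):
    decade = range(start, start + 10)
    ratios = [path[y][0] / path[y][1] for path in paths for y in decade]
    zero_cases = sum(any(path[y][0] == 0.0 for y in decade) for path in paths)
    print(f"FY {start}-{start + 9}: average disbursement {mean(ratios):.0%} of maximum; "
          f"chance of 1 or more zero-disbursement years {zero_cases / SIMULATIONS:.0%}")
```

Because investment losses fall on the corpus while gains are either paid out or capped in the buffer, even this simplified version tends to show the pattern GAO projects under these placeholder assumptions: average disbursements drift below the ceiling, and the chance of a zero-disbursement year rises in later decades.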
Figure 6 shows our May 2018 projections of the RMI compact trust fund's average disbursements as a percentage of maximum disbursement and its likelihood of 1 or more years of zero disbursement, given the baseline scenario and a 6 percent net return. We calculated the average disbursement as a percentage of the maximum allowable disbursement by averaging, over each 10-year period and over 10,000 simulated cases, the ratio of simulated disbursement to the maximum inflation-adjusted allowable disbursement in the given period. We calculated the likelihood of zero disbursement by counting cases with 1 or more years of zero disbursement in each of the given periods over 10,000 simulated cases. For our May 2018 report, we conducted a series of simulations to determine the likely effects of potential strategies for improving the outlook of the FSM and RMI compact trust funds. For example, we developed and analyzed potential strategies in which annual disbursements are reduced below the maximum allowable; additional annual contributions are made to the trust fund prior to the end of fiscal year 2023; and the trust fund agreement disbursement policies are modified to limit the annual disbursement to a fixed percentage of the fund's moving average balance over the previous 3 years, up to the maximum disbursement amount defined by the current trust fund agreement. All of the potential strategies we analyzed would reduce or eliminate the risk of the compact trust funds experiencing years of zero disbursement. However, some of the potential strategies might require changing the trust fund agreements, and all of the potential strategies would require the countries to exchange a near-term reduction in resources for more-predictable and more-sustainable disbursements in the longer term. (See app. VII of our May 2018 report for detailed results of our analysis.) <4. Compact Trust Fund Committees Have Not Addressed Issues Related to Distribution Policies, Fiscal Procedures, and Disbursement Timing> The compact trust fund committees have not taken the actions we recommended in 2018 to prepare for the 2023 transition to trust fund income. The committees have not yet prepared distribution policies, required by the trust fund agreements, which could assist the countries in planning for the transition to trust fund income. In addition, the committees have not established fiscal procedures for oversight of compact trust fund disbursements as required by the trust fund agreements. Further, the committees have not yet addressed a potential misalignment between the timing of their annual calculation of the amounts available to disburse and the FSM's and RMI's budget timelines, potentially complicating each country's planning and management. <4.1. Trust Fund Committees Have Not Developed Distribution Policies Required by the Compact Trust Fund Agreements> The compact trust fund committees have not yet developed, as the compact trust fund agreements require, policies to guide disbursements from the trust funds after fiscal year 2023. Under the agreements, each trust fund committee must develop a distribution policy, with the intent that compact trust fund disbursements will provide an annual source of revenue to the FSM and RMI after the scheduled end of compact grant assistance. The trust fund committees could use distribution policies to address risks to each fund's sustainability. For example, the committees have the discretion to disburse an amount below the established maximum.
Our analysis of potential strategies for improving the funds' outlook shows that reducing the size of disbursements would improve each compact trust fund's long-term sustainability. Without a distribution policy that provides information about the size of expected disbursements, the FSM and RMI are hampered in their current and ongoing efforts to plan for the potential reduction in U.S. compact assistance after 2023. <4.2. Trust Fund Committees Have Not Established Fiscal Procedures Required by Compact Trust Fund Agreements> The compact trust fund committees have not yet established fiscal procedures for compact trust fund disbursements after fiscal year 2023. Each trust fund agreement requires the respective committee to determine the fiscal procedures to be used in implementing the trust fund agreement. The committees are to base their procedures on the compact fiscal procedures agreements, unless the parties to the trust fund agreement agree to adopt different fiscal procedures. No compact trust fund disbursements are to be made unless the committee has established such trust fund fiscal procedures. Without fiscal procedures in place, the trust fund committees will not be able to provide disbursements, and the United States, the FSM, and the RMI will lack clear guidance to ensure oversight for trust fund disbursements. <4.3. Trust Fund Committees Have Not Addressed Issues Related to Disbursement Timing> The timing of the compact trust fund committees' calculation of the amounts available for annual disbursement to the FSM and the RMI after fiscal year 2023 does not align with the countries' budget and planning timelines. The amounts available for disbursement in a given fiscal year cannot be determined until each fund's returns have been determined at the end of the prior year. Further, if the disbursement amounts are calculated from audited fund returns, as determined by the annual audits required by the trust fund agreements, the amounts may not be determined until as late as March 31, 6 months into the fiscal year for which the disbursement is to be provided. However, both the FSM and the RMI government budget cycles are completed before the annual amounts available for disbursement will be known. As a result, the FSM and RMI would have to budget without knowing the amount to be disbursed, complicating their annual budget and planning processes. <4.4. Trust Fund Committees Continue to Discuss Potential Actions to Address Our Recommendations> The compact trust fund committees, chaired by Interior, have continued to discuss potential actions to address the recommendations in our May 2018 report. In May 2018, we made six recommendations to Interior, consisting of three parallel recommendations regarding each country's trust fund. We recommended that the Secretary of the Interior ensure that the Director of the Office of Insular Affairs work with other members of the trust fund committees to develop distribution policies; develop the fiscal procedures required by the compact trust fund agreements; and address the timing of the calculation of compact trust fund disbursements. Interior concurred with our recommendations and has stated that it plans to implement them before the FSM and RMI transition to trust fund income in 2023. The FSM and RMI also concurred with our recommendations to Interior. According to the Trust Fund Administrator and Interior officials, the distribution policy was discussed at trust fund committee meetings convened since our May 2018 report.
At their September 2019 meetings, the FSM and RMI compact trust fund committees did not make any decisions regarding steps to address our recommendations. The FSM's and RMI's transition to relying on income from the compact trust funds will likely require significant budgetary choices. However, the lack of trust fund distribution policies, as well as the lack of alignment between the trust fund committees' annual disbursement calculations and the countries' budget cycles, hampers the countries' ability to plan for the transition. In addition, without the required fiscal procedures governing trust fund actions after 2023, the trust fund committees will be unable to make disbursements, and the United States, the FSM, and the RMI will not have assurance of necessary oversight for trust fund disbursements. However, as of September 2019, Interior had not implemented our recommendations to address these issues. Further, while Interior has continued to discuss possible actions to address our recommendations with the trust fund committees, it has targeted implementation of our recommendations for 2023. Chairmen Grijalva and Engel, Ranking Members Bishop and McCaul, and Members of the Committees, this concludes my statement. I would be pleased to respond to any questions you may have. <5. GAO Contact and Staff Acknowledgments> If you or your staff have any questions about this testimony, please contact David Gootnick, Director, International Affairs and Trade, at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Emil Friberg (Assistant Director), Ming Chen, Neil Doherty, Mark Dowling, Reid Lowe, Moon Parks, and Michael Simon. Appendix I: Status of U.S. Grants and Programs in the FSM and RMI After 2023 The amended compacts, compact-related agreements, the amended compacts implementing legislation, and other U.S. laws provide grants or eligibility for U.S. programs and services for the Federated States of Micronesia (FSM) and Republic of the Marshall Islands (RMI). The amended compacts provide compact sector, Kwajalein-related, and audit grants. Under current law, compact sector and audit grants are each scheduled to end in 2023, but the RMI military use and operating rights agreement (MUORA) extends the time frame of Kwajalein-related compact grants for as long as the agreement is in effect. The amended compacts implementing legislation provides additional grants, including authorizing a supplemental education grant (SEG), and identifies several specific U.S. programs as available to the FSM and RMI. Under current law, the additional grants end in 2023, but the statutory authorizations for some programs identified in Pub. L. No. 108-188 provide for the continued eligibility of the FSM and RMI to receive benefits under the programs. However, after fiscal year 2023, the FSM and RMI will no longer be eligible under current U.S. law for some programs that the SEG replaced. The compact-related programs and services agreements with each country identify additional programs and services that the United States makes available to the FSM and RMI. While these agreements will end in 2024, under current law, some U.S. agencies may continue to provide programs and services similar to those provided in the agreements under other authorities.
Based on the status of current law, the FSM s and RMI s eligibility for other programs we identified that have been provided under other current U.S. laws will not change after fiscal year 2023. <6. Compact Sector and Audit Grants End in 2023, but Kwajalein-Related Grants for the RMI Will Continue> Under current law, compact sector grants provided to the FSM and the RMI under section 211(a) of the amended compacts are scheduled to end in 2023. However, the RMI is scheduled to continue to receive $7.2 million, partially inflation adjusted, related to the U.S. military base in Kwajalein Atoll and provided under section 211(b) of its compact. Under the terms of the RMI MUORA, the United States agreed to provide these Kwajalein-related grants for as long as the MUORA is in effect. The MUORA continues until 2066 and may be extended at the discretion of the United States until 2086. The amended RMI compact provides for $18 million, partially inflation adjusted, in annual payments to the RMI government to compensate for impacts from the U.S. Army Garrison Kwajalein Atoll. These payments will continue for as long as the MUORA is in effect. Annual compact grants of up to $500,000 (not inflation adjusted) to each country to pay for required annual audits of compact grants are scheduled to end in 2023. See table 3 for a summary of compact sector, Kwajalein-related, and audit grants. <7. FSM and RMI Are No Longer Eligible for Many Programs Replaced by the Supplemental Education Grant> The supplemental education grant (SEG) authorized by the amended compacts implementing legislation is scheduled to end in fiscal year 2023 and, under current law, FSM and RMI eligibility for most programs that the SEG replaced will not resume after fiscal year 2023. Absent changes to current law, the FSM and RMI will not be eligible after fiscal year 2023 for the following programs that the SEG replaced during fiscal years 2005 through 2023: U.S. elementary and secondary education grant programs, adult education and literacy programs, career and technical education programs, job training programs, and Head Start early education programs. However, under other provisions of current law, qualifying individuals in the FSM and RMI will be eligible after fiscal year 2023 for undergraduate education grants and work-study programs that the SEG replaced. See table 4. <8. Some Programs and Services in the Programs and Services Agreement Will End, while Others May Continue under Other Authorities> Although the programs and services agreements with the FSM and RMI will end in fiscal year 2024, current U.S. law enables U.S. agencies to continue providing some programs and services now provided under the agreements. After the agreements end, no current provisions of U.S. law will enable the Federal Emergency Management Agency (FEMA) to provide disaster response funding, enable the Federal Deposit Insurance Corporation to provide deposit insurance, or enable the U.S. Postal Service to provide the services that it currently provides to the FSM and RMI. However, the National Weather Service, the U.S. Department of Transportation s (DOT) Federal Aviation Administration (FAA), and the U.S. Agency for International Development (USAID) could, under other legal authorities, provide services similar to those they now provide under the programs and services agreements. National Weather Service. 
The programs and services agreements authorize the National Weather Service to fund the operations of weather stations in the FSM and RMI, which it can continue to fund after the end of the agreements under other authorities, according to Department of Commerce officials. Federal Aviation Administration. The programs and services agreements authorize DOT s FAA to provide technical assistance in the FSM and RMI, which it can continue to provide after the end of the agreements under other provisions of current U.S. law. However, DOT officials stated that FAA would require new bilateral agreements with the FSM and the RMI in order for the countries to continue to receive the civil aviation safety services that FAA currently provides under the programs and services agreements. The FAA would also seek reimbursement for any technical assistance it provides to the FSM and RMI. With regard to the civil aviation economic services provided under the programs and services agreements, DOT officials stated that, while the FSM and RMI could voluntarily decide to allow U.S. air carriers to continue operations in the FSM and RMI, new bilateral agreements would be needed to assure that result. U.S. Agency for International Development. Following a U.S. presidential disaster declaration, FEMA provides the funding for disaster relief and reconstruction, which is programmed through USAID. Under current law, FEMA funds will no longer be available to the FSM and RMI for this purpose once the agreements end; however, USAID will be able to provide foreign disaster assistance funding to the two countries under the same terms as it provides this assistance to other countries. After the programs and services agreements end, FEMA will be able to support disaster relief efforts only if USAID or the countries request such support on a reimbursable basis. In addition, according to State and Interior officials, telecommunications- related services that the two agencies provide to the FSM and RMI under the programs and services agreements will continue as long as the FSM and RMI provide appropriate authorization for such services. Table 5 shows the status, under current law, of programs and services currently provided to the FSM and the RMI under the programs and services agreements after the agreements end in fiscal year 2024. <9. Programs Identified in Amended Compacts Implementing Legislation Generally Continue after Fiscal Year 2023> Although additional grants provided to the FSM and the RMI under the amended compacts implementing legislation will end in fiscal year 2023, the countries eligibility for programs now provided under that legislation will generally continue under current U.S. law. Grants provided under the amended compacts implementing legislation for (1) judicial training in the FSM and the RMI, and (2) agricultural and planting programs on the RMI s nuclear-affected Enewetak Atoll are scheduled to end. However, under current U.S. law, legal authorities permitting the operation of other programs will remain available to the FSM and RMI after fiscal year 2023. Eligibility under these legal authorities continues either because the amended compacts implementing legislation does not specify an ending date or because other provisions in current U.S. law make the FSM and RMI eligible for the program. Programs provided in the amended compacts implementing legislation include U.S. Department of Agriculture Rural Utilities Service grant and loan programs; U.S. 
Department of Education Pell grants for higher education and grants under Part B of the Individuals with Disabilities Education Act for children with disabilities; programs for nuclear-affected areas in the RMI; and additional programs provided by the Departments of Commerce and Labor as well as law enforcement assistance provided by the U.S. Postal Service. See table 6 for a summary of the programs identified in the amended compacts implementing legislation and their status as of the end of fiscal year 2023. <10. Programs Identified in Other Legislation Generally Continue after Fiscal Year 2023> In addition to being eligible for the programs provided through the compact, its associated agreements, and the amended compacts implementing legislation, the FSM and RMI are also eligible for a number of programs under other provisions of current U.S. law. The FSM and RMI have each received funds from the U.S. Department of Agriculture for forestry and rural housing programs, multiple U.S. Department of Health and Human Services public health program grants, U.S. Department of the Interior technical assistance and historic preservation programs, and the DOT FAA airport improvement program, among others. Under current U.S. law, the legal authorities permitting the provision of these programs in the FSM and RMI would not necessarily change after 2023. Table 7 shows the FSM's and RMI's eligibility for these additional grants and programs under current law after fiscal year 2023.
Why GAO Did This Study
In 2003, the United States approved amended compacts of free association with the FSM and RMI, providing a total of $3.6 billion in economic assistance in fiscal years 2004 through 2023 and access to several U.S. programs and services. Compact grant funding, overseen by the Department of the Interior (Interior), generally decreases annually. However, the amount of the annual decrease in grants is added to the annual U.S. contributions to the compact trust funds, managed by joint U.S.–FSM and U.S.–RMI trust fund committees and chaired by Interior. Trust fund earnings are intended to provide a source of income after compact grants end in 2023.
This testimony summarizes GAO's May 2018 report on compact grants and trust funds (GAO-18-415). In that report, GAO examined (1) the use and role of U.S. funds and programs in the FSM and RMI budgets, (2) projected compact trust fund disbursements, and (3) trust fund committee actions needed to address the 2023 transition to trust fund income. For this testimony, GAO also reviewed key variables for its trust fund model as of June 2019 to determine whether these variables had substantially changed. In addition, GAO reviewed the status of Interior's response to GAO's May 2018 recommendations.
What GAO Found
The Federated States of Micronesia (FSM) and the Republic of the Marshall Islands (RMI) rely on U.S. grants and programs, including several that are scheduled to end in 2023. In fiscal year 2016, U.S. compact sector grants and supplemental education grants, both scheduled to end in 2023, supported a third of the FSM's expenditures and a quarter of the RMI's. Agreements providing U.S. aviation, disaster relief, postal, weather, and other programs and services are scheduled to end in 2024, but some U.S. agencies may provide programs and services similar to those in the agreements under other authorities.
GAO's 2018 report noted that the FSM and RMI compact trust funds face risks and may not provide disbursements in some future years. GAO projected a 41 percent likelihood that the FSM compact trust fund would be unable to provide any disbursement in 1 or more years in fiscal years 2024 through 2033, with the likelihood increasing to 92 percent in 2054 through 2063. GAO projected a 15 percent likelihood that the RMI compact trust fund would be unable to provide any disbursement in 1 or more years in fiscal years 2024 through 2033, with the likelihood increasing to 56 percent in 2054 through 2063. Potential strategies such as reduced trust fund disbursements would reduce or eliminate the risk of years with no disbursement. However, some of these strategies would require changing the trust fund agreements, and all of the strategies would require the countries to exchange a near-term reduction in resources for more-predictable and more-sustainable disbursements in the longer term.
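GAO's likelihood estimates for years with no disbursement come from a simulation of the trust funds under assumed investment returns and disbursement rules. The sketch below is a minimal illustration of how such a probability can be estimated with a Monte Carlo simulation; the starting balance, disbursement target, return assumptions, and simplified disbursement rule are hypothetical placeholders and are not the parameters or logic of GAO's trust fund model.

```python
import random

def shortfall_probability(start_balance=600.0,   # hypothetical fund balance, $ millions
                          annual_target=60.0,    # hypothetical disbursement target, $ millions
                          mean_return=0.06,      # assumed mean annual investment return
                          return_stddev=0.12,    # assumed volatility of annual returns
                          years=10,              # projection horizon, e.g., FY2024-2033
                          trials=100_000):
    """Estimate the likelihood of 1 or more years with no disbursement."""
    trials_with_shortfall = 0
    for _ in range(trials):
        balance = start_balance
        shortfall = False
        for _ in range(years):
            # Apply a random annual return, then attempt the disbursement.
            balance *= 1 + random.gauss(mean_return, return_stddev)
            if balance >= annual_target:
                balance -= annual_target
            else:
                shortfall = True  # fund cannot support any disbursement this year
        if shortfall:
            trials_with_shortfall += 1
    return trials_with_shortfall / trials

print(f"Estimated likelihood of 1+ years with no disbursement: {shortfall_probability():.0%}")
```

Strategies such as a lower disbursement target or a different contribution schedule can be compared by rerunning such a simulation with different inputs, which is one way to quantify the tradeoff between near-term resources and longer-term sustainability of disbursements.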
Interior has not yet implemented the actions GAO recommended to prepare for the 2023 transition to trust fund income. The trust fund committees have not developed distribution policies, required by the agreements, which could assist the countries in planning for the transition to trust fund income. The committees have not developed the required fiscal procedures for oversight of disbursements or addressed differences between the timing of their annual determinations of the disbursement amounts and the FSM's and RMI's annual budget cycles.
What GAO Recommends
In its May 2018 report, GAO made three recommendations to Interior regarding each country's trust fund to address trust fund disbursement risks. Interior concurred with GAO's recommendations and has continued to discuss actions in response at trust fund committee meetings, with implementation targeted for 2023.
<1. Background> For many veterans, long-term care is provided directly or purchased by VA. VA provides or pays for long-term care for eligible veterans enrolled in VA's health care through a variety of programs, including institution-based care like nursing homes and noninstitutional programs like home health care, which provides care to veterans in their own homes. <1.1. VA Long-Term Care Programs> VA provides or pays for long-term care ranging from assistance with dressing and bathing to clinical care for spinal injuries or dementia through a range of three institutional and 11 noninstitutional programs. Institutional programs, such as nursing homes, typically provide more acute skilled nursing care in a residential facility; noninstitutional programs, such as the Home-Based Primary Care program, provide care to veterans in their homes or communities. (See fig. 1 for a list of VA's institutional and noninstitutional long-term care programs and app. I for brief descriptions of these programs.) Institutional Programs. VA provides or pays for eligible veterans to receive long-term care in three institutional programs that primarily provide skilled nursing care, such as for rehabilitation after surgery or for health issues or disabilities that require 24-hour care in a residential facility. These three programs include: VA Community Living Centers (VA-owned and -operated), Community Nursing Homes (publicly or privately owned and under contract with VA), and State Veterans Homes (state-owned and -operated homes approved and supported by VA). Noninstitutional Programs. VA provides or pays for eligible veterans to receive noninstitutional long-term care through 11 home- or community-based programs, where most veterans receive long-term care. Several of VA's noninstitutional programs provide personal care assistance to help veterans with activities of daily living (e.g., dressing, eating, bathing) that enable veterans to remain living at home, including the Homemaker Home Health Aide, Community Adult Day Health Care, and Respite Care programs. VAMCs evaluate veterans to determine the extent to which they can perform activities of daily living and to identify the available programs that would best meet their needs. In addition, VA's noninstitutional programs include the Community Residential Care program, in which caregivers in settings such as Medical Foster Homes (where no more than three residents receive care) provide 24-hour care for veterans who cannot live alone because of medical or mental health conditions. Several of VA's long-term care programs serve veterans with special needs. For example, some of these programs, such as certain Community Nursing Homes, Adult Day Health Care, and Hospice and Respite Care programs, have specially trained staff to serve veterans with dementia. The Spinal Cord Injury and Disability Home Care program and certain VA Community Living Centers are equipped to serve veterans needing ventilator care. In addition, some programs offer specific services for younger veterans, such as certain Adult Day Health Care programs. <1.2. Eligibility for and Placement into VA Long-Term Care Programs> All veterans enrolled in the VA health care system are eligible for VA's basic medical benefits package, which includes certain institutional and noninstitutional long-term care services.
A veteran's eligibility for fully or partially covered nursing home care is determined by the veteran's priority for care, which is generally based on the veteran's service-connected disability status. Specifically, VA must cover the full cost of nursing home care for veterans who need this care for a service-connected disability and for veterans with service-connected disabilities rated at 70 percent or more. To the extent resources allow, VA may cover this nursing home care for certain other veterans, such as former prisoners of war and those awarded the Purple Heart. For all other veterans, VA may generally cover nursing home care to the extent resources and capacity allow and with the veteran's agreement to share certain costs. Veterans' placement in any particular institutional or noninstitutional long-term care program may depend on their clinical needs, disability ratings, preferences, and the availability of VA programs. When funds are limited, the agency may prioritize program placement based on veterans' service-connected disability ratings. Decisions about which long-term care programs may be the best fit are made at the VAMC level between VA providers, veterans, and their families. VA providers may discuss a range of factors when making decisions about this care, such as health needs, the type of care provided in different programs, space availability, eligibility, and the veteran's geographic preference. For facility-based programs, VAMC staff may also encourage veterans to take a tour of the prospective home. VA's stated goal is to honor veterans' preferences for care, including finding ways for veterans to age in their homes and communities instead of nursing homes. <1.3. Selected Demographics of Veterans in Long-Term Care> A diverse set of veterans receive care in VA's long-term care programs. According to VA data for fiscal year 2018, 70 percent (370,821) of the veterans who received VA long-term care during the fiscal year were aged 65 or older. (See fig. 2.) In addition, 91 percent (480,299) of those who received this care had served in the military prior to September 11, 2001. Lastly, according to VA data for fiscal year 2018, 55 percent (291,197) of veterans receiving long-term care had some level of service-connected disabilities. <1.4. VA Planning for Long-Term Care> VA's planning for veterans' long-term care is informed by broader strategic planning by VA and the VHA and then operationalized by GEC at the program level. Veterans Integrated Service Networks (VISN) then implement GEC strategies for their regions and VAMCs implement and manage the various programs. VA, through the Assistant Secretary for Enterprise Integration's office, sets a strategic plan that identifies agency-wide goals. For example, VA's fiscal year 2018 through 2024 strategic plan identifies a goal that veterans choose VA for easy access, greater choices, and clear information to make informed decisions, and the plan notes that VA should understand veterans' needs throughout their lives to enhance their choices and improve customer experiences. VA develops its agency-wide strategic plan every four years. VHA, through its Office of Policy and Planning, identifies strategies within VA's health care system to address VA's agency-wide goals. For example, VHA's fiscal year 2018 through 2019 strategy, operationalizing VA's goal for veteran choice, is to honor veterans' preferences by offering home- and community-based care to prevent unwanted nursing home care.
VHA strategic planning occurs every two years, according to VA officials. VHA's Office of Enrollment and Forecasting uses the EHCPM to project the utilization of and cost for care across most of VA's health care programs 20 years into the future, including most long-term care programs. GEC's strategic planning operationalizes VA and VHA goals and strategies for long-term care at the program level. For example, to achieve VA's goal of veteran choice and VHA's strategy of honoring veteran preferences, GEC developed a model to identify veterans at the highest risk of needing nursing home care. According to GEC officials, the GEC strategic planning process generally occurs annually. VISNs are responsible for managing and overseeing VAMCs within their regions where long-term care is delivered, with a GEC point of contact at each VISN who can address GEC issues as they arise, according to VA officials. VAMCs within each VISN are, according to VA officials, responsible for the management of individual long-term care programs, including oversight of long-term care programs' quality of care. As previously noted, VAMCs also have a role in guiding decisions about individual veterans' long-term care placement. Other health care systems nationwide are also planning to meet the growing demand for long-term care and have developed strategies to address future long-term care challenges. For example, some state agencies, which provide long-term care through Medicaid, have developed strategies to help aging citizens live in their communities by enhancing community-based services and developing the workforce to provide care. VA has a federal Geriatrics and Gerontology Advisory Group to share knowledge with other long-term care providers and to advise the Secretary and Under Secretary for Health on all matters related to geriatrics and gerontology for the care of veterans. <2. Utilization of and Spending for VA Long-Term Care Have Increased in Recent Years and Are Projected to Increase> <2.1. Utilization of VA Long-Term Care Increased from Fiscal Years 2014 through 2018> Our analysis of VA data shows that the number of veterans receiving care in one or more of the VA long-term care programs increased 14 percent from fiscal years 2014 through 2018, from 464,071 to 530,327 veterans. The data also show that utilization increased more for noninstitutional programs than for institutional programs. Specifically, by program type, VA data show that the number of veterans receiving institutional long-term care increased 8 percent during these years, from 97,124 to 105,151, while the number receiving noninstitutional care increased 16 percent, from 395,736 to 459,783. VA officials told us that the agency is continuing to expand veterans' access to noninstitutional care programs because institutional care is more costly than home- or community-based care, and because veterans prefer to delay or reduce the amount of nursing home care they receive. Our analysis showed that utilization of long-term care in terms of various VA workload units also generally increased from fiscal years 2014 through 2018. The average daily census increased for two of VA's three institutional programs: Community Nursing Homes increased by 26 percent, from 7,771 to 9,808, and State Veterans Homes increased by 1 percent, from 23,176 to 23,423. Five of the 11 noninstitutional programs experienced increases in their workload over this period, ranging from 8 percent to 48 percent.
For example, the number of VA clinic stops (one type of VA workload unit) in the Homemaker Home Health Aide program, which served approximately 23 percent of the veterans receiving noninstitutional long-term care in fiscal year 2018, increased 48 percent, from 8.3 million to 12.3 million clinic stops. (See app. II for more information on veterans' utilization of institutional and noninstitutional long-term care by program.) According to VA, veterans' use of VA long-term care programs increased during fiscal years 2014 through 2018 for several reasons, including that a large number of Vietnam veterans are aging and that more veterans are receiving higher service-connected disability ratings. We found the number of veterans who served on or after 9/11 and received VA long-term care to have increased at a faster rate than the overall number of veterans who received this care, from fiscal year 2014 through 2018. <2.2. VA Spending for Long-Term Care Increased 33 Percent from Fiscal Years 2014 through 2018> Our analysis of VA data shows that VA's spending for long-term care (which VA reports as obligations) increased 33 percent, from $6.8 billion in fiscal year 2014 to $9.1 billion in fiscal year 2018. Furthermore, over this time period institutional program obligations declined as a proportion of total obligations, from 74 percent to 67 percent, while the proportion of noninstitutional program obligations rose from 26 percent to 33 percent. (See fig. 3.) Looking at VA's three institutional programs, our analysis shows VA's obligations for these programs increased 21 percent from fiscal years 2014 through 2018, from $5.0 billion to $6.1 billion. The highest share of obligations for institutional care over this time period was for the VA Community Living Centers program, which increased 11 percent, from $3.3 billion to $3.7 billion. This percentage increase was less than the increases for the Community Nursing Homes program (49 percent) and the State Veterans Homes program (33 percent); however, costs for these last two programs are significantly lower than for the other institutional program. VA obligations for its 11 noninstitutional long-term care programs increased 66 percent, from $1.8 billion to $2.9 billion, between fiscal years 2014 and 2018. Noninstitutional programs with the highest share of obligations during that period included the Homemaker Home Health Aide, Home-Based Primary Care, Purchased Skilled Home Care, and Home Telehealth programs. Noninstitutional programs with the highest obligation increases included the Homemaker Home Health Aide (109 percent) and Purchased Skilled Home Care (164 percent) programs. However, two noninstitutional programs saw obligations decline during these years, including the State Home Adult Day Health Care program with a 59 percent decrease and the Community Residential Care program with a 10 percent decrease. (See app. II for more information on VA's obligations for institutional and noninstitutional long-term care by program.) <2.3. VA Projects Utilization of VA Long-Term Care to Increase from Fiscal Years 2017 through 2037> VA projects utilization of long-term care will increase for most of the programs included in VA's EHCPM from fiscal years 2017 through 2037. For the two institutional programs included in the EHCPM, VA projects that utilization based on workload units (average daily census) will increase by 80 percent for the Community Nursing Homes program but will decrease by 10 percent for the Community Living Centers program.
For the 10 noninstitutional programs included in the EHCPM, VA projects that utilization based on workload units (which differ by program) will increase for nine of the 10 programs with increases ranging from 1 percent to 95 percent. For example, the number of VA clinic stops for the Homemaker Home Health Aide program is projected to increase 84 percent. (See app. III for more information on projected utilization for institutional and noninstitutional long-term care by program.) VA reports that these projections are based on expected increases in the number of veterans who will rely on VA for their long-term care needs through fiscal year 2037. According to VA officials, these projected increases are due to a variety of factors, including that VA plans to continue expanding the availability of home- and community- based care, and plans to provide care to an increasing number of aging veterans and veterans rated in the highest service-connected disability groups. For example, VA data show that the proportion of long-term care provided to veterans with service-connected disabilities is projected to increase from 60 percent to 78 percent of utilization from fiscal year 2017 to 2037, and the proportion of this care provided to post-9/11 deployed combat veterans is projected to increase from 1 percent to 6 percent of all long- term care utilization during these years. Further, VA officials told us that the agency has planned to expand veterans access to noninstitutional care when appropriate, and they have integrated these assumptions into the EHCPM. <2.4. VA Projects Expenditures for Long-Term Care to Increase from Fiscal Year 2017 through 2037, with Noninstitutional Programs Accounting for an Increased Share of Expenditures> VA projects that increases in overall demand for long-term care for veterans will result in future expenditure increases for the programs included in VA s EHCPM. Specifically, VA s model projects expenditures will more than double from fiscal years 2017 through 2037, increasing from $6.9 billion to $14.3 billion (107 percent). VA projects that its expenditures for its institutional programs will be higher than for its noninstitutional programs, reaching $7.5 billion and $6.8 billion, respectively, by fiscal year 2037. However, VA also projects that the proportion of expenditures for institutional long-term care will decrease from 63 percent to 53 percent, as the share for noninstitutional programs increases. (See fig. 4.) While VA expenditures are projected to increase for all long-term care programs included in the EHCPM from fiscal years 2017 through 2037, the size of these projected increases vary by program. For example, VA projects its expenditures for institutional programs to increase 71 percent overall over this time period, with the VA Community Living Centers program projected to increase 50 percent and the Community Nursing Homes program to increase 149 percent. VA projects that its expenditures for noninstitutional programs will increase 168 percent over this time, with the largest projected increases including the Community Adult Day Health Care (240 percent), Home Respite Care (231 percent), and the Homemaker Home Health Aide (212 percent) programs. (See app. III for more information on projected expenditures for institutional and noninstitutional long-term care by program.) 
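The EHCPM projects both utilization and the cost of care over a 20-year horizon. The sketch below illustrates, in simplified form, how a single program's long-range expenditures can be approximated as workload times unit cost with assumed annual growth rates; the inputs and growth rates are hypothetical placeholders and are not the EHCPM's actual structure or assumptions.

```python
def project_program_expenditures(base_workload, base_unit_cost,
                                 workload_growth, unit_cost_growth, years=20):
    """Project expenditures as workload times unit cost, each compounding annually."""
    projected_workload = base_workload * (1 + workload_growth) ** years
    projected_unit_cost = base_unit_cost * (1 + unit_cost_growth) ** years
    return projected_workload * projected_unit_cost

# Hypothetical inputs for one noninstitutional program (not actual EHCPM inputs).
expenditures = project_program_expenditures(
    base_workload=12_300_000,   # e.g., clinic stops in the base year
    base_unit_cost=55.00,       # average cost per workload unit, in dollars
    workload_growth=0.031,      # assumed annual workload growth rate
    unit_cost_growth=0.005,     # assumed annual growth in cost per unit
)
print(f"Projected program expenditures in 20 years: ${expenditures / 1e9:.1f} billion")
```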
The projected expenditures for care provided to veterans with service- connected disabilities are projected to represent a growing percent of VA s long-term care expenditures, increasing from 64 percent to 79 percent of expenditures for this care from fiscal years 2017 through 2037. VA projects that its expenditures for care provided to veterans with service-connected disabilities will increase 156 percent during this period, from $4.4 billion to $11.3 billion, while expenditures for care provided to veterans without service-connected disabilities will increase only 19 percent, from $2.5 billion to $3.0 billion. In addition, VA projects that the proportion of spending for long-term care provided to post-9/11 deployed combat veterans will rise from 1 percent to 7 percent during these years, from $89 million to $981 million, as that cohort of veterans ages. <3. VA Has Identified Several Key Challenges to Meeting the Demand for Long-Term Care, but Lacks Measurable Goals for Addressing Them> As VA works to meet veterans growing demand for long-term care, it faces a number of key challenges: workforce shortages, geographic alignment of care, and difficulty meeting veterans needs for specialty care. (See table 1.) These challenges, which VA has identified, are similar to challenges faced by other health care systems. However, while VA s GEC the office that manages VA long-term care programs is aware of these challenges, as of November 2019 GEC s strategic planning has not identified measurable goals for addressing them. Addressing workforce shortages. According to VA, the agency faces challenges hiring the staff needed to meet veterans demand for long-term care, a challenge that is likely to grow as demand for care is projected to increase in coming years. We have previously reported on workforce shortages in key positions such as nursing assistants and home health aides that are critical for supporting long-term care programs and affect health care systems beyond VA. Within VA, the Healthcare Analysis and Information Group (HAIG) report found that 80 percent of VA community living centers had, at the time of the report, current vacancies for nursing assistant or health technician positions. These workforce challenges have led to waitlists for some long-term care programs. For example, VA officials told us staffing challenges were the key factor creating a waitlist of 1,780 veterans for the Home-Based Primary Care program. (The HAIG report found 65 percent of VA facilities cited staffing as a barrier to expanding Home- Based Primary Care.) GEC officials recognize these workforce challenges and told us they have developed some workforce strategies such as offering geriatrics training to rural primary care providers through GEC s Geriatric Scholars Program. Aligning care geographically. According to VA, the agency faces challenges aligning its provided or purchased long-term care with where veterans live. VA data show that 2.8 million VA-enrolled veterans lived in rural areas as of 2018, and that veteran populations have shifted to different geographic regions. Providing long-term care in rural areas is a challenge experienced by other health care systems; for example, a report from the Rural Policy Research Institute identified challenges with providing long-term care in rural areas, including more limited access to services and support and the absence of an adequate workforce and infrastructure. 
VA officials also told us that veterans moving from one region to another presents demand and capacity challenges. For example, officials told us that veterans have moved away from the Northeast and to the South, and that VA now has too many long-term care beds in the Northeast and too few in the South. VA officials acknowledged the challenge of aligning care with where veterans live and pointed to telehealth, where veterans can receive care remotely, and to Veteran Directed Care program, which provides veterans with a budget to manage their own care, as approaches that could provide care to veterans in rural areas with limited access to VA provided or purchased care. GEC officials have also identified potential strategies to address the issue; for example, GEC s strategic planning includes a proposal to expand telehealth geriatrics services to reach more veterans, although officials told us this effort is currently unfunded. Further, VA officials from the Office of Policy and Planning said an ongoing market assessment project will provide information that will help VA align its provided and purchased care with where veterans live to better meet veteran needs. Meeting needs for specialty care. According to VA, the agency faces challenges meeting some specialty care needs for veterans in long-term care. Specifically, it can be difficult to find appropriate long- term care settings for veterans with dementia, behavioral issues, and for veterans requiring a ventilator. Meeting specialty care needs is also a challenge for other health care systems; for example, a 2017 study from the RAND Corporation found that the U.S. health system does not have sufficient capacity to care for a growing number of people with Alzheimer s disease. Challenges in providing this type of care are not new for VA. For example, in 2013 we reported that VA officials told us that while in certain geographic areas [community living centers] provide certain services that are not available in the community, such as dementia care, behavioral health services, and care for ventilator-dependent residents, in other areas these specialized services might not be available in a [community living center] and instead might be available at a community nursing home. As previously mentioned, VA has developed some programs to provide specialty care (e.g. VA s Spinal Cord Injury and Disability Care program and the agency s efforts to educate home caregivers on how to better serve veterans with dementia). While GEC recognizes and has taken some steps to address the challenges it faces in meeting the demand for long-term care, our review of GEC s most recently approved strategic planning document from March 2019 shows that GEC has not established measurable goals for its efforts to address these three key challenges. GEC has not established measurable goals for its efforts to address workforce shortages, such as specific staffing targets necessary to address the waitlist for the Home-Based Primary Care program, or defining the number of rural providers it expects to train through the Geriatrics Scholar program. GEC has not established measurable goals for its efforts to address the geographic alignment of care, such as specific targets for providing long-term care within the Home Telehealth and Veteran Directed Care programs. 
GEC has not established measurable goals for its efforts to address difficulties meeting veterans needs for specialty care, such as specific targets for the number of available ventilators or the number of caregivers educated to help veterans with dementia. According to GAO s body of work on effectively managing performance under the Government Performance and Results Act of 1993 (GPRA), as enhanced by the GPRA Modernization Act of 2010, federal agencies should clarify and clearly define measurable outcomes for each strategic objective and assess progress towards those goals. VA officials told us that competing priorities, including implementation of the VA MISSION Act of 2018, have affected GEC s ability to effectively address challenges to meeting veterans long-term care needs. Without measurable goals, however, VA is limited in its ability to better plan for and understand progress towards addressing the challenges it faces meeting veterans long-term care needs. As VA works to address these challenges, it does so along with other health care systems, and VA has opportunities for leveraging outside experience through VA s Geriatrics and Gerontology Advisory Group. For example, the Advisory Group recently acknowledged workforce challenges and recommended that VA devise strategies to create incentives and identify and remove barriers for the recruiting and retaining the health care workforce needed to care for VA s growing geriatric veteran population. In addition to the key challenges that VA and many other health care systems face, VA has identified, but has not planned to take steps to fully address, challenges at the VAMC level that affect its ability to meet veterans long-term care needs. Specifically, VA has identified issues with inconsistency in the management of the 14 long-term care programs at the VAMC level that could lead to inefficient and inequitable decisions about long-term care across VA. While VA has identified the steps it can take to address these issues, it has not implemented these steps. First, VA identified that VAMCs do not have a consistent approach to managing VA s 14 long-term care programs. GEC officials told us that fragmentation of the long-term care programs within the VAMCs that is, where programs could be run by one or more departments within the VAMC, for example the Nursing department or the Social Work department at VAMCs where there are not GEC staff hinders standardization and the ability to get veterans the right care. Similarly, the HAIG report found that VAMCs organize their long-term care programs differently and recommended that to efficiently, reliably, and equitably serve veterans VA align GEC programs at all VISNs and eventually VAMCs nationwide. GEC strategic planning documents outline a goal of alignment within the VISNs, and officials said alignment has been established within the VISNs. However, VA officials told us that, as of October 2019, they had not taken action to pursue VAMC-level alignment with a GEC point of contact at each VAMC that could provide consistency across long-term care programs at the VAMC level. Second, GEC has developed a tool to improve the consistency with which VAMCs determine the amount of services needed for veterans based on their specific health issues. However, as of October 2019, VA has not required the tool be used in all VAMCs. VA has identified that VAMCs do not have a consistent approach to determining the amount of noninstitutional long-term care services veterans need. 
VA officials told us that, as of October 2019, VAMCs used different methods to assess the amount of noninstitutional long-term care services veterans need for example, how many hours of in-home care veterans need. As a result, decisions about the amount of services veterans receive may vary by VAMC. The HAIG report recommended that VA use a standardized approach to ensure the balance of noninstitutional care programs, program reliability, and equity of resource distribution. GEC officials said the tool they developed is currently being used by some VAMCs, and they expect VA will require the tool to be used by all VAMCs sometime in the next year. However, VA has not set time frames for this requirement. One of VA s performance goals is to provide highly reliable and integrated care and support and excellent customer service. Furthermore, federal internal controls dictate that federal agencies should exercise oversight responsibility, for example by overseeing the remediation of deficiencies as appropriate and providing direction to management on appropriate time frames for correcting these deficiencies. Although VA has identified steps it can take to improve consistency in long-term care programs, according to officials, it has not prioritized their implementation. Without a reliably consistent approach to administering long-term care programs across its VAMCs, VA may not consistently and equitably meet veteran preferences and needs. <4. Conclusions> VA currently faces difficult challenges meeting the demand for long-term care. These challenges such as addressing workforce shortages, aligning care geographically, and meeting specialty care needs are likely to intensify as veterans demand for long-term care grows. However, a lack of measurable goals in the strategic planning efforts of VA s GEC, which has the lead responsibility for managing VA s 14 long- term care programs, affects VA s ability to appropriately plan for and understand its progress towards addressing long-term care challenges. In addition to these key challenges, VA has identified, but not yet fully addressed, inconsistencies in the management of the 14 long-term care programs at the VAMC level. These inconsistencies in determining both the best program for veterans and the amount of noninstitutional care veterans need can lead to inefficient and inequitable experiences with VA s long-term care programs. <5. Recommendations for Executive Action> We are making the following three recommendations to VA: The Secretary of VA should direct GEC leadership to develop measurable goals for its efforts to address key long-term care challenges: workforce shortages, geographic alignment of care, and difficulty meeting veterans needs for specialty care. (Recommendation 1) The Secretary of VA should direct GEC leadership to set time frames for and implement a consistent GEC structure at the VAMC level. (Recommendation 2) The Secretary of VA should direct GEC leadership to set time frames for and implement a VAMC-wide standardization of the tool for assessing the noninstitutional program needs of veterans. (Recommendation 3) <6. Agency Comments> We provided a draft of this report to VA for review and comment. In its comments, reproduced in appendix IV, VA concurred with our three recommendations and identified actions it is taking to implement them. 
Specifically, VA said that it will: (1) take steps to incorporate measurable goals and defined timelines into its strategies to meet the long-term care challenges; (2) work to establish a time frame for the execution of a uniform GEC structure at the VAMC level; and (3) work to establish a time frame for the execution of a VAMC-wide standardized tool for evaluating non-institutional care needs for veterans. VA also provided technical comments that we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Veterans Affairs, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at silass@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Department of Veterans Affairs (VA) Institutional and Noninstitutional Long-Term Care Program Descriptions VA provides or pays for long-term services and supports, or long-term care, for eligible veterans through a range of three institutional and 11 noninstitutional programs. VA covers the full or partial cost of nursing home care for eligible veterans who require skilled nursing home care in an institutional program. Specifically, VA covers the full cost of nursing home care for veterans who need this care for a service-connected disability (an injury or disease that was incurred or aggravated while on active duty) and for veterans with service-connected disabilities rated at 70 percent or more. To the extent resources allow, VA may cover this care for certain other veterans, such as former prisoners of war and those awarded the Purple Heart. For all other veterans, VA may cover nursing home care to the extent resources and capacity allow and with the veteran's agreement to share certain costs. (See table 2 for more information about these programs.) In addition, all veterans enrolled in the health care system are eligible for VA's basic medical benefits package, which covers, among other things, a comprehensive array of medically necessary home- and community-based health services. While a veteran's priority for care generally determines whether these services are provided at full or partial cost, the VA may not charge a copay for home hospice care and may waive copays for home telehealth services. (See table 3 for more information about these programs.) A veteran's placement in a particular program may depend on their clinical needs, preferences, and the availability of VA funding and programs. Appendix II: Utilization and Obligations for Department of Veterans Affairs (VA) Long-Term Care Programs, Fiscal Years 2014 to 2018 (Table: utilization, in workload units shown in thousands, for institutional and noninstitutional programs, including the Homemaker Home Health Aide, Home-Based Primary Care, Purchased Skilled Home Care, and Home Telehealth programs, fiscal years 2014 to 2018.) The workload units differ by program; for example, a unit may represent a payment for a Home Hospice Care program visit from a community provider.
VA officials told us that these data do not include non-veterans and may differ from data included in VA's congressional budget justification for a variety of reasons, including the timing of when they looked at the data, the inclusion of additional data, and that VA used a standard definition of services for all years. In addition to these programs, VA may provide stipends or other services to caregivers for veterans who were seriously injured in the line of duty through the Caregiver Support program. Disabled veterans may also be eligible for increased compensation benefits from the Veterans Benefits Administration. (Table: obligations by program for institutional programs, including VA Community Living Centers and Community Nursing Homes, and for noninstitutional programs, fiscal years 2014 to 2018.) Appendix III: Projected Utilization and Expenditures for Department of Veterans Affairs (VA) Long-Term Care Programs, Fiscal Years 2017 through 2037 These data do not include non-veterans and may differ from data included in VA's budget request for a variety of reasons, including the timing of when they looked at the data, the inclusion of additional data, and that VA used a standard definition of services for all years. (Table: projected utilization and expenditures by program, including institutional programs such as VA Community Living Centers, fiscal years 2017 through 2037.) Appendix IV: Comments from the Department of Veterans Affairs Appendix V: GAO Contact and Staff Acknowledgments <10. GAO Contact and Staff Acknowledgments> Sharon M. Silas, (202) 512-7114 or silass@gao.gov In addition to the contact named above, Karin Wallestad (Assistant Director), Luke Baron (Analyst-In-Charge), Kye Briesath and Corinne Quinones made key contributions to this report. Also contributing were Laurie Pachter, Vikki Porter, Jennifer Rudisill, and Selah Myers. Related GAO Reports Veterans Affairs: Sustained Leadership Attention Needed to Address Long-Standing Workforce Problems, GAO-19-720T (Washington, D.C.: Sept. 18, 2019). Veterans Health Care: VA Needs to Improve Its Allocation and Monitoring of Funding, GAO-19-670 (Washington, D.C.: Sept. 23, 2019). VA Health Care: Actions Needed to Improve Family Caregiver Program, GAO-19-618 (Washington, D.C.: Sept. 16, 2019). Veterans Health Care: Opportunities Remain to Improve Appointment Scheduling within VA and through Community Care, GAO-19-687T (Washington, D.C.: July 24, 2019). VA Health Care: Estimating Resources Needed to Provide Community Care, GAO-19-478 (Washington, D.C.: June 12, 2019). VA Real Property: Improvements in Facility Planning Needed to Ensure VA Meets Changes in Veterans Needs and Expectations, GAO-19-440 (Washington, D.C.: June 13, 2019). VA Nursing Home Care: VA Has Opportunities to Enhance Its Oversight and Provide More Comprehensive Information on Its Website, GAO-19-428 (Washington, D.C.: July 3, 2019). Long-Term Care Workforce: Better Information Needed on Nursing Assistants, Home Health Aides, and Other Direct Care Workers, GAO-16-718 (Washington, D.C.: Aug. 16, 2016). VA Mental Health: Clearer Guidance on Access Policies and Wait-Time Data Needed, GAO-16-24 (Washington, D.C.: Oct. 28, 2015). VA Nursing Homes: Reporting More Complete Data on Workload and Expenditures Could Enhance Oversight, GAO-14-89 (Washington, D.C.: Dec. 20, 2013). Older Americans: Continuing Care Retirement Communities Can Provide Benefits, but Not Without Some Risk, GAO-10-611 (Washington, D.C.: June 21, 2010).
VA Health Care: Long-term Care Strategic Planning and Budgeting Need Improvement, GAO-09-145 (Washington, D.C.: Jan. 23, 2009).
Why GAO Did This Study
Veterans rely on long-term care to address a broad spectrum of needs, from providing occasional help around the house to daily assistance with eating or bathing to round-the-clock clinical care. Veterans' eligibility for this care is primarily based on their service-connected disability status, among other factors. Congress included a provision in statute for GAO to review VA's long-term care programs. This report (1) describes the use of and spending for VA long-term care and (2) discusses the challenges VA faces in meeting veterans' demand for long-term care and examines VA's plans to address those challenges. GAO reviewed VA documents, such as strategic planning documents for long-term care programs and analyzed VA utilization and expenditure data for fiscal years 2014 through 2018 (the latest available at the time of the review) and projected data through 2037. GAO also interviewed officials from VA, including officials from VA's GEC, which is responsible for overseeing long-term care programs; and from Veterans Service Organizations.
What GAO Found
The Department of Veterans Affairs (VA) provides or purchases long-term care for eligible veterans through 14 long-term care programs in institutional settings like nursing homes and noninstitutional settings like veterans' homes. From fiscal years 2014 through 2018, VA data show that the number of veterans receiving long-term care in these programs increased 14 percent (from 464,071 to 530,327 veterans), and obligations for the programs increased 33 percent (from $6.8 to $9.1 billion). VA projects demand for long-term care will continue to increase, driven in part by growing numbers of aging veterans and veterans with service-connected disabilities. Expenditures for long-term care are projected to double by 2037, as shown below. According to VA officials, VA plans to expand veterans' access to noninstitutional programs, when appropriate, to prevent or delay nursing home care and to reduce costs.
VA currently faces three key challenges meeting the growing demand for long-term care: workforce shortages, geographic alignment of care (particularly for veterans in rural areas), and difficulty meeting veterans' needs for specialty care. VA's Geriatrics and Extended Care office (GEC) recognizes these challenges and has developed some plans to address them. However, GEC has not established measurable goals for these efforts, such as specific staffing targets for programs with waitlists or specific targets for providing telehealth to veterans in rural areas. Without measurable goals, VA is limited in its ability to address the challenges it faces meeting veterans' long-term care needs.
What GAO Recommends
GAO is making three recommendations, including that VA develop measurable goals for its efforts to address key challenges in meeting the demand for long-term care. VA concurred with GAO's recommendations and identified actions it will take to implement them. |
<1. Background> <1.1. Oversight Agencies> FTC and, most recently, CFPB are the federal agencies primarily responsible for overseeing CRAs. FTC has authority to investigate most organizations that maintain consumer data and to bring enforcement actions for violations of statutes and regulations that concern the security of data and consumer information. CFPB, created in 2010 by the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act), has enforcement authority over all CRAs for violations of certain consumer financial protection laws. In general, it also has the authority to issue regulations and guidance for those laws. CFPB has supervisory authority over larger market participants in the consumer reporting market. In 2012, CFPB defined larger market participant CRAs as those with more than $7 million in annual receipts from consumer reporting. CFPB's supervision of these companies includes monitoring, inspecting, and examining them for compliance with the requirements of certain federal consumer financial laws and regulations. As discussed below, these laws include most provisions of the Fair Credit Reporting Act (FCRA); several provisions of the Gramm-Leach-Bliley Act (GLBA); and provisions of the Dodd-Frank Act concerning unfair, deceptive, or abusive acts or practices. <1.2. Data Breaches and the Equifax Breach> Although there is no commonly agreed-upon definition of data breach, the term generally refers to an unauthorized or unintentional exposure, disclosure, or loss of sensitive information. This information can include personally identifiable information such as Social Security numbers, or financial information such as credit card numbers. A data breach can be inadvertent, such as from the loss of an electronic device; or deliberate, such as the theft of a device or a cyber-based attack by individuals or groups, including an organization's own employees, foreign nationals, or terrorists. Data breaches have occurred at all types of organizations, including private, nonprofit, and federal and state entities. In the Equifax data breach, Equifax system administrators discovered on July 29, 2017, that intruders had gained unauthorized access via the Internet to a server housing the company's online dispute portal. The breach compromised the personally identifiable information of at least 145.5 million individuals, including names, addresses, and birth dates; and credit card, driver's license, and Social Security numbers. Equifax's investigation of the breach identified the following factors that led to the breach: software vulnerabilities, failure to detect malicious traffic, failure to isolate databases from each other, and inadequately limiting access to sensitive information such as usernames and passwords. Equifax's public filings after the breach noted that the company took steps to improve security and notify individuals about the breach. Our August 2018 report provides more information on the breach and Equifax's response. <2. FTC Has Taken Enforcement Measures against CRAs but Lacks Civil Penalty Authority for GLBA Data Protection Provisions> FTC enforces compliance with consumer protection laws under authorities provided in FCRA, GLBA, and the FTC Act. As we reported in February 2019, according to FTC, in the last 10 years, it has brought 34 enforcement actions for FCRA violations, including 17 against CRAs.
In addition, FTC said that it has taken 66 actions against companies (not just in the last 10 years), including CRAs, that allegedly engaged in unfair or deceptive practices relating to data protection. In some circumstances, FTC enforcement authority can include civil money penalties monetary fines imposed for a violation of a statute or regulation. However, FTC s civil penalty authority does not extend to initial violations of GLBA s privacy and safeguarding provisions. These provisions require administrative, physical, and technical safeguards with an emphasis on protection against anticipated threats and unauthorized access to customer records. For violations of GLBA provisions, FTC may seek an injunction to stop a company from violating these provisions and may seek redress (damages to compensate consumers for losses) or disgorgement (requirement for wrongdoers to give up profits or other gains illegally obtained). Determining the appropriate amount of consumer compensation requires FTC to identify the consumers affected and the amount of monetary harm they suffered. In cases involving security or privacy violations resulting from data breaches, assessing monetary harm can be difficult. In addition, consumers may not be aware that their identities have been stolen as a result of a breach and or identity theft, and related harm may occur years in the future. It can also be difficult to trace instances of identity theft to specific data breaches. According to FTC staff, these factors can make it difficult for the agency to identify which individuals were victimized as a result of a particular breach and to what extent they were harmed and then obtain related restitution or disgorgement. Having civil penalty authority for GLBA provisions would allow FTC to fine a company for a violation such as a data breach without needing to prove the monetary harm to individual consumers. FTC staff noted that in the case of a data breach, each consumer record exposed could constitute a violation; as a result, a data breach that involved a large number of consumer records could result in substantial fines. In 2006, we suggested that Congress consider providing FTC with civil penalty authority for its enforcement of GLBA s privacy and safeguarding provisions. We noted that this authority would give FTC a practical tool to more effectively enforce provisions related to security of data and consumer information. Following the 2008 financial crisis, Congress introduced several bills related to data protection and identity theft, which included giving FTC civil penalty authority for its enforcement of GLBA. However, in the final adoption of these laws, Congress did not provide FTC with this authority. Since that time, data breaches at Equifax and other large organizations have highlighted the need to better protect sensitive personal information. Accordingly, we continue to believe FTC and consumers would benefit if FTC had such authority, and we recommended in our February 2019 report that Congress consider providing FTC with civil penalty authority for the privacy and safeguarding provisions of GLBA to help ensure that the agency has the tools it needs to most effectively act against data privacy and security violations. <3. 
CFPB Enforces and Examines CRAs for Compliance with Consumer Protection Laws but Does Not Fully Consider Data Security in Prioritizing Examinations> CFPB enforces compliance with most provisions of FCRA; several provisions of GLBA; and the prohibition of unfair, deceptive, or abusive acts or practices under the Dodd-Frank Act. In our February 2019 report, we noted that since 2015, CFPB has had five public settlements with CRAs. Four of these settlements included alleged violations of FCRA, and three included alleged violations of provisions related to unfair, deceptive, or abusive practices. CFPB also has an ongoing investigation of Equifax's data breach. Under its existing authority, CFPB has examined several larger market participant CRAs, but may not be identifying all CRAs that meet the $7 million threshold. CFPB staff told us that as of October 2018, they were tracking between 10 and 15 CRAs that might qualify as larger market participants. CFPB staff told us that they believe the CRA market is highly concentrated and there were not likely to be many larger market participants beyond the 10 to 15 they are tracking. However, CFPB staff said that the 10 to 15 CRAs may not comprise the entirety of larger market participants, because CRAs' receipts from consumer reporting may vary from year to year, and CFPB has limited data to determine whether CRAs meet the threshold. Our January 2009 report on reforming the U.S. financial regulatory structure noted that regulators should be able to identify institutions and products that pose risks to the financial system, and monitor similar institutions consistently. CFPB could identify CRAs that meet the larger market participant threshold by requiring such businesses to register with it, subject to a rulemaking process and cost-benefit analysis of the burden it could impose on the industry. Another method CFPB could use to identify CRAs subject to its oversight would be to leverage information collected by states. We recommended in February 2019 that CFPB identify additional sources of information, such as through registering CRAs or leveraging state information, that would help ensure the agency is tracking all CRAs subject to its authority. CFPB neither agreed nor disagreed with our recommendation. Each year CFPB determines the institutions (for example, banks, credit unions, non-bank mortgage servicers, and CRAs) and the consumer product lines that pose the greatest risk to consumers, and prioritizes these for examinations. CFPB segments the consumer product market into institution product lines, or specific institutions' offerings of consumer product lines. CFPB then assesses each institution product line's risk to consumers at the market level and the institution level. To assess risk at the market level, CFPB considers market size and other factors that contribute to market risk. To assess risk at the institution level, CFPB considers an institution's market share within a product line, as well as field and market intelligence. Field and market intelligence includes quantitative and qualitative information on an institution's operations for a given product line, including the strength of its compliance management systems, the number of regulatory actions directed at the institution, findings from prior CFPB examinations, and the number and severity of consumer complaints CFPB has received about the institution.
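As a purely illustrative sketch of how market-level and institution-level factors such as these could be combined into a single score for ranking institution product lines, the example below uses hypothetical factor names, weights, and values; it is not CFPB's actual scoring methodology.

```python
from dataclasses import dataclass

@dataclass
class InstitutionProductLine:
    name: str
    market_risk: float          # market size and other market-level factors, scored 0-10
    market_share: float         # institution's share of the product line, 0-1
    compliance_weakness: float  # weaker compliance management scores higher, 0-10
    complaint_severity: float   # volume and severity of consumer complaints, 0-10
    data_security_risk: float   # e.g., sensitive records held and breach history, 0-10

def priority_score(ipl: InstitutionProductLine) -> float:
    """Combine hypothetical market- and institution-level factors into one score."""
    institution_risk = (0.3 * ipl.compliance_weakness
                        + 0.3 * ipl.complaint_severity
                        + 0.4 * ipl.data_security_risk)
    # Scale market-level risk by the institution's market share, then add institution-level risk.
    return ipl.market_risk * (0.5 + ipl.market_share) + institution_risk

candidates = [
    InstitutionProductLine("CRA A - consumer reporting", 8.0, 0.35, 6.0, 7.5, 9.0),
    InstitutionProductLine("CRA B - consumer reporting", 8.0, 0.10, 4.0, 5.0, 6.5),
]
for ipl in sorted(candidates, key=priority_score, reverse=True):
    print(f"{ipl.name}: priority score {priority_score(ipl):.1f}")
```

Including a data security factor, as in this sketch, reflects the type of adjustment GAO recommended rather than CFPB's routine prioritization practice described below.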
CFPB then determines specific areas of compliance to assess by considering sources such as consumer complaints, public filings and reports, and past examination findings related to the same or similar products or institutions. Most recently, CFPB examinations of CRA s consumer reporting have focused on issues such as data accuracy, dispute processes, compliance management, and permissible purposes. Although CFPB s examination prioritization incorporates several important factors and sources, the process does not routinely include assessments of data security risk, such as how institutions detect and respond to cyber threats. CFPB staff said the bureau cannot examine for or enforce compliance with the data security standards in provisions of GLBA and FCRA or FTC s implementing rules, even at larger participant CRAs. After the Equifax breach, however, CFPB used its existing supervisory authority to develop internal guidelines for examining data security, and conducted some CRA data security examinations. CFPB staff said that they do not routinely consider data security risks during their examination prioritization process and have not reassessed the process to determine how to incorporate such risks going forward. Statute requires CFPB to consider risks posed to consumers in the relevant product and geographic markets in its risk-based supervision program. In addition, federal internal control standards state that agencies should identify, analyze, and respond to risks related to achieving defined objectives. This can entail considering all significant internal and external factors to identify risks and their significance, including magnitude of impact, likelihood of occurrence, nature of the risk, and appropriate response. In light of the Equifax breach, as well as CFPB s acknowledgment of the CRA market as a higher-risk market for consumers, it is important for CFPB to routinely consider factors that could inform the extent of CRA data security risk such as the number of consumers that could be affected by a data security incident and the nature of potential harm resulting from the loss or exposure of information. In our February 2019 report, we recommended that CFPB assess whether its process for prioritizing CRA examinations sufficiently incorporates the data security risks CRAs pose to consumers, and take any needed steps identified by the assessment to more sufficiently incorporate these risks. CFPB neither agreed nor disagreed with our recommendation. <4. Regulators Inform Consumers about Protections Available and Consumers Can Take Some Actions after a CRA Data Breach> In our February 2019 report, we noted that FTC and CFPB provide educational information for consumers on ways to mitigate the risk of identity theft. In addition, after a breach, FTC and CFPB publish information specific to that breach. For example, shortly after Equifax s announcement of the breach, FTC published information on when the breach occurred, the types of data compromised, and links to additional information on Equifax s website. Similarly, CFPB released three blog posts and several social media posts that included information on ways that consumers could protect themselves in the wake of the breach and special protections and actions for service members. At any time, consumers can take actions to help mitigate the risk of identity theft. 
For example, consumers can implement a credit freeze free of charge, which can help prevent new-account fraud by restricting potential creditors from accessing the consumer s credit report. Similarly, implementing a free fraud alert with a credit bureau can help prevent fraud because it requires a business to verify a consumer s identity before issuing credit. However, consumers are limited in the direct actions they can take against a CRA in the event of a data breach, for two primary reasons. First, consumers generally cannot determine the source of the data used to commit identity theft. As a result, it can be difficult to link a breach by a CRA (or any other entity) to the harm a consumer suffers from a particular incidence of identity theft, which makes it challenging to prevail in a legal action. Second, unlike with many other products and services, consumers generally cannot exercise choice if they are dissatisfied with a CRA s privacy or security practices. Specifically, consumers cannot choose which CRAs maintain information on them. In addition, consumers do not have a legal right to delete their records with CRAs, according to CFPB staff, and therefore cannot choose to remove themselves entirely from the CRA market. FTC and CFPB have noted that the level of consumer protection required can depend on the consumer s ability to exercise choice in a marketplace. For example, when determining whether a practice constitutes an unfair practice, FTC considers whether the practice is one that consumers could choose to avoid. Similarly, according to CFPB staff, the consumer reporting market may pose higher risk to consumers because consumers cannot choose whether or which CRAs possess and sell their information. Chairman Krishnamoorthi, Ranking Member Cloud, and Members of the Subcommittee, this concludes my prepared remarks. I would be happy to answer any questions that you may have. <5. GAO Contact and Staff Acknowledgment> If you or your staff have any questions about this statement, please contact Michael Clements at (202) 512-8678 or clementsm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contact named above, John Forrester (Assistant Director), Winnie Tsen (Analyst-in-Charge), and Rachel Siegel made key contributions to the testimony. Other staff who made key contributions to the report cited in the testimony are identified in the source product. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Why GAO Did This Study
CRAs collect, maintain, and sell to third parties large amounts of sensitive data about consumers, including Social Security numbers and credit card numbers. Businesses and other entities commonly use these data to determine eligibility for credit, employment, and insurance. In 2017, Equifax, one of the largest CRAs, experienced a breach that compromised the records of at least 145.5 million consumers.
This statement is based on GAO's February 2019 report on the CRA oversight roles of FTC and CFPB. This statement summarizes (1) measures FTC has taken to enforce CRA compliance with requirements to protect consumer information, (2) measures CFPB has taken to ensure CRA protection of consumer information, and (3) actions consumers can take after a breach.
What GAO Found
In its February 2019 report, GAO found that since 2008, the Federal Trade Commission (FTC) has settled 34 enforcement actions against various entities related to consumer reporting violations of the Fair Credit Reporting Act (FCRA), including 17 actions against consumer reporting agencies (CRA). Some of these settlements included civil penalties—fines for wrongdoing that do not require proof of harm—for FCRA violations or violations of consent orders. However, FTC does not have civil penalty authority for violations of requirements under the Gramm-Leach-Bliley Act (GLBA), which, unlike FCRA, includes a provision directing federal regulators and FTC to establish standards for financial institutions to protect against any anticipated threats or hazards to the security of customer records. To obtain monetary redress for these violations, FTC must identify affected consumers and any monetary harm they may have experienced. However, harm resulting from privacy and security violations can be difficult to measure and can occur years in the future, making it difficult to trace a particular harm to a specific breach. As a result, FTC lacks a practical enforcement tool for imposing civil money penalties that could help to deter companies, including CRAs, from violating data security provisions of GLBA and its implementing regulations.
Since 2015, the Consumer Financial Protection Bureau (CFPB) has had five public settlements with CRAs. Four of these settlements included alleged violations of FCRA; and three included alleged violations of unfair, deceptive, or abusive practices provisions. CFPB is also responsible for supervising larger CRAs (those with more than $7 million in annual receipts from consumer reporting) but lacks the data needed to ensure identification of all CRAs that meet this threshold. Identifying additional sources of information on these CRAs, such as by requiring them to register with the agency through a rulemaking or leveraging state registration information, could help CFPB ensure that it can comprehensively carry out its supervisory responsibilities. After the Equifax breach, CFPB used its existing supervisory authority to examine the data security of certain CRAs. CFPB's process for prioritizing which CRAs to examine does not routinely include an assessment of companies' data security risks, but doing so could help CFPB better detect such risks and prevent the further exposure or compromise of consumer information.
Consumers can take actions to mitigate the risk of identity theft—such as implementing a fraud alert or credit freeze—and can file a complaint with FTC or CFPB. However, consumers are limited in the direct actions they can take against CRAs. Consumers generally cannot exercise choice in the consumer reporting market—such as by choosing which CRAs maintain their information—if they are dissatisfied with a CRA's privacy or security practices. In addition, according to CFPB, consumers cannot remove themselves from the consumer reporting market entirely.
What GAO Recommends
In its February 2019 report, GAO recommended that Congress consider giving FTC civil penalty authority to enforce GLBA's safeguarding provisions. GAO also recommended that CFPB (1) identify additional sources of information on larger CRAs, and (2) reassess its prioritization of examinations to address CRA data security. CFPB neither agreed nor disagreed with GAO's recommendations. |
gao_GAO-19-666 | gao_GAO-19-666_0 | <1. DOD s Plan Generally Addresses Requirements of Section 921, but Assessing Feasibility of Reforms Is Difficult> DOD s 921 plan identifies eight initiatives across the covered activities and generally addresses most of the elements required under section 921. Specifically, section 921 required the CMO to provide a plan, schedule, and cost estimate for conducting its reforms of the covered activities. DOD s plan provides a schedule for all eight efforts, and provides a cost estimate for all but one, which OCMO officials indicated was still under development. The plan identifies costs of at least $116.3 million to $116.8 million to implement these initiatives through fiscal year 2021. We discuss DOD s funding of these costs later in this report. According to DOD s plan, the eight initiatives have the following objectives: Civilian hiring improvement. Shorten the time needed to hire civilian employees, improve the matching of enterprise needs to employee competencies, and establish standard metrics and reports on performance of an improved hiring process. Human resources regulatory reform. Develop a new proposed legal authority that allows the department to simplify, streamline, and standardize civilian personnel policies. In addition, use regulatory reform to better recruit, compensate, and retain a qualified civilian workforce at DOD. Human resources service delivery. Establish a common human resources business and service delivery model, a standard set of performance measures, and a cost accountability structure that will be applied to all human resources service providers, with a focus on certain defense agencies and field activities. Strategic sourcing of sustainment and commodity procurement. Improve the buying power of the department, increasing data transparency related to sustainment and commodity procurement, and apply best-in-class cost and contract management practices with suppliers to drive higher performance and lower cost. Maintenance work packages and bills of material. Improve the accuracy of depot maintenance work packages and related bills of material and develop recommendations for process improvements. Munitions readiness. Produce an integrated tool capable of providing senior leaders with an effective assessment of all the variables associated with the health and readiness of the munitions inventory and the ability to assess options for correcting negative trends. Service requirements review boards. Expand the use of service requirement review boards which review, validate, prioritize, and approve contracted services requirements to accurately inform the budget and acquisition process. Category management. Implement best practices for purchasing goods and services, such as consolidating separate requirements into single contracts, allowing DOD to achieve savings from volume discounts and develop tools aimed at focusing spending on contracts that meet certain best practices for management. Several of these initiatives address aspects of our prior recommendations related to the objectives of the initiatives. How findings and recommendations from GAO and agency inspectors general have been addressed in proposed reforms is among the key questions GAO has previously identified for assessing agency reform efforts. We found that DOD s initiatives address aspects of our findings and recommendations, but in some cases do not fully address them. 
For example: In September 2018, we reported that at least six organizations within DOD, including three defense agencies and field activities and the three military departments, provide human resources services to other defense agencies or organizations. All perform the same types of human resources services, such as those related to civilian workforce hiring across DOD. We also reported that there is fragmentation and overlap within the defense agencies and field activities that provide human resources services to other defense agencies or organizations within DOD. This fragmentation and overlap has resulted in negative effects, such as inconsistent performance information regarding hiring, fragmented information technology systems, and inefficiencies associated with overhead costs. We recommended, and DOD concurred, that DOD collect consistent performance information and comprehensive overhead cost information as well as establish time frames and deliverables for key reform efforts. DOD s human resource service delivery initiative is intended, in part, to address our recommendations. This initiative, however, is focused only on the defense agencies and field activities responsible for human resources service delivery, and does not include all human resources service providers we highlighted in our September 2018 report. In June 2016, we reported that the Defense Logistics Agency and the military services have some internal efficiency measures for supply and depot operations; however, they generally have not adopted metrics that measure the accuracy of planning factors that are necessary to plan efficient and effective support of depot maintenance. Additionally, the Defense Logistics Agency and the services do not track the potentially significant costs to supply and depot maintenance operations that are created by backorders. Further, we reported that without relevant metrics on cost and planning factors, DOD, the Defense Logistics Agency, and the services are unable to optimize supply and maintenance operations and may miss opportunities to improve the efficiency and effectiveness of depot maintenance. We recommended, and DOD concurred, that DOD, the Defense Logistics Agency, and the services develop metrics to monitor costs and accuracy of demand planning factors. DOD s initiative on maintenance work packages and bills of material includes steps that may, in part, address these recommendations. Specifically, the initiative plans to assess the accuracy of bills of material, one of the planning factors we recommended DOD develop and implement metrics for, but does not include assessing the accuracy of other planning factors. In August 2017, we reported that DOD s service requirement review boards were intended to prioritize and approve contracted services in a comprehensive portfolio-based manner to achieve efficiencies, but the military commands we reviewed did not do so. Instead, commands largely leveraged existing contract review boards that occurred throughout the year and focused on approving individual contracts. As a result, the review boards at these commands had minimal effect on supporting decisions within and across service portfolios or capturing efficiencies that could inform the commands programming and budgeting decisions. We recommended, and DOD concurred, that DOD clarify policies concerning the purpose and timing of the review board process. 
DOD s initiative on service requirements review boards expands the use of these boards, and indicates that they are timed to inform budgets for the following fiscal year, but does not indicate whether guidance to do so has been provided. In its concurrence, DOD stated it would update the relevant DOD instruction to include this guidance, but, as of June 2019, DOD has not issued an updated instruction that includes this guidance. Although these initiatives intend to address aspects of our prior recommendations, assessing the feasibility of DOD s reform effort is difficult because many of the planned initiatives entail collecting information that will lay the groundwork for later reforms. For example, the human resources service delivery initiative tasks the reform team to draft a project charter, collect and analyze information on human resources service providers within DOD, and eventually develop recommended courses of action for reform by fiscal year 2020. Similarly, the initiative on maintenance work packages and bills of material tasks the reform team to identify opportunities to improve processes, make recommendations to address deficiencies, improve efficiency, and improve material availability and then to develop an implementation plan for the recommendations by the end of fiscal year 2019, with implementation beginning in fiscal year 2020. <2. DOD Provided Limited Documentation of Progress in Implementing Its 921 Plan and Achieving Cost Savings, and Has Not Fully Funded Some Plan Initiatives> <2.1. DOD Provided Limited Documentation of Progress in Implementing Its 921 Plan> OCMO officials told us that DOD is making progress in implementing the 921 plan s initiatives according to the schedules contained in the plan, and they provided summary documentation stating that progress has been made on five of the eight initiatives. However, OCMO did not provide sufficiently detailed documentation for us to independently assess progress on any of the initiatives. Specifically, OCMO provided us some documentation on the progress of the eight initiatives, but this information varied by initiative and was limited. As a result, we were unable to independently assess and verify DOD s progress in implementing its initiatives. Specifically: For the human resources regulatory reform, civilian hiring improvement, and human resources service delivery initiatives, OCMO provided briefing materials on the status of each milestone under the initiatives, indicating that those initiatives are progressing according to the schedule in the plan. However, DOD did not provide separate underlying documentation for each milestone. For example, under the plan, the teams conducting these initiatives were to have established by June 2019 a common DOD process and metrics for civilian hiring, prepared drafts of updated DOD policies and fiscal year 2020 2021 talent management guidance, and collected and mapped different human resources service delivery models. However, OCMO did not provide documentation of the common DOD process and metrics for civilian hiring, drafts of updated policies and guidance, or human resources service delivery model maps. For the service requirements review boards initiative, OCMO provided documentation stating that the service requirements review boards had largely been completed on schedule, but did not provide information on the outcomes of these boards. OCMO officials told us that delays in completing 3 of 69 boards had prevented them from fully meeting planned deadlines. 
For the category management initiative, OCMO officials told us that the first two quarterly sprints (reviews of different contracts or categories of goods or services to identify savings) for fiscal year 2019 had been completed and the third was in progress, but did not provide documentation to support this assertion. For example, OCMO did not provide information on the outcomes of the sprints. For the strategic sourcing of sustainment and commodity resources, maintenance work packages and bills of material, and munition readiness initiatives, DOD did not provide any documentation on the progress of the initiatives. While most of DOD's initiatives included in its plan identify either performance metrics or targets, five of the eight initiatives also state that part of the work of the initiatives will be to establish such metrics or targets. Among our key questions for assessing agency reform efforts is the extent to which the agency has established clear outcome-oriented goals and performance measures for the proposed reforms, and whether the agency has put processes in place to collect the needed data and evidence that will effectively measure the reform's goals. Identifying and collecting this information can lay the groundwork for further reform efforts. Moreover, we found that objectives for some of the initiatives in DOD's plan are similar to those presented in prior plans with deadlines that have already passed, suggesting that progress on some initiatives is going more slowly than the department originally anticipated. For example, DOD's August 2017 report to Congress on restructuring the CMO organization included an initiative to create a single civilian personnel system and rating system for certain employees by the middle of fiscal year 2018. DOD's 921 plan contains a similar initiative on human resources regulatory reform, which aims to develop standardized civilian personnel policies and processes. Development of the initiative is not scheduled to be completed until the end of fiscal year 2019, and implementation would not occur until fiscal year 2020, at the earliest, compared to the original fiscal year 2018 deadline for the initiative. <2.2. DOD Reported Cost Savings from Broader Reform Efforts but Provided Limited Documentation of Those Savings> DOD has stated that its business operations reform efforts, which are not limited to the covered activities under section 921, will produce cost savings; however, DOD did not provide underlying documentation to allow us to independently validate the savings. Specifically, in its budget materials for fiscal year 2020, released in March 2019, DOD reported that its reform efforts had saved $4.7 billion in fiscal years 2017 and 2018, and are expected to save $6.0 billion in fiscal year 2019 and $7.7 billion in fiscal year 2020, the first year of required savings under section 921. Of the $7.7 billion in expected savings for fiscal year 2020, about $2.6 billion were in business process and systems improvements. According to OCMO and Office of the Under Secretary of Defense (OUSD) (Comptroller) officials, the OUSD (Comptroller) has validated these savings and the savings have been programmed or budgeted in the fiscal years reported. Specifically, according to OUSD (Comptroller) officials, all of the savings reported in DOD's budget materials have been validated against the OUSD Comptroller's own systems that record budget information and decisions that are incorporated into DOD's programming and budgeting process.
OUSD (Comptroller) provided a spreadsheet detailing the various reforms and savings DOD cited in its budget materials, but did not provide the underlying support to allow us to independently validate the savings, such as documentation of budgetary decisions that reflect the savings. Our prior work over the past 7 years has found repeated shortcomings in DOD s ability to demonstrate that it has achieved its goal for savings from reform efforts. Most recently, in September 2018, we reported that DOD could not demonstrate that it met several cost savings requirements mandated by the NDAA for Fiscal Year 2016, in part because there were no baseline costs established to measure any reductions against and documentation supporting cost savings estimates from other efficiencies was not detailed enough. DOD is taking steps to address this challenge and report on its cost baseline to perform all covered activities by January 1, 2020, as required by section 921. Specifically, in March 2019, we reported that OCMO is taking steps to establish cost baselines for DOD s major lines of business through the fiscal year 2019 2020 timeframe. According to OCMO officials, they are also regularly adjusting the fiscal year 2019 baseline to reflect savings identified during the fiscal year. As of June 2019, OCMO is reviewing its approach for reporting the savings required by section 921 and plans to complete the review by October 2019. OCMO is coordinating with OUSD (Comptroller) on both establishment of the baseline and reporting of savings. <2.3. DOD Has Not Fully Funded Some of the Initiatives in Its 921 Plan> While DOD has already funded some of the initiatives included in its plan through its annual budget request process, it continues to face challenges obtaining funding for others. According to DOD s plan, four of the eight initiatives had no costs associated with them or the initiative has been funded to date using existing resources through the regular budget process, and DOD does not anticipate any additional costs for the initiatives. Funding needs for the remaining four initiatives have not been fully determined or met. Specifically: 1. Funding needs for the human resources service delivery initiative have not yet been determined. OCMO expects to fund the cost of this initiative as a part of the initial stand-up costs for OCMO s Office of Fourth Estate Management in fiscal year 2020. OCMO officials told us they are reviewing baseline needs for the office and anticipate realigning resources to support the new office. 2. Funding needs for the human resources regulatory reform initiative have been determined, but OCMO has not confirmed that funding has been obtained. DOD s plan states that future costs for the initiative may include approximately $500,000 for research and studies. To the extent possible, the plan states, DOD will use funds from the OUSD for Personnel and Readiness for studies, but DOD has not indicated that those funds have been obtained. 3. Funding needs for the strategic sourcing of sustainment and commodity procurement initiative have not been determined. According to OCMO, the Defense Logistics Agency and the military services are developing a detailed cost estimate for this initiative. However, neither the plan nor OCMO officials we spoke with identified where any funding that may be needed will come from once the costs are determined. 4. 
Funding needs for the plan's category management initiative to conduct reviews of contracts and categories of goods and services have not been fully met. The initiative includes quarterly sprints reviewing different contracts or categories of goods or services to identify savings. According to DOD's plan, each sprint is assisted by consulting firms and industry analyses and is estimated to cost about $11 million. DOD plans to complete a total of 10 sprints, at a total cost of $110 million. According to OCMO, limited funding has hindered execution of two of the sprints so far. OCMO has requested $12 million in its budget request for fiscal year 2020 to support this effort and expects the remaining sprints to be funded by savings identified through earlier sprints. However, in January 2019, we reported on problems associated with this approach. Specifically, we reported that OCMO officials told us the department initially planned to use available funding from OCMO or the savings generated by reform initiatives to fund development of other initiatives, but has since recognized that additional funding is needed. Among the key questions we previously identified for assessing agency reform efforts is the extent to which the agency has considered how the upfront costs of proposed reforms will be funded. In January 2019, we reported that some reform teams lacked resources to fully implement approved initiatives. We recommended, and DOD concurred, that DOD establish a process to identify and prioritize funding for implementing its cross-functional teams' business reform initiatives. An OCMO official told us OCMO updated its reform management framework (the process it uses for managing its business reform efforts) in part to address this recommendation. However, in light of the continued challenges related to funding that we identified as part of this review, the effectiveness of changes to this framework at this time is unclear. As a result, we will continue to monitor the extent to which OCMO's adjustments to its processes have addressed this recommendation as OCMO continues to implement its business reforms. <3. Agency Comments> We provided a draft of this report to DOD for review and comment. In response, DOD officials told us they concurred and had no comments on the report. We are sending copies of this report to the appropriate congressional committees and to the Secretary of Defense and Deputy Chief Management Officer. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2775 or fielde1@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Changes to DOD's Reform Teams and Processes Since we last reported on the Department of Defense's (DOD) business reform efforts in January 2019, the Office of the Chief Management Officer (OCMO) has, among other things, changed the composition of the teams and the framework it is using to manage the efforts. Specifically, OCMO has disestablished the teams on real property management, human resources, and testing and evaluation, split the team on information technology and business systems into two separate teams for information technology and business systems, and made changes to the leadership or composition of each of the remaining teams.
See table 1 for a summary of these changes. According to an OCMO official responsible for OCMO's management of the reform efforts, OCMO has not removed any initiatives from the business reform efforts as a result of the changes to these teams, but some teams' initiatives were absorbed into other business reform teams or organizations that OCMO believed were more appropriate for leading the initiatives, such as the relevant DOD principal staff assistant. For example, the category management team assumed responsibility for the real property management team's initiatives. According to the same official, OCMO's new Fourth Estate Management Office and components of the Office of the Under Secretary of Defense (OUSD) for Personnel and Readiness assumed responsibility for some of the human resources team's initiatives. In addition, an OCMO official told us OCMO revised its business reform management framework (the process it uses for managing its business reform efforts). According to an overview of the new framework provided by OCMO, the new process is designed to establish a simplified, standardized, and repeatable process for managing these reforms and identifying and prioritizing funding for reform initiatives. An OCMO official told us that one of the goals of the updated process is to improve the uniformity of documentation across business reform teams and initiatives. That official told us the updated process also reduced the number of decision points through which reform teams receive approval from DOD's Reform Management Group to proceed with an initiative, from five to two. Further, OCMO introduced new processes for estimating and tracking the costs and potential savings resulting from reform initiatives. Among other things, the updated framework includes input from the OUSD (Comptroller). Specifically, according to OCMO documentation and OUSD (Comptroller) officials, OUSD (Comptroller) officials review estimates of the costs and potential savings recorded in OCMO's reform management portal (a database OCMO uses to monitor business reform initiatives). OUSD (Comptroller) assigns a confidence score based on the degree to which each initiative has been developed. According to an OUSD (Comptroller) official, initiatives that are less developed will have a lower confidence score because they are further from full implementation and subject to more unknowns than those that are closer to implementation. OUSD (Comptroller) officials told us OUSD (Comptroller) uses confidence scores to adjust estimates of potential savings, and to lower potential savings associated with newer initiatives. According to OUSD (Comptroller) officials, these estimates of potential savings are not included in any savings amounts the department reports externally, such as in DOD budget materials, until they are actually programmed or budgeted. Appendix II: GAO Contact and Staff Acknowledgments <4. GAO Contact> <5. Staff Acknowledgments> In addition to the contact named above, Margaret Best (Assistant Director), Daniel Ramsey (Analyst-in-Charge), Sierra Hicks, Alexa Kelly, and Richard Powelson made key contributions to this report. Other contributors included Bonnie Anderson, Tracy Barnes, Arkelga Braxton, Timothy J. DiNapoli, Michael Holland, Richard Larsen, Ned Malone, Ron Schwenn, Anne Stevens, John Van Schaik, and Sarah Veale. | Why GAO Did This Study
DOD spends billions of dollars each year to maintain key business operations intended to support the warfighter. The John S. McCain National Defense Authorization Act for Fiscal Year 2019 established requirements for DOD to reform its enterprise business operations. Section 921 of the act required the Secretary of Defense, acting through the Chief Management Officer, to submit to the congressional defense committees by February 1, 2019, a plan, schedule, and cost estimate for reforms of DOD's enterprise business operations to increase effectiveness and efficiency of mission execution.
Section 921 also requires GAO to provide a report assessing the feasibility of the plan. GAO's objectives were to assess (1) DOD's 921 plan, including its feasibility in reforming DOD's business operations, and (2) the extent to which DOD has made progress in implementing the plan and its broader reform efforts.
GAO reviewed DOD's plan and associated documentation and interviewed DOD officials on efforts to reform business operations of the department, including the development and implementation of the plan. GAO also reviewed its past work on DOD reform efforts and the specific subject areas covered by DOD's reform initiatives.
GAO has previously made eight recommendations related to DOD's reform initiatives from three prior reports. DOD concurred with those recommendations and is working to address them, in part through the initiatives GAO discusses.
What GAO Found
The Department of Defense's (DOD) April 2019 plan for business reform identifies eight initiatives related to civilian resources management, logistics management, services contracting, and real estate management. According to the plan, these initiatives will cost at least $116 million to implement through fiscal year 2021. GAO found that the plan generally contains the elements required under section 921—a schedule and cost estimate—and that several initiatives address aspects of GAO's prior recommendations. However, because many of the planned initiatives entail collecting information that will lay the groundwork for later reforms, assessing the feasibility of DOD's reform effort is difficult. For example, one logistics reform initiative plans to identify opportunities to improve processes, make recommendations, and develop an implementation plan for the recommendations by the end of fiscal year 2019.
Although DOD officials told GAO that the department is making progress implementing the plan's initiatives and achieving cost savings on its broader efforts, DOD provided limited documentation of that progress. As a result, GAO could not independently assess and verify this progress. For example:
Office of the Chief Management Officer (OCMO) officials provided briefing charts on the status of milestones for DOD's three human resource–related initiatives stating that those initiatives are progressing according to the schedule, but did not provide underlying documentation for each milestone.
According to DOD, its broader reform efforts have saved or are expected to save about $18.4 billion between fiscal years 2017 and 2020. According to Under Secretary of Defense (Comptroller) officials, they have validated these savings. However, DOD did not provide any supporting documentation that would allow GAO to independently validate these savings. GAO's prior work has found repeated shortcomings in DOD's ability to demonstrate that it has achieved its goals for savings from reform efforts. DOD is taking steps to address these challenges, including establishing cost baselines for DOD's major lines of business and incorporating Comptroller input into estimates of the costs and potential savings from initiatives as they are developed.
Further, according to the plan, DOD has provided funding through its annual budget process for four of the eight initiatives included in its plan. For the four remaining initiatives, OCMO has identified a source of funding but not obtained that funding for two initiatives, is awaiting a cost estimate for one initiative, and has identified only partial funding for one initiative, which is designed to review contracts and categories of goods or services on a quarterly basis to identify savings. OCMO anticipates that savings identified in earlier rounds of this initiative will fully fund later rounds. However, in January 2019, GAO reported that, according to OCMO, DOD initially planned to fund its reform initiatives in part with savings generated by other initiatives, but recognized that this approach did not work because additional funding was needed. GAO recommended that DOD establish a process to identify and prioritize funding for implementing its initiatives. OCMO has updated its processes for managing its reform efforts in part to address this issue, but the effects of this update at this time are unclear. |
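To make the dollar figures cited in the findings above easier to follow, the short sketch below simply reproduces the arithmetic behind two of the reported totals: DOD's reported and expected savings of about $18.4 billion for fiscal years 2017 through 2020, and the roughly $110 million estimated cost of the category management initiative's 10 planned sprints. The dollar amounts are taken from the report text; the Python code is an illustrative aid only and is not part of GAO's or DOD's methodology.

# Illustrative arithmetic only; dollar amounts come from the report text above.
reported_savings_billions = {
    "FY2017-2018": 4.7,  # savings DOD reported for fiscal years 2017 and 2018
    "FY2019": 6.0,       # expected savings for fiscal year 2019
    "FY2020": 7.7,       # expected savings for fiscal year 2020
}
total_savings = sum(reported_savings_billions.values())
print(f"Reported and expected savings, FY2017-FY2020: ${total_savings:.1f} billion")  # about $18.4 billion

# Category management initiative: 10 planned "sprints" at an estimated $11 million each.
cost_per_sprint_millions = 11
planned_sprints = 10
print(f"Estimated total sprint cost: ${cost_per_sprint_millions * planned_sprints} million")  # $110 million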
gao_GAO-20-320 | gao_GAO-20-320_0 | <1. Background> <1.1. Air Force Use of RPAs and Basing Locations> The Air Force operates several types of RPAs: the MQ-9 Reaper; RQ-4 Global Hawk; and RQ-170 Sentinel. The MQ-9 Reaper RPA community has about four times the number of pilots and eight times the number of sensor operators assigned as compared to the next largest RPA community (the RQ-4 Global Hawk). Additionally, the MQ-9 Reaper RPA provides persistent intelligence, surveillance, and reconnaissance and strike capabilities against high-value, fleeting, and time-sensitive targets. It is operated by an aircrew that includes an officer pilot and enlisted sensor operator. See figure 1. The Air Force RPAs operate remote split operations, which divides the control of the RPA among geographically separated units. Remote split operations employ a launch and recovery ground control station unit aircrew who controls the RPA s take-off and landing at an overseas operating location while a crew based in the continental United States (i.e., the Mission Control Element unit) flies the RPA the remainder of the mission via electronic links. Remote split operations result in fewer personnel deployed overseas, consolidates flying multiple aircraft from one location, and as such, simplifies command and control functions as well as the logistical supply challenges for the weapon system. RPA operations include Active Duty and Air National Guard personnel and locations. Figure 2 shows the location of bases involved in RPA training and MQ-9 Reaper RPA operational locations with the active-duty sites bolded. <1.2. Demand for RPA Capabilities> Over nearly two decades, the number of combat lines and flying hours for RPAs has grown substantially. Specifically, in 2008, the Air Force flew 33 RPA combat lines but in 2015, the number had increased to 60 RPA combat lines. A combat line is the measure of the capability to provide near-continuous 24-hour flight presence of an RPA over a specific region on Earth, to include time flying to and from a specific target area. In doing so, the RPA can provide air action against hostile targets that are in close proximity to friendly forces, gather intelligence, or, if necessary, employ its weapons to strike identified targets. Additionally, the number of combat flying hours has also increased from calendar year 2000, as shown in figure 3 below, and reached 4 million cumulative combat hours in March 2019. In March 2016, General Herbert J. Carlisle, then-commander of Air Combat Command, testified to the Senate Armed Services Committee s Subcommittee on Airland that the RPA enterprise has been a victim of its own success with an insatiable demand for RPA forces that was taxing the capability of the community. To meet the demand for RPA pilots, the Air Force has pursued efforts to increase the number of RPA pilots. For example, the Air Force trained traditional manned-aircraft pilots to fly RPAs and placed graduates of manned-aircraft pilot training into RPA training rather than in advanced manned-aircraft training. In 2010, the Air Force created a dedicated RPA pilot career field (i.e., 18X specialty code) and developed a training program for pilots who specialize in flying RPAs. In December 2013, there were 1,366 Air Force RPA pilots, of which 249 were dedicated RPA pilots (18 percent). Six years later, in December 2019, the number of total Air Force RPA pilots had grown to 1,768, with 1,127 of those being dedicated RPA pilots (64 percent). <1.3. 
Training Process> MQ-9 Reaper RPA pilots and sensor operators complete multiple phases of training designed to generate combat mission capable aircrews within approximately a year of starting training. First, the pilots initially attend RPA Flight Training in Pueblo, Colorado, and then Undergraduate RPA Training at Randolph Air Force Base, Texas, which includes instrument qualification in simulators and an RPA fundamentals course. Second, they complete MQ-9 Initial Qualification Training at the formal training unit at either Holloman Air Force Base in New Mexico, March Air Reserve Base in California, or Hancock Field Air National Guard Base near Syracuse, New York. Finally, they are assigned to an operational squadron, where they complete unit-specific Mission Qualification Training that can vary in length. According to officials at two RPA bases, their respective Mission Qualification Training was taking between six and 10 weeks or as much as 17 weeks to complete. MQ-9 Reaper RPA sensor operators go through a similar pipeline. They complete courses on aircrew fundamentals and the basics of being a sensor operator at Lackland Air Force Base, Texas, and Randolph Air Force Base, Texas, respectively. Then, they complete training at the MQ-9 Reaper RPA formal training unit at Holloman Air Force Base, New Mexico; March Air Reserve Base, California; or Hancock Field, Syracuse, New York. Finally, they complete unit-specific Mission Qualification Training in the operational unit at which they are assigned after graduation. Figure 4 shows the MQ-9 Reaper RPA aircrew training pipeline. <2. The Air Force Has RPA Pilot and Sensor Operator Staffing Shortages and Does Not Track Its Progress toward Implementing Its Combat-to-Dwell Policy as Planned> The Air Force does not have enough RPA pilots and sensor operators to meet its staffing targets, and it does not track its overall progress in accessing and retaining the quantities of RPA personnel needed to implement its combat-to-dwell policy as planned. More specifically, the Air Force has not consistently met its accession targets for RPA pilots and sensor operators and has had fewer RPA pilots and sensor operators than it has needed for most years from fiscal year 2016 through fiscal year 2019. The Air Force has offered financial retention incentives to RPA pilots and sensor operators; however, it does not directly measure RPA pilot and sensor operator retention rates, and retention concerns exist. Moreover, the Air Force does not track the overall progress being made from its accession and retention efforts to maintain sufficient quantities of RPA pilots and sensor operators needed to implement its combat-to-dwell policy as planned, a policy intended to better balance RPA units' time in combat operations with time spent away from those operations to accomplish other activities such as training.
In 2014, we reported that the Air Force did not achieve its accession targets for RPA pilots in fiscal years 2012 and 2013 and recommended that the Air Force develop a tailored accession strategy for RPA pilots to help ensure that it can meet and maintain required staffing levels to meet its mission. The Air Force concurred with the recommendation and took steps to address accession issues for RPA pilots, such as having officers with RPA pilot experience serve at the U.S. Air Force Academy as instructors and as ROTC detachment commanders and instructors at several large, nationally recognized universities, thus giving attention to the career field among future Airmen. Because of these actions to address RPA accessions, the Air Force met the intent of our recommendation. Since then, however, the Air Force has not consistently met its annual accession targets from fiscal years 2015 through 2019, as shown in figure 5. As shown in figure 5, for the 5-year period between fiscal years 2015 and 2019, the average accession target fill rates for pilots and sensor operators were 95 and 88 percent, respectively. Air Force officials told us that they do not believe the RPA pilot career field is facing an accessions problem and thus there is no need to offer an accession bonus because the overall population of RPA pilots has been steadily growing year after year. These officials attribute the trend to the appealing RPA mission. Participants in 12 of 14 focus groups we conducted agreed that the ability to affect front line combat operations and missions every day was a positive aspect of the job. For sensor operators, Air Force officials told us that the number entering active-duty service reflects the number who had finished Basic Military Training and their first RPA-specific training course. These numbers would have been higher but Air Force officials stated they have determined that about 11 percent are disqualified during Basic Military Training sensitive skills screening. This screening involves identifying individuals upon entry into the service with behavioral or mental health issues and is used for, among other things, determining a trainee s job classification and qualification for sensitive occupations. According to Headquarters Air Force officials, the 711th Human Performance Wing at Wright Patterson Air Force Base, Ohio, has ongoing research to help better identify the right types of airmen for RPA positions beyond the vocational aptitude battery test given to determine how qualified an enlistee is for certain occupations. They said that they expect the results of that research to be disseminated in early fiscal year 2021. <2.1.2. The Air Force Generally Has Had Fewer RPA Pilots and Sensor Operators Than It Has Needed since 2016> According to Air Force data, the service has had fewer RPA pilots and sensor operators as compared to both their respective requirements and authorizations for almost the entire time between fiscal years 2016 through 2019. More specifically, the number of RPA pilot and sensor operator requirements has increased every year in support of the Air Force s plan to create a new wing by 2024 that is needed to implement the combat-to-dwell policy. These Air Force requirements represent minimum essential resources needed to accomplish approved missions and functions that are valid, unconstrained, and realistic. 
After establishing the number of required positions, the Air Force fills these required positions to the extent possible based first on the number of those positions funded by Congress (i.e., authorizations) and then the number of trained and qualified personnel available to assign to those positions. Since fiscal year 2016, the overall number of authorized and assigned Air Force RPA pilots and sensor operators has increased. However, for a majority of the time in fiscal years 2016 through 2019, the Air Force's numbers of assigned RPA pilots and sensor operators were less than both of their respective authorizations and requirements, as shown in figures 6 and 7. The overall number of assigned RPA pilots has increased; however, this trend has not been enough to meet the increased number of authorized positions in this RPA career field. For example, for RPA pilots, there was a 22-percent gap between authorizations (1,168) and assigned (908) in August 2015, which was similar to the 20-percent gap between authorizations (1,652) and assigned (1,320) in September 2019. The Air Force's Rated Officer Retention Analysis report for fiscal year 2019 states that each of the four rated groups (pilots, combat system officers, air battle managers, and RPA pilots) ended fiscal year 2019 in a deficit. Current projections indicate that the pilot deficit will continue into the near future. The report went on to say that while the number of assigned RPA pilots actually grew in fiscal year 2019, increases in the requirements for this career field reduced or negated the effect of the increase. Additionally, there was less than a 10 percent gap between the number of authorized and assigned sensor operators during fiscal year 2016. However, by September 2019, a gap of 28 percent had developed (1,277 authorizations versus 919 assigned). <2.1.3. The Air Force Has Provided Financial Incentives to Retain RPA Personnel but Does Not Directly Measure RPA Pilot and Sensor Operator Retention Rates and Retention Concerns Exist> To encourage the retention of RPA pilots and sensor operators, the Air Force has provided financial incentives for many years. For example, the National Defense Authorization Act for Fiscal Year 2017 authorized RPA pilots to receive aviation incentive pay of up to $1,000 a month and an aviation retention bonus of up to $35,000 for those who are willing to extend their service. In addition, the Air Force has offered a number of financial incentives to RPA sensor operators. At various times from January 2010 through November 2019, RPA sensor operators were eligible for monthly aviation incentive pay, critical skills incentive pay, or special duty assignment pay to address retention issues, and have occasionally been eligible for Selective Retention Bonuses. In November 2019, the Air Force offered a Selective Retention Bonus to RPA sensor operators who were eligible to reenlist and had between 17 months and 6 years of military service. To measure long-term retention trends among pilots other than RPA pilots, the Air Force calculates two retention metrics: the Cumulative Continuation Rate and the Total Active Rated Service rate. However, the number of RPA pilots (i.e., Air Force Specialty Code 18X pilots) is still too small to provide enough data to reliably calculate these standard retention metrics, since the career field was not established until 2010.
Officials at Headquarters Air Force and Air Combat Command told us that to calculate the Total Active Rated Service metric, the Air Force would need about 20 years of data; however, the RPA pilot career field is too new to have that amount of data. These RPA pilots have a 6 year Active Duty Service Commitment, which begins at the end of their undergraduate RPA training at Randolph Air Force Base. According to Air Force officials, the first group of 18X pilots service commitments ended in fiscal year 2019. Senior leaders at an RPA base we visited said that due to the newness of the RPA pilot 18X career field, the Air Force does not currently have enough historical data to help predict retention trends going forward. They also noted that until the combat-to-dwell policy is implemented, it is unknown what effect it will have on RPA personnel retention. According to Air Force officials, the Air Force tries to retain about 60 to 65 percent of those who have completed their initial service commitment and are eligible to be retained. However, this target is based on the average aviation retention bonus acceptance rates (i.e., the percentage of pilots accepting the retention bonuses) for healthy and established career fields where the number of required positions are not substantially increasing and which are able to meet between 95 to 100 percent of their staffing requirements. However, as previously discussed, RPA pilot requirements have increased about 74 percent in the 5 years from fiscal years 2015 through 2019. Therefore, these Headquarters Air Force officials stated that use of the 60 to 65 percent target may not be an appropriate target for RPA pilot retention. In the case of RPA pilots, if the Air Force met that target, Air Force officials said the service would still be understaffed due to the growing requirements, so the retention target would need to be higher. Further, they stated that while aviation retention bonus acceptance rates are leading indicators of retention, they are not measures of actual retention rates and there are limitations to using this approach. For example, one limitation is that pilots may choose to stay in the Air Force but not take the aviation retention bonus to exercise more control and flexibility over their career. In these cases, actual retention would be higher than the aviation retention bonus acceptance rate suggests. According to the Air Force s annual Rated Officer Retention Analysis reports we reviewed, the combined aviation retention bonus acceptance rates for RPA pilots both with and without previous manned aircraft experience completing their initial service commitment were approximately 55 percent in fiscal year 2016, 64 percent in fiscal year 2017, and 60 percent in fiscal years 2018 and 2019. Our comparison of the aviation retention bonus acceptance rates for RPA pilots with previous manned aircraft experience to those without that experience suggests that the pilots without that experience have consistently had lower bonus acceptance rates, as shown in table 1. As far back as April 2014, we reported that there were indications the Air Force could be facing challenges retaining RPA pilots in the future. Despite the existence of incentive payments, pilots in seven of the 10 focus groups we conducted at that time indicated that retention of RPA pilots was or would be a challenge. 
We recommended that the Air Force develop a retention strategy that was tailored to the needs and challenges of the RPA pilots to help ensure the Air Force could meet and retain required staffing levels to meet its mission. The Air Force took some steps to address RPA pilot retention, such as expanding RPA operations to an additional base to increase assignment choices and decreasing the number of combat lines that RPA aircrews were flying to reduce their workload. Further, in July 2018, officials said that the Air Force established a new division at Headquarters to serve as a focal point for overseeing RPA personnel matters for the service. Because of these actions to address RPA retention, the Air Force met the intent of our recommendation. However, in our current review, we found indicators of concern regarding RPA pilot retention. For example, officials in varying leadership positions in the Air Force raised concerns about RPA pilot retention. Air Combat Command officials stated that they assume that about 30 percent of RPA pilots each year will have to be replaced due to attrition. Senior leaders at one RPA base that we visited told us that not having dwell time as a break from constant combat operations negatively impacts RPA personnel resiliency and retention. They said that to get a break from combat operations, RPA personnel turn to the Air National Guard or separate. They noted that people join the Air Force to see and do things, not to be exposed to constant combat operations in less than appealing locations. Further, according to RPA officials, personnel stated in exit interviews that they wanted more temporary duty opportunities, deployments, exercises, and other opportunities for better career development. Similarly, senior leaders at another location we visited said that the lack of training and leadership opportunities affects retention. They noted that there are hundreds of pilots at Creech Air Force Base, but only one wing commander, and this has a chilling effect given the limited leadership opportunities available. With regard to RPA sensor operators, Headquarters Air Force officials stated that the Air Force does not have an RPA-specific sensor operator retention goal, but rather it generally aims to retain about the same amount as other career enlisted aviator career fields have historically retained, which is about 70 percent. However, according to a February 2017 memorandum, the RPA sensor operators experienced a steady decline in retention since 2012. This memorandum requested Special Duty Assignment Pay for RPA sensor operators stating that airmen in this career field were placed under enormous personal and professional demands. It also stated that in a 2-year sample, 2014-2016, the Air Force Personnel Center reported a 31 percent reenlistment decrease for first term RPA sensor operators, a 7 percent decrease for second term RPA sensor operators, and a 16 percent decrease for career RPA sensor operators. Specifically, the memorandum said that in 2016 the reenlistment rates for RPA sensor operators were 44 percent, 54 percent, and 74 percent for first-term, second-term, and career RPA sensor operators, respectively. In comparison, these rates were 19 percent, 22 percent, and 16 percent lower than the average rate across all Air Force Career Enlisted Aviators. The Air Force approved this Special Duty Assignment Pay for RPA sensor operators effective in November 2017. 
Additionally, effective October 2018 and again in July 2019 and November 2019, RPA sensor operators were eligible to receive Selective Retention Bonuses. Coinciding with the start of these financial incentives in fiscal year 2018, Air Force data showed increases in RPA sensor operator reenlistment rates as compared to fiscal year 2017 reenlistment rates (see table 2). While Air Force data show improvements in RPA sensor operator reenlistment rates, officials we spoke with shared concerns about retention-related issues specifically regarding sensor operators. For example, a senior leader at one RPA base we visited said that there is an acknowledged retention problem within the sensor operator community citing one of the factors being the perception among sensor operators that private contractors pay more than the Air Force. An Air Force document justifying the Selective Retention Bonus states that contractors are targeting experienced RPA sensor operators for six-figure salaries of greater than $100,000 per year. Similarly, a senior leader at one RPA base we visited stated that contractors are paying sensor operators 2 to 4 times as much as the Air Force does, essentially making the Air Force a pipeline for RPA personnel to become government contractors. Moreover, participants in each of the senior RPA sensor operators (i.e., E5-E9) focus groups that we conducted told us that they thought the retention bonuses and financial incentives were too small to matter in their retention decision- making. In a questionnaire we administered to the 105 participants across the 14 focus groups, nearly half (19 of 41) of the sensor operators responded they were somewhat dissatisfied or very dissatisfied with their total compensation versus 20 percent (13 of 64) of pilots who responded they were somewhat dissatisfied or very dissatisfied. <2.2. The Air Force Does Not Track Its Progress in Implementing Its Combat- to-Dwell Policy within Its Projected Timeframe> The Air Force does not track its overall progress of accessing and retaining sufficient quantities of RPA pilots and sensor operators needed to achieve its goal of implementing the combat-to-dwell policy in fiscal year 2024. Specifically, in a February 2018 briefing to Congress, the Air Force stated it planned to fully implement the combat-to-dwell policy in fiscal year 2024. Headquarters Air Force officials stated that in order to meet this 2024 goal, the Air Force is working to increase the number of trained RPA pilots and sensor operators through its accession, training, and retention efforts because they said it cannot implement the combat- to-dwell policy if it lacks sufficient quantities of available personnel. Several senior leaders at each of the locations we visited discussed the importance of achieving and sustaining a sufficient level of staffing that is needed to implement the dwell policy. One senior leader emphasized that the Air Force made getting to dwell its cornerstone promise. Officials stated that pilots and sensor operators are currently only able to accomplish training that can be done while completing combat missions because the RPA personnel are currently flying 24/7 combat missions. The January 2017 combat-to-dwell policy emphasized the need for the implementation of dwell time within the RPA community to allow these units to focus on either combat operations or training, but not both at the same time. 
This policy states that dwell time is essential for preventing future risk to the mission and preserving the combat capability of the RPA force. Headquarters Air Force officials stated that they were hopeful that implementing the combat-to-dwell policy would improve quality of life and reduce burnout among RPA personnel by allowing them to take a break from combat operations to rest and train. Officials acknowledged that poor quality of life conditions for RPA personnel negatively affect retention. According to an Air Force instruction related to the RPA community, it is important to build a sustainable and healthy force, and retention affects virtually all aspects of the Air Force's effort to meet its goal of attaining the proper number of aircrew personnel. Further, the instruction states that understanding the connection between the accession of new recruits, the training and production requirements of new aircrew members, and the ability of units to absorb newly trained aircrews into the structure and operations of the forces is critical to maintaining a healthy aircrew force and to achieving Air Force goals. However, the Air Force does not know its overall progress toward achieving its goal of having sufficient quantities of RPA pilots and sensor operators to implement the combat-to-dwell policy in fiscal year 2024 as planned. Thus far, Headquarters Air Force officials said that the Air Force has been focused on retaining as many RPA pilots and sensor operators as possible in an effort to meet the increasing staffing authorizations. The Standards for Internal Control in the Federal Government states that management should track achievements and actual performance, compare actual performance to plans, goals, and objectives, and analyze significant differences. However, officials explained that the Air Force does not have a comprehensive metric (or set of metrics) that allows it to track changes in the number of its RPA pilots and sensor operators resulting from its combined accession and retention efforts over a projected timeline. This prevents the Air Force from being able to compare its progress against its goal of having sufficient numbers of RPA pilots and sensor operators to fully implement the policy as planned by fiscal year 2024. Air Force RPA officials stated that the service does not have a metric (or set of metrics) that measures a glide path to the health and stability of the RPA workforce by balancing both accessions and retention of RPA personnel in order to know when changes might be needed over time to achieve the goal of implementing the combat-to-dwell policy. Without such a metric (or set of metrics), it is unclear whether the Air Force is on track to have enough RPA pilots and sensor operators to achieve implementation of its combat-to-dwell policy or to know if adjustments are needed to its accession and retention efforts or to the policy's implementation timeframe. Taking such action is critical for the Air Force to position itself to address long-standing RPA pilot and sensor operator shortages and documented challenges in the management of these communities through its combat-to-dwell policy. Absent such action, a key component of the Air Force's workforce will not be well-positioned to meet its mission for the nation.
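To illustrate the kind of glide-path tracking officials described, the sketch below projects a notional pilot inventory against an authorization target using assumed annual accession and loss figures. Every number, name, and the 2024 target in the example is a hypothetical assumption for illustration only; it is not Air Force data or an Air Force model.

```python
# Minimal sketch of a "glide path" metric, assuming hypothetical inputs:
# starting inventory, planned annual accessions, an assumed annual loss rate,
# and the authorized positions needed to implement combat-to-dwell.
def project_inventory(start_inventory, annual_accessions, annual_loss_rate, years):
    """Project year-end inventory given constant accessions and a constant loss rate."""
    inventory = start_inventory
    path = []
    for year in years:
        losses = round(inventory * annual_loss_rate)
        inventory = inventory + annual_accessions - losses
        path.append((year, inventory))
    return path

# Hypothetical illustration: 1,200 pilots on hand, 300 accessions per year,
# 10 percent annual losses, 1,650 authorizations needed by fiscal year 2024.
authorized_2024 = 1650
for year, on_hand in project_inventory(1200, 300, 0.10, range(2020, 2025)):
    gap = authorized_2024 - on_hand
    print(f"FY{year}: projected inventory {on_hand}, gap to 2024 target {gap}")
```

Comparing a projection of this kind against authorizations each year would show whether accession or retention assumptions need to change, or whether the fiscal year 2024 implementation timeframe itself needs adjustment.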
<3. The Air Force Has Not Fully Identified the Number of Instructor Positions Needed and Has Experienced Training Unit Staffing Shortages> <3.1. The Air Force Has Not Fully Identified Its Pilot and Sensor Operator Instructor Positions Needed at Its Holloman Air Force Base Formal Training Unit> The number of active-duty RPA pilot and sensor operator instructor positions required at the Holloman formal training unit is understated and does not reflect current training instructor needs. More specifically, the number of instructor positions needed was developed using a 2009 program of instruction with a length of 49 training days and was never updated to reflect changes to the syllabus length, which, as of July 2019, was 83 training days. Air Force documentation showed that if 100 percent of the formal training unit's currently identified active-duty instructor positions were filled, they could provide only 47 percent of the total course instruction currently identified. To provide the rest of the course instruction, the formal training unit relies heavily on contractors. Air Force information shows that, as of July 2019, contractors provided 53 percent of instruction, active-duty personnel provided 27 percent, and 20 percent remained unaccomplished (i.e., not provided). The Standards for Internal Control in the Federal Government states that management should use quality information to make informed decisions to achieve its objectives. Quality information is, among other things, current, complete, and accurate. Further, a 2017 report to Congress on the implementation progress of the Air Force's actions to ensure a sustainable RPA operational force stated that having maximum instructor staffing was critical to generating new RPA pilots. However, the Air Force continues to use the out-of-date, inaccurate, and incomplete number of active-duty RPA pilot and sensor operator instructor position requirements that was originally developed based on the 2009 program of instruction. Without using quality information, the Air Force does not fully know the number of active-duty RPA pilot and sensor operator instructor positions necessary for sufficiently training RPA aircrews. As such, it may not be fully addressing the challenges affecting the training unit's staffing and ability to produce the needed number of aircrews to support the continued demand for RPAs and the implementation of its combat-to-dwell policy as planned. <3.2. The Air Force Has Experienced Staffing Shortages at Its Holloman Formal Training Unit since Fiscal Year 2016> Since fiscal year 2016, the Holloman formal training unit has been unable to meet its authorized instructor staffing levels even though those position counts are based on an out-of-date number of training days from the 2009 program of instruction that underestimates actual instructor requirements. In 2015, senior Air Force leaders developed the Get Well Plan, and the Secretary of the Air Force and other top leadership helped develop the plan's two goals: to staff 100 percent of the positions for (1) instructors at the RPA pilot school and (2) combat RPA pilots. In the March 2017 report to Congress, the Air Force again emphasized that maximum instructor staffing was critical to generating new RPA pilots, stated that it had achieved this goal as planned, and said it would stabilize and sustain the Get Well Plan's goals into the future. We found that the numbers of both RPA pilot and sensor operator instructors assigned peaked at the end of 2016 and early 2017 in accordance with this Air Force goal.
However, the assigned numbers of both RPA pilot and sensor operator instructors have not stabilized or been sustained and have fallen since that time as shown in figures 8 and 9. Specifically, authorized RPA pilot instructor positions within the three RPA training squadrons at Holloman Air Force Base (i.e., the 6th, 9th, and the 29th squadrons) were filled at 75 percent (110 of 147) as of September 2019. That fill rate is almost 20 percent less than the highest fill rate for these positions in March 2017 (137 of 147, or 93 percent). Similarly, authorized RPA sensor operator instructor positions within these same training squadrons as of September 2019 were filled at 58 percent (82 of 141), down from the highest fill rate of 91 percent (128 of 141) in November 2016. A training official explained that the inability to maintain the level of staffing, even when considering it was an underestimation of the true requirement, is an example of the issues experienced in the RPA community. He stated that when RPA pilots and sensor operators at squadrons leave the Air Force that means there are fewer of them overall available to conduct the missions and to be sent to the formal training unit to serve as instructors. Fewer instructors at the training unit means a greater workload on the instructors already there, which affects the morale of the instructors and may result in those individuals leaving the Air Force. It also limits the ability of the formal training unit to meet the expectations of producing newly trained aircrews that are supposed to fill the staffing need at the squadrons. Overall, this cycle contributes to the challenge the Air Force faces in being able to retain and produce RPA pilots and sensor operators. Moreover, the gap in instructor staffing is compounded by a majority of instructors arriving at the Holloman formal training unit not having prior operational squadron-level instructor experience, according to training officials. According to an Air Force instruction regarding RPA training, any aircrew member designated for instructor duties at a formal training unit should already be an instructor in the applicable aircraft. However, for example, at Holloman s formal training unit, officials told us that for the training session from August 2019 to May 2020, 17 of 25 of the new incoming instructors did not have previous squadron-level instructor or evaluator experience. In these instances, they said the new instructors would need additional training to qualify them fully to teach certain classes. According to training officials, being an instructor at a formal training unit is not the same as being an instructor at an operational squadron. For example, in an operational squadron, an instructor is expected to take an individual that is fully qualified in the aircraft and get them up to speed on the squadron s specific mission and to assist in increasing the squadron s overall level of efficiency through continued supervised training. At the formal training unit, however, instructors are laying the foundation for new aircrew students that are not familiar with the aircraft, its operation, or its various mission sets. Officials stated that because the formal training unit is receiving inexperienced instructors rather than fully qualified ones, the training unit must provide more upgrade training to these student instructors to qualify them to teach any classes. 
While the instructors are going through the upgrade and any other training needed to become fully qualified, they are filling an instructor staff position but not fully contributing to the development of new RPA pilots or sensor operators. Air Force training officials acknowledge that staffing at its Holloman formal training unit is a concern and that they need more instructors. They said that shortening the length of training was one approach to addressing the instructor gap and, in June 2019, the commander of the 19th Air Force (Air Education and Training Command) directed syllabus modifications. According to training officials, the modifications suspended about 15 percent of the training and thereby, shortened the length of the course. These modifications are scheduled until the end of October 2020 unless deemed necessary to extend them into fiscal year 2021. <4. The Air Force Has Not Fully Implemented the Initiatives It Developed to Address Quality of Life Issues Affecting the RPA Community and Long-Standing Concerns Remain> In 2015, the Air Force developed over 140 initiatives to address quality of life challenges facing its RPA units but has not fully implemented them. While the Air Force has been aware that the RPA community faces such issues as work-related physical and mental ailments, lack of base services, and other challenges to its quality of life, long-standing concerns we have identified previously, as well as others, remain. <4.1. The Air Force Has Not Fully Implemented the Initiatives It Developed to Address Quality of Life Issues Affecting the RPA Community> The Air Force s Air Combat Command established the Culture and Process Improvement Program (CPIP) in 2015 to identify and address stress and quality of life issues within the Air Force s MQ-1 Predator and MQ-9 Reaper RPA communities. This effort collected nearly 2,500 inputs from the RPA community through surveys and in-person engagement. Following this input, the Air Force developed over 140 initiatives to address concerns in eight different areas, such as missions, quality of life, locations and basing options, and training. These initiatives varied widely in scope and specificity and they addressed the RPA enterprise, such as pilots, sensor operators, intelligence personnel, and maintainers across active-duty personnel and the Reserve component. In February 2018, the Air Force briefed Congress, reporting that 57 percent of CPIP initiatives were complete and 43 percent were ongoing. According to Air Force officials, examples of initiatives completed include: expanding RPA combat operations to Shaw Air Force Base, South Carolina, to provide additional assignment options; establishing an advanced weapons instructor course specifically for redesignating MQ-9 Reaper RPA squadrons from Reconnaissance to Attack; establishing a medal to specifically recognize the contributions of personnel that operate and support the RPA enterprise; and, authorizing RPA aircrews to log combat time when flying an aircraft within designated hostile airspace, regardless of the aircrew s physical locations. The CPIP report finalized just over a year later in June 2019 states that the Air Force had achieved an almost 90 percent solution and the most significant of the initiatives had been accomplished. It went on to say that there were 17 initiatives remaining open at that time and that the Air Force would no longer track those initiatives because they had reached the point of diminishing returns. 
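For context, the almost 90 percent figure is roughly consistent with the counts reported: if something over 140 initiatives were developed and 17 remained open, about 88 percent would have been closed. A minimal sketch of that arithmetic follows, assuming a total of about 140 initiatives (the report cites over 140 but does not give an exact count).

```python
# Rough check of the "almost 90 percent solution" figure, assuming a total of
# about 140 initiatives (the exact total is described only as "over 140").
total_initiatives = 140   # assumption for illustration
still_open = 17
closed_share = (total_initiatives - still_open) / total_initiatives
print(f"Approximately {closed_share:.0%} of initiatives closed")  # about 88%
```

Note that, as discussed next, an initiative labeled complete did not always mean its underlying objective had been achieved, so the percentage may overstate what was actually accomplished.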
Additionally, the office established to track the CPIP initiatives was closed because Air Combat Command officials told us that the office is no longer needed and all remaining initiatives have been staffed to other offices of primary responsibility. However, in our review, we found examples of quality of life initiatives labeled complete where the objective had not yet been fully achieved. Examples we found include: an initiative to create a new MQ-9 RPA wing to be led by an RPA pilot was labeled with a status of complete even though Headquarters Air Force officials confirmed that no new MQ-9 Reaper RPA wing has yet been created; an initiative to have aircrews shiftwork schedules rotate every 4 to 6 months; however, each of the squadrons at the RPA operational bases we visited had a shift work schedule that rotated for 5 to 8 weeks; an initiative to grant appropriate clearances to allow medical and chaplain personnel into all RPA operational areas; however, at one location we visited, medical officials and a chaplain we spoke with said that they do not have the required clearance levels to meet with RPA personnel within their secured facilities; two initiatives to improve spousal opportunities, although one vaguely stated that the Air Force should think big and think flexible as it needs to consider society s shift to the two-income family and the other called for providing better family services and support. However, we found that while these services may exist at RPA bases, they are not always accessible to RPA personnel or their families for a variety of reasons, as we discuss below; an initiative to provide childcare support for workers performing 24/7 operations, although we found childcare was not available at certain facilities we visited; and, an initiative to make Creech Air Force Base its own installation, add a Missions Support Group, and improve base infrastructure and services. Creech did receive its own command authority and is no longer an auxiliary facility under Nellis Air Force Base and a Mission Support Group was established in July 2019. However, its plans to create officer and non-commissioned officer housing and an additionally medical facility are not expected to be completed until between fall 2021 and fall 2022, according to a Creech official. According to Air Force officials, an initiative marked as complete means that the Air Combat Command CPIP office had completed its portion of the initiative and another Air Force entity had taken it over for further action as necessary and may still be in process. Therefore, the 57 percent of initiatives that the Air Force reported to Congress in February 2018 as completed and the almost 90 percent solution discussed in the June 2019 CPIP final report may not present a transparent account of what has been completed and what remains to be accomplished. Reporting planned tasks as complete as the Air Force did could create perception gaps regarding the effects of CPIP. Interviews we had with senior leaders at multiple bases yielded concerns that CPIP is effectively over without accomplishing key objectives and that CPIP is going to be perceived as a failed promise by the Air Force. <4.2. Quality of Life Challenges Affecting the RPA Community are Long Standing and Still Continue> Along with the CPIP initiatives developed in 2015 as discussed above, academic studies published since 2010 and our previous 2014 report on RPA job dissatisfaction identified challenges facing the RPA community. 
For example, in April 2011, a study by researchers at the U.S. Air Force School of Aerospace Medicine found that there are several important operational stressors to consider when assessing the health and well- being of RPA operators. More specifically, the researchers noted, for many operators that participated in the study, the most commonly cited stressors associated with occupational stress included, but not limited to, the following: (1) long hours and low manning; (2) frequently changing shift work and shift changes; (3) geographically undesirable locations; (4) limited base resources and rural settings; and (5) human-machine interface difficulties such as poor ergonomics and temperature control of work stations. The study concluded that it stood to reason such stressors could lead to both physical and psychological distress when faced on an unending basis. Three years after the issuance of that study, in April 2014, we reported that RPA pilots faced multiple, challenging working conditions, including work shifts that frequently rotate, long hours, and increased workloads. More specifically, we reported in 2014 that In seven of the 10 focus groups conducted at that time, RPA pilots said continuously rotating to new shifts disrupted their ability to spend time with their family and friends and caused sleep problems. They said that these changes to their sleep schedules resulted in significant fatigue both at home and when they returned to work. In seven of the 10 focus groups conducted at that time, RPA pilots described working long hours because, for example, they had to perform administrative duties and attend briefings in addition to flying their combat shifts. High work demands on RPA pilots limit the time they have available for training and development and negatively affects their work-life balance. During the course of our current review, we heard various positive comments about how RPA pilots or sensor operators like the RPA mission and being able to contribute on a daily basis to combat operations. However, as discussed below, we also found examples of how long-standing challenges that others and we reported about years ago regarding the physical and mental health of RPA personnel and the availability of base support services continue to exist. <4.2.1. Physical and Mental Health Concerns> Shift Work and Sleep Issues In 12 of the 14 focus groups we conducted, participants stated that the frequent rotations are a key challenge of shift work and that their schedules rotated approximately every 5 to 8 weeks. However, members of the Human Performance Team at Creech Air Force Base stated studies have shown that it is better for individuals to stay on shifts for longer periods of time, such as 3 to 4 months, to allow their circadian rhythms to adjust. Additionally, focus group participants told us that rotating shift work is difficult for RPA personnel s relationships. Participants in 13 of the 14 focus groups indicated that shift work has negatively affected their family or social life. Additionally, rotating shifts and the limited time with family creates a dilemma on weekends for personnel, especially for those on the midnight shift that covers roughly midnight to 8 a.m. These individuals must decide whether to maintain their work sleep schedule which limits time with family, or instead to align with their family s sleep schedule which limits their ability to adapt to the work schedule. 
Some comments from participants include I destroy my circadian rhythm to spend time with my kids and Shift work is disruptive to lives. It is hard to be tied into the community. Shift work can be really isolating. Crew rest is compulsory for aircrew members prior to performing any aircraft operations. Aircrew members are individually responsible to ensure they obtain sufficient rest during a crew rest period. If crew rest is interrupted, individuals should immediately inform appropriate leadership and will either begin a new crew rest period or not perform flight duties. According to health officials at one of the bases, though, it is well known that RPA aircrew members often do not accurately report how much rest they get. Participants in one focus group agreed with this statement and said that they do not want to be restricted from flying and affect the mission and cause the work to fall on other squadron members. Participants in 12 of our 14 focus groups that we conducted stated that it is difficult to get adequate sleep. Sample participant comments include: I can t sleep anymore. Before the military, I could get 10 hours of sleep. Now it s like 2-4. You re physically and mentally exhausted. I feel perpetually tired. I haven t felt healthy in years. We did an internal survey of how much sleep people on nights for months at a time were getting, and it was like 3-4 hours. And they are flying combat for 8-12 hours at a time. Back, Eyes, and Other Physical Issues In 12 of 14 focus groups, participants said the working environment is harmful to health in areas such as the neck, back, eye, and hearing. Participant comments included: I ve been losing hearing over the last 6 years from computer fans, air conditioning units, the use of multiple communication devices, etc. Just sitting in the seat for 8, 10, or 12 hours affects our posture. It is bad on our backs. I didn t have lower back problems, and I work out a lot, but I started having lower back problems. My eyesight has been getting worse. See figure 10 for an example of a pilot flying a simulated mission in an RPA cockpit. During our site visits for this review, participants in 14 focus groups that we conducted said that maintaining fitness was difficult. They said they are not motivated to work out as they are frequently exhausted after flying long shifts and then completing other extra duties as well. Further, participants in 11 of 14 focus groups told us that nutrition is difficult for RPA crews. For example, participants said that they consume energy drinks, soda, and sugary foods to stay awake during the midnight shift. Studies have shown negative psychological effects on RPA aircrews. An Air Force study from 2010 of the psychological attributes critical to the performance of RPA sensor operators noted it is important that RPA sensor operators be aware prior to training that they would be targeting and destroying enemy combatants. It stated that it was likely that some candidates might choose not to become sensor operators once they fully understand their role in precision-strike operations. These motivational attributes were not deemed critical to performance, but were deemed critical to retention and job satisfaction. Participants in 10 of our 14 focus groups we conducted said that some crew members either themselves or others did not initially understand what the job entails, such as killing. 
One focus group participant noted the first time you know what you re getting into emotionally is the first day of training at Holloman, which is too late because you already have wings. Participants in 13 of 14 focus groups we conducted stated that witnessing or causing violence has a negative psychological impact but two-thirds of our survey respondents (66 of 105) said that the Air Force has not assessed their level of stress and fatigue related to their role as an RPA pilot or sensor operator. A study published in 2018 described how RPA aircrew members are affected by their own actions in combat as well as by connections with either people who they target or support on the ground regardless of the physical distance separating them. One focus group participant commented F-16s drop and then go. For RPA aircrews, we get in and we are there for 20 hours. We watch who we employ weapons on, then get the battle damage assessment, including seeing body parts on the ground. <4.2.2. Availability of Base Services Issues> RPA personnel stated that their base s services are not consistently available to RPA aircrews rotating shifts to conduct missions 24 hours every day or to their families as they live in remote locations. Collectively, participants in all 14 focus groups we conducted expressed concerns about the availability of services such as medical services, childcare, spouse and family support services, and base locations and housing. Some level of health care is provided at each RPA base we visited, but the extent to which these services are available varies. For example, the Cannon Air Force Base mission briefing we received in June 2019 noted some sustainability challenges such as the base s inadequate availability of specialty medical care. The briefing noted that the base had made over 2,000 referrals related to 10 areas of specialty medical care. Additionally, because these referrals were to facilities outside the local area, the base had incurred about $500,000 in travel reimbursements for this medical care the highest of all Air Force locations and about $21 million in TRICARE expenses per year, according to officials. Further, we found examples during our site visits of health services without adequate staffing. For example, during our visit to Shaw Air Force Base in May 2019, a medical technician stated that Shaw had two medical technicians for the RPA community though staffing documents state they are supposed to have six medical technicians and two doctors. At Creech Air Force Base, we visited the medical and dental facility and learned that a psychologist position had been unfilled for 9 months as of our visit in August 2019. We also found that the hours of available medical services are limited and not convenient for shift workers such as RPA aircrews. For example, officials at Creech stated occupational therapy is offered only once a month, optometry twice a month, and nutrition on an as-needed basis. In addition, Creech has two family health personnel, a behavioral health officer who is available every Wednesday and Friday, and one flight surgeon who comes over from Nellis Air Force Base is available twice a week. A 2018 internal assessment done for Creech leadership estimated that 20,714 man-hours are wasted each year due to personnel needing to obtain medical services, the equivalent of losing 11.5 people in a given year. 
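The man-hours figure converts to lost personnel using an assumed number of available duty hours per person per year. The sketch below is illustrative only; the roughly 1,800-hour work year it uses is an assumption inferred from the two figures cited, not a value stated in the Creech assessment.

```python
# Converting lost man-hours into full-time-equivalent personnel.
# The assessment's own hours-per-person assumption is not stated; dividing the
# two reported figures implies roughly 1,800 available hours per person-year.
lost_hours_per_year = 20_714
assumed_hours_per_person_year = 1_800   # inferred assumption (about 20,714 / 11.5)
people_equivalent = lost_hours_per_year / assumed_hours_per_person_year
print(f"Equivalent of about {people_equivalent:.1f} people per year")  # about 11.5
```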
To address health issues, Creech Air Force Base has a Human Performance Team that includes chaplains, religious affairs airmen, a psychologist, a mental health technician, and a physiologist. While team members are physically located at Creech, they told us that they are also responsible for RPA units at all the bases under the same wing, including Creech; Ellsworth Air Force Base, South Dakota; Whiteman Air Force Base, Missouri; and Shaw Air Force Base, South Carolina. Further, at Shaw Air Force Base, a religious affairs airman made similar comments about serving a wide variety of military personnel, not just the RPA community, and a chaplain at Cannon Air Force Base said that he can be assigned responsibility for as many as 2,000 to 3,000 people at a time. Childcare is limited or unavailable for 24/7 shift workers at certain facilities, although a CPIP initiative called for childcare support for workers performing 24/7 operations, citing the Missile Care childcare program offered at Minot Air Force Base. To this end, the Air Force established two programs, RPA Care and RPA 2 Care. The RPA Care program provides additional care outside normal work hours at no additional cost to members who are already purchasing full-time care from the Child Development Center. However, in 12 of 14 focus groups we conducted, participants said that they found childcare services to be of low quality or limited for 24/7 shift workers. For example, Cannon Air Force Base has two Child Development Centers, but they operate Monday through Friday from 6 a.m. to 6 p.m., and focus group participants noted a long waiting list for admission. At Creech Air Force Base, there is no childcare on base, and at Shaw Air Force Base, participants said it was difficult to find available childcare for RPA personnel working shiftwork. For example, one RPA aircrew member was permanently assigned to the day shift because of childcare issues. Spouse and Family Support Issues RPA personnel have complained about the issues associated with working at remote locations, such as Creech Air Force Base, Nevada, and Cannon Air Force Base, New Mexico. In 9 of 14 focus groups, participants made various comments regarding limited spousal opportunities and family support issues, such as the following: I got orders to Cannon. The problem is I'll be bringing my wife there who has no job opportunities. There will be a lot of military spouses competing for jobs. I've already decided I'll leave at the end of my contract and then will go to the Guard. I've told my wife I'll get out because I don't want to hurt her quality of life. I loved the mission at Cannon, but the facilities and area and schools are absolutely terrible. I'm fed up with Cannon and this area in general. RPA bases vary in the housing available for personnel, with Cannon and Creech Air Force Bases reporting inadequate housing situations. At Cannon, officials stated that a lack of dormitory space was forcing first-term Airmen off base. During our visit in June 2019, Cannon housing officials provided a report stating that the shortfall in dormitory space continues to put Airmen and the Air Force Special Operations Command mission at risk. The report said that the locations off base where first-term Airmen can afford to live are usually in the worst crime-ridden parts, where there is a far greater propensity for trouble. This can create morale issues and a distraction from the mission, according to the report.
Additionally, Creech Air Force Base does not have any permanent on- base housing. At Creech, unaccompanied first-term Airmen must live in the dormitories on Nellis Air Force Base, which is approximately 50 miles away. The remoteness of Creech Air Force Base and the lack of basic services offered only at Nellis Air Force Base creates an unusual level of stress brought on by the added time, effort, and expense Creech Airmen experience that those at almost every other continental United States installation do not. In fact, a 2018 internal assessment for Creech leadership calculated that a junior airman who must live at Nellis Air Force Base would have a one-way commuting time of 63 minutes if they drive a personal vehicle or 105 minutes if they take the shuttle. To help address the housing and access to medical facilities, Creech Air Force Base senior officials said that a plan to create officer and non- commissioned officer housing and a medical facility on the northwest side of Las Vegas has been approved, but it is not expected to be completed until between fall 2021 and fall 2022. Many of the RPA workforce issues we identified at the time of our 2014 review continue to exist today. These workforce issues include the challenges to the RPA workforce s quality of life due to stressful working conditions, including work shifts that frequently rotate, long hours, and increased workloads. In 2017, we recommended that the Air Force should monitor the extent to which its RPA human capital efforts are achieving the Air Force s overall programmatic goals. The Air Force had not implemented this recommendation as of February 2020. Because long- standing RPA quality of life and workforce management issues affecting RPA personnel continue to exist, we believe that this recommendation is still valid and would aid the Air Force in its efforts to address many of the challenges facing this career field. Therefore, we are not making any additional quality of life related recommendations. <5. Conclusions> A healthy RPA workforce is one that balances supply with demand and addresses quality of life conditions to motivate and sustain performance and retention. Successful efforts to assess, train and retain RPA pilots and sensor operators would allow the Air Force to grow sufficient quantities of its RPA workforce to meet its goal of implementing its combat-to-dwell policy. While the total number of Air Force RPA pilots and sensor operators has increased between 2015 and 2019, the number of positions required to meet the constant demand is increasing at a faster pace. Additionally, the Air Force has not achieved its accession targets for pilots and sensor operators for most of those years. Moreover, the inability to use standard retention metrics due to the newness of the RPA pilot career field is hindering the Air Force s ability to determine accurately if sufficient quantities of RPA personnel are remaining in the service to grow its RPA workforce. Further, the Air Force currently does not have a comprehensive metric (or set of metrics) to track the overall progress toward having sufficient numbers of RPA personnel through its accessions and retention of RPA personnel to meet its prescribed timeline for implementing its combat-to-dwell policy. This policy is intended to balance the time RPA units spend in combat with non-combat activities, to provide relief from those combat operations that it has conducted constantly for many years, to improve the quality of life of these RPA aircrew members. 
Without a metric, it is unclear whether the Air Force is on course to achieve implementation of its combat-to-dwell policy. As such, the Air Force cannot know if adjustments are needed specifically to that policy and its implementation timeline or to its overall personnel management efforts to access, train and retain sufficient numbers of RPA personnel. Further, the Air Force previously prioritized having maximum instructor staffing at the training units to help increase the production of new RPA aircrews. However, the number of instructor positions required at the RPA formal training unit at Holloman Air Force Base is out-of-date and does not reflect what is needed to teach the current training curriculum. Additionally, this formal training unit has consistently experienced staffing shortages since fiscal year 2016. As such, without updated information, the Air Force does not know the number of instructor positions necessary for sufficiently training RPA aircrews and it may not fully address the challenges affecting the training unit s staffing and ability to produce the needed number of aircrews to support the continued demand for RPAs and the implementation of the combat-to-dwell policy as planned. The Air Force developed initiatives with its 2015 Culture and Process Improvement Program to address quality of life issues and other challenges affecting the RPA community, but has not fully implemented them. We also identified workforce management challenges in our previous work. We believe that our prior recommendation that the Air Force monitor its human capital efforts would help address these challenges. We believe the Air Force should implement our prior recommendation to aid the Air Force in its attempts to improve the quality of life issues that still exist within the RPA community. <6. Recommendations for Executive Action> We are making the following two recommendations to the Secretary of the Air Force. The Secretary of the Air Force should ensure that a comprehensive metric (or set of metrics) is established to track the progress of its combined accession and retention efforts to obtain sufficient quantities of RPA pilots and sensor operators needed to achieve its objective of implementing the combat-to-dwell policy as planned. (Recommendation 1) The Secretary of the Air Force should ensure that the number of instructor positions needed at the RPA training unit at Holloman Air Force Base is updated by applying more complete, accurate and timely information to better reflect the training curriculum and instructor needs. (Recommendation 2) <7. Agency Comments and Our Evaluation> We provided a draft of this report to DOD for review and comment. In written comments reproduced in appendix III, the Department of the Air Force partially concurred with our first recommendation and concurred with our second recommendation. In concurring with our second recommendation to ensure the number of instructor positions needed at the RPA training unit at Holloman Air Force Base is updated, the Air Force noted that it has requested an updated study to determine the appropriate number of instructor positions. With regard to our first recommendation to establish a comprehensive metric (or set of metrics) to track the progress of its combined accession and retention efforts the Air Force noted that it already has efforts to monitor accession, production, and retention for RPA pilots and sensor operators. 
Additionally, it expects that standard retention metrics used in other rated career fields will provide increased utility as the RPA career field matures. The Air Force acknowledges in its comments, however, that these efforts could be better integrated to allow for greater analysis, to include tracking progress in meeting the combat-to-dwell policy by 2024. We continue to believe that in developing a specific metric (or set of metrics) the Air Force would be in a better position to evaluate the status of its combined accession and retention efforts to obtain the proper number of RPA personnel to achieve its combat-to-dwell implementation goal. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretary of the Air Force. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov . If you or your staff have any questions regarding this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Appendix I: Focus Group Methodology To obtain the perspectives of Air Force remotely piloted aircraft (RPA) pilots and sensor operators regarding training, availability of services and support to RPA personnel and their families; quality of life issues; retention issues; and other challenges facing the RPA career field, we analyzed participants comments from 14 focus groups at three different RPA operational locations. These locations were: Shaw Air Force Base, South Carolina; Cannon Air Force Base, New Mexico; and Creech Air Force Base, Nevada. We selected Cannon and Creech Air Force Bases because they have the largest population of RPA operators in Air Force Special Operations Command and Air Combat Command, respectively. In addition, we selected Shaw Air Force Base to obtain the perspectives of RPA pilots and sensor operators working at a base with newly established RPA operations since 2018. To obtain a balance of perspectives from RPA pilots and sensor operators with varying levels of experience and responsibilities, we conducted focus group sessions with active-duty MQ-9 Reaper RPA pilots and sensor operators who were divided by their occupation, Air Force Specialty Code, and rank at the selected locations. Specifically, we used the following categories as shown in table 3 for the formation of the focus groups. The 14 focus groups we held ranged in size from five to 11 participants across the three sites with 105 total participants. We conducted five focus groups at Shaw Air Force Base; four focus groups at Cannon Air Force Base; and five focus groups at Creech Air Force Base. Of the 14 focus groups, eight focus groups were with RPA pilots and six focus groups were with RPA sensor operators. These sessions involved structured small-group discussions designed to gather in-depth information that is not easily obtained from other methods. We requested that our point of contact at each location gather approximately 8 to 12 participants to attend the five pre-defined focus groups. We conducted focus groups with RPA pilots and sensor operators separately because they have different roles and responsibilities and to encourage active participation and minimize the risk of participants being the same group as immediate supervisors. 
We segmented our groups by this characteristic in order to compare and contrast their perspectives on training, retention, and quality of life issues and to identify meaningful similarities and differences. Participants in the focus groups were not randomly selected using a probability sampling method, but were recruited by unit leadership based on shift availability and correspondence with the characteristics we requested. Because scheduling availability was the primary factor affecting participation, coupled with the fact that questions for focus group sessions were not shared in advance, we considered the risk of leadership selectively picking participants to be minimal. Methodologically, focus groups are not designed to (1) demonstrate the extent of a problem or to generalize results to a larger population, (2) develop a consensus to arrive at an agreed-upon plan or make decisions about what actions to take, or (3) provide statistically representative samples or reliable quantitative estimates. Instead, they are intended to generate in-depth information about the reasons for the focus group participants' attitudes on specific topics and to offer insights into their concerns about and support for an issue. A facilitator guided each focus group using a standard script and list of questions to steer the discussion and encourage participants to share their thoughts and experiences. We confirmed at the start of each session that participants met the inclusion criteria for the respective group. Due to the low numbers of 18X pilot participants at the O3-O5 rank and 11U/12U pilot participants at Cannon Air Force Base, we conducted a focus group of the available participants together instead of separately. Additionally, at Creech Air Force Base, we encountered three situations in which participants were currently full-time Reserve pilots; because all had former active-duty experience and dismissing them would have resulted in too few participants in the group, we allowed them to stay in the focus groups in order to have a sufficient number of participants. This situation occurred in the O1-O2 18X pilot focus group, the O3-O5 18X pilot focus group, and the E5-E9 1U0XX sensor operator focus group. The core questions that the GAO facilitator asked during each of the focus groups are listed in table 4. During the focus group meetings, three GAO members independently took separate sets of detailed notes to document the participants' comments. Afterward, each member's notes were compiled into one final official record documenting the comments made in each of the focus groups we conducted. Then, these records were consolidated into one database to be used for coding each comment and to facilitate the team's content analysis of all the comments. To identify common categories and themes from the participants' comments across all focus groups, the team met to review and discuss the official record of each of the 14 focus groups. From that meeting, the team identified 43 categories across seven areas of inquiry; see table 5 for a list of the categories and themes. Using the categories and themes identified, the team conducted a pre-test by having two groups of two coders independently code an identical subset of the comments to determine their levels of coding consistency and accuracy before attempting to code all 1,848 individual recorded comments.
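One common way to check coding consistency during such a pre-test is simple percent agreement between coders. GAO does not specify the measure it used, so the sketch below is only an illustration of the general approach, and the category labels in it are made up for the example rather than drawn from GAO's actual coding scheme.

```python
# Illustrative percent-agreement check between two coders on a pre-test subset.
# Category labels here are hypothetical examples, not GAO's actual categories.
def percent_agreement(coder_a, coder_b):
    """Share of comments assigned the same category by both coders."""
    matches = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return matches / len(coder_a)

coder_a = ["shift work", "retention", "childcare", "shift work", "training"]
coder_b = ["shift work", "retention", "health", "shift work", "training"]
print(f"Agreement: {percent_agreement(coder_a, coder_b):.0%}")  # 80%
```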
After the pretest, the two groups split the list of comments in half and each coder independently coded the comments contained in their list into the categories and themes under which the coder believed the comment fell. Once completed, the coders within each group met to discuss any discrepancies in each of their coding and to make any necessary adjustments in the coding. Where discrepancies could not be resolved between coders, an independent third team member determined which code would be used. Once the coding of all 1,848 comments was finalized, the team s methodologist prepared a report that presented all comments that fell within each of the categories and themes. The team used this information as the basis for frequency tabulation and qualitative analysis of focus group comments. In addition to discussing the RPA pilots and sensor operators perspectives in a focus group setting, we administered a questionnaire to each participant at the end of each session before the participants were dismissed. All participants completed the questionnaire. A GAO methodologist with a social science background and knowledge of small group methods and survey administrations reviewed the focus group script and the questionnaire. In addition, we pre-tested both the focus group protocol and the questionnaire on our first site visit to Shaw Air Force Base and both were used again at the remaining RPA locations, Cannon and Creech Air Force Bases, without any changes. Appendix II: Reports and Studies on Air Force Remotely Piloted Aircraft Personnel The Department of Defense (DOD), the military services, and organizations outside DOD have produced reports and studies that addressed issues associated with Air Force remotely piloted aircraft (RPA) personnel, including the following: Armour, Cherie, and Jana Ross. The Health and Well-Being of Military Drone Operators and Intelligence Analysts: A Systematic Review. Military Psychology, 2017. Bryan, Craig J., Tanya Goodman, Wayne Chappelle, Lillian Prince, and William Thompson. Subtypes of severe psychological distress among US Air Force remote warriors: A latent class analysis. Military Psychology, 2018. Campo, Joseph L. Distance in War: The Experience of MQ-1 and MQ-9 Aircrew. Air and Space Power Journal, 2015. Chappelle, Wayne L., Kent McDonald, Lillian Prince, Tanya Goodman, Bobbie N. Ray-Sannerud, and William Thompson. Symptoms of Psychological Distress and Post-Traumatic Stress Disorder in United States Air Force Drone Operators. Military Medicine, 2014. Chappelle, Wayne, Emily Skinner, Tanya Goodman, Julie Swearingen, and Lillian Prince. Emotional reactions to killing in remotely piloted aircraft crewmembers during and following weapon strikes. Military Behavioral Health, 2018. Chappelle, Wayne, Julie Swearingen, Tanya Goodman, Sara Cowper, Lillian Prince, and William Thompson. Occupational Health Screenings of U.S. Air Force Remotely Piloted Aircraft (Drone) Operators. Report, Wright-Patterson Air Force Base, OH: Air Force Research Laboratory, 2014. Chappelle, Wayne, Kent McDonald, and Raymond King. Psychological Attributes Critical to the Performance of MQ-1 Predator and MQ-9 Reaper U.S. Air Force Sensor Operators. Report, Brooks City-Base, TX: Air Force Research Laboratory, 2010. Chappelle, Wayne, Kent McDonald, Billy Thompson, and Julie Swearangen. Prevalence of High Emotional Distress and Symptoms of Post-Traumatic Stress Disorder in U.S. Air Force Active Duty Remotely Piloted Aircraft Operators (2010 USAFSAM Survey Results). 
Report, Wright-Patterson Air Force Base, OH: Air Force Research Laboratory, 2012. Chappelle, Wayne, Kent McDonald, Lillian Prince, Tanya Goodman, Bobbie N. Ray-Sannerud, and William Thompson. Assessment of Occupational Burnout in United States Air Force Predator/Reaper Drone Operators. Military Psychology, 2014. Chappelle, Wayne, Tanya Goodman, Laura Reardon, and Lillian Prince. Combat and operational risk factors for post-traumatic stress disorder symptom criteria among United States Air Force remotely piloted aircraft Drone warfighters. Journal of Anxiety Disorders, 2019. Chappelle, Wayne, Tanya Goodman, Laura Reardon, and William Thompson. An analysis of post-traumatic stress symptoms in United States Air Force drone operators. Journal of Anxiety Disorders, 2014. Cooke, Nancy J., Kristen Barrera, Howard Weiss, and Claude Ezzell. Psychosocial Effects of Remote Operations. In Remotely Piloted Aircraft Systems: A Human Systems Integration Perspective, by Nancy J. Cooke, Leah J. Rowe, Winston Bennett, Jr. and DeForest Q. Joralmon. West Sussex: John Wiley & Sons, 2017. Goodman, Tanya, Lillian Prince, Wayne Chappelle, and Craig Bryan. A Reassessment of Risk Factors and Frequency of Suicide Ideation Among U.S. Air Force Remote Warriors. Report, Wright-Patterson AFB, OH: Air Force Research Laboratory, 2018. Hardison, Chaitra M., Eyal Aharoni, Christopher Larson, Steven Trochlil, and Alexander C. Hou. Stress and Dissatisfaction in the Air Force s Remotely Piloted Aircraft Community. Santa Monica, CA: RAND Corporation, 2017. Hijazi, Alaa, Christopher J. Ferguson, Harold Hall, Mark Hovee, F. Richard Ferraro, and Sherrie Wilcox. Psychological Dimensions of Drone Warfare. Current Psychology, 2017. Martin, Kiel M., Daniel J. Richmond, and John G. Swisher. Sustaining the Drone Enterprise: How Manpower Analysis Engendered Policy Reform in the United States Air Force. INFORMS Journal on Applied Analytics, 2017. Martin, Matt. Remote-Split Operations and Virtual Presence: Why the Air Force Uses Officer Pilots to Fly RPAs. 18th International Symposium on Aviation Psychology. Dayton, 2015. Ouma, Joseph A., Wayne L. Chappelle, and Amber Salinas. Facets of Occupational Burnout Among U.S. Air Force Active Duty and National Guard/Reserve MQ-1 Predator and MQ-9 Reaper Operators. Report, Wright-Patterson Air Force Base, OH: Air Force Research Laboratory, 2011. Terry, Tara L., Chaitra M. Hardison, David Schulker, Alexander C. Hou, and Leslie Adrienne Payne. Building a Healthy MQ-1/9 RPA Pilot Community: Designing a Career Field Planning Tool. Santa Monica, CA: RAND Corporation, 2018. Wood, III, Joe, et al. Prevalence of Posttraumatic Stress Disorder in Remotely Piloted Aircraft Operators in the United States Air Force. Report, Wright-Patterson Air Force Base, OH: Air Force Research Laboratory, 2016. Wood, III, Joe D, et al. Relationship Between Spiritual Well-being and Post-traumatic Stress Disorder Symptoms in United States Air Force Remotely Piloted Aircraft and Intelligence Personnel. Military Medicine, 2018. Appendix III: Comments from the Department of Defense Appendix IV: GAO Contact and Staff Acknowledgments <8. GAO Contact Staff Acknowledgments> Brenda S. Farrell, (202) 512-3604 or farrellb@gao.gov In addition to the contact named above, key contributors to this report were Lori Atkinson, Assistant Director; Rebecca Beale, Brad Crofford, Caitlin Cusati, Felicia Lopez, Terry Richardson, Ophelia Robinson, Pamela Snedden, and John Van Schaik. 
Related GAO Products Unmanned Aerial Systems: Air Force Pilot Promotion Rates Have Increased but Oversight Process of Some Positions Could Be Enhanced. GAO-19-155. Washington, D.C.: February 7, 2019. Unmanned Aerial Systems: Air Force and Army Should Improve Strategic Human Capital Planning for Pilot Workforces. GAO-17-53. Washington, D.C.: January 31, 2017. Unmanned Aerial Systems: Actions Needed to Improve DOD Pilot Training. GAO-15-461. Washington, D.C.: May 14, 2015. Air Force: Actions Needed to Strengthen Management of Unmanned Aerial System Pilots. GAO-14-316. Washington, D.C.: April 10, 2014. Why GAO Did This Study
High demand and constant combat operations have created challenges for Air Force RPA pilots and sensor operators who conduct missions across the world. In January 2017, the Air Force approved a combat-to-dwell policy to better balance RPA units' time in combat with non-combat activities. It plans to fully implement the policy in 2024.
Senate Report 115-262 included a provision that GAO review ongoing challenges in the Air Force RPA community. This report assesses, among other things, the extent to which the Air Force (1) met overall RPA pilot and sensor operator staffing targets and tracked its progress in implementing its combat-to-dwell policy and (2) identified and met instructor staffing levels at its RPA formal training unit. GAO analyzed selected Air Force accession, retention, and instructor staffing data; held non-generalizable focus groups at three RPA military bases; and interviewed officials at various levels of the RPA enterprise.
What GAO Found
The Air Force does not have enough pilots and sensor operators to meet its staffing targets for its unmanned aircraft, also called remotely piloted aircraft (RPA). It also does not track its overall progress in accessing and retaining enough RPA personnel to implement its combat-to-dwell policy, which is intended to balance RPA units' time spent in combat with non-combat activities. Officials stated that, to fully implement combat-to-dwell, the Air Force needs to access and retain more RPA personnel because it has had fewer RPA personnel than authorized since fiscal year 2016 (see figure for RPA sensor operator example). The Air Force has provided financial incentives to address retention of RPA personnel, but it does not yet have enough historical data to help predict RPA pilot retention trends going forward, given the newness of the career field. Officials also expressed specific concerns about sensor operator retention, particularly because of the possibility of lucrative private-sector jobs. Further, the Air Force does not have a comprehensive metric (or set of metrics) to know whether its accession and retention efforts are on track to generate the additional RPA personnel needed to implement its combat-to-dwell policy by 2024. Without such a metric (or set of metrics), it is unclear whether any adjustments are needed to meet its implementation time frames.
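The report does not prescribe a formula for such a metric. The following minimal sketch, written in Python with entirely hypothetical staffing figures, illustrates one way an on-track measure could combine accessions, attrition, and authorized positions.

```python
"""Sketch of one possible on-track metric for RPA crew staffing.

Illustrative only: the report recommends a comprehensive metric (or set
of metrics) but does not specify a formula. All figures below
(authorizations, accessions, attrition rate) are hypothetical.
"""

def project_inventory(start_inventory: int, annual_accessions: int,
                      annual_attrition_rate: float, years: int) -> int:
    """Project end-state inventory given steady accessions and attrition."""
    inventory = float(start_inventory)
    for _ in range(years):
        inventory = inventory * (1.0 - annual_attrition_rate) + annual_accessions
    return round(inventory)

def on_track(projected: int, authorized: int) -> float:
    """Return projected manning as a share of authorized positions."""
    return projected / authorized

if __name__ == "__main__":
    # Hypothetical inputs: 1,000 assigned sensor operators today, 150 accessions
    # per year, 12 percent annual attrition, 1,250 authorized positions in 2024.
    projected_2024 = project_inventory(1000, 150, 0.12, years=5)
    share = on_track(projected_2024, 1250)
    print(f"Projected 2024 inventory: {projected_2024} ({share:.0%} of authorizations)")
```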
The Air Force has not fully identified the number of RPA pilot and sensor operator instructor positions needed at its formal training unit, and it has experienced instructor staffing shortages since 2016. Specifically, the number of required instructor positions is understated because it is based on a 2009 program of instruction with 49 training days, while the current program of instruction has 83 training days. Moreover, since fiscal year 2016, the formal training unit has had fewer assigned instructors than authorized positions, even though those authorized positions already understate actual needs. To help address the effect of the instructor gap, officials temporarily reduced the length of training. Without updated information to inform the number of required instructors, the Air Force does not know how many instructor positions are necessary to train RPA aircrews to be ready to complete their mission.
What GAO Recommends
GAO recommends that the Air Force establish a comprehensive metric (or set of metrics) to track the progress of its efforts to access and retain enough RPA personnel needed to implement its combat-to-dwell policy, and update the number of required RPA instructor positions. The Air Force partially concurred with the first recommendation and concurred with the second one. GAO continues to believe the first recommendation is valid, as discussed in the report.
<1. Background> <1.1. The Agricultural Census and Socially Disadvantaged Farmers and Ranchers> USDA conducts the Census of Agriculture every 5 years, most recently in 2012 and 2017. The census provides a detailed picture of farms and the people who operate them. The census identifies several categories of farmers, including the following: Producers. Producers are individuals involved in farm decision-making. A single farm may have more than one producer. Primary producers. The primary producer is the individual on a farm who is responsible for the most decisions. Each farm has only one primary producer. The 2017 Census questionnaire substantially revised the way it collected certain data in order to better capture the contributions of all persons involved in farm decision-making. For example, the 2017 questionnaire asked for the names and demographic information of up to four producers per farm (compared to three in 2012) and used a series of questions on specific types of farm decisions to determine the primary producer (the 2012 questionnaire did not include these questions). Therefore, comparisons between the two censuses regarding the number and personal characteristics of producers and primary producers should be considered with the 2017 revisions in mind. While some changes may be the result of actual changes in the population of farmers and ranchers, other changes may be the result of changes in census methodology. USDA s 2017 Census counted about 3.4 million producers across the roughly 2 million farms nationwide, compared to 3.2 million in 2012. This represents an approximately 7 percent increase over 2012 in the number of reported producers, despite a slight drop in the number of farms reported. Many of these additional producers were SDFRs. In 2017, SDFRs accounted for 41 percent (1,390,449) of all producers, compared to 36 percent (1,133,163) in 2012. The number of reported SDFR primary producers also grew between 2012 and 2017. Among SDFR subgroups, women accounted for the largest increase in producers and primary producers. In the 2017 Census, women also made up the largest group of SDFR producers and primary producers (see table 1). Women accounted for 88.3 percent of all SDFR producers and 81.0 percent of SDFR primary producers. Hispanic, Latino, or Spanish-origin producers were the next largest group, accounting for 8.1 percent of all SDFR producers and 11.0 percent of SDFR primary producers. On average, farms for which an SDFR was the primary producer (SDFR farms) were smaller and brought in less revenue than non-SDFR farms in 2017. While representing 30 percent of all farms, SDFR farms operated 21 percent of total farm land and accounted for 13 percent of the market value of agricultural products sold in 2017 (see table 2). About 55 percent of SDFR farms had fewer than 50 acres, and 88 percent had less than $50,000 in total sales and government payments. Additionally, a lower proportion of SDFR-operated farms (21 percent) received government payments compared to non-SDFR farms (36 percent). <1.2. Types and Sources of Agricultural Credit> Agricultural producers generally require financing to acquire, maintain, or expand their farms, ranches, or agribusinesses. Agricultural loans generally fall into two categories: Farm ownership loans. These loans are used to acquire, construct, and develop land and buildings and have terms longer than 10 years. They are secured by real estate and are sometimes referred to as real estate loans.
Farm operating loans. These loans are generally short-term or intermediate-term loans that finance costs associated with operating a farm. Short-term loans are used for operating expenses and match the length and anticipated production value of the operating or production cycle. Intermediate-term loans are typically used to finance depreciable assets such as equipment and usually range from 18 months to 10 years. These loans may also be referred to as non-real-estate loans. Several types of lenders provide credit to agricultural producers, including, but not limited to, the following: Farm Credit System. The Farm Credit System is a government-sponsored enterprise, established, in part, to provide sound, adequate, and constructive credit to American farmers and ranchers. The Farm Credit System includes a national network of 73 banks and associations. The Farm Credit System lends money to eligible agricultural producers primarily through its 69 lending associations, which are funded by its four banks. All are cooperatives, meaning that Farm Credit System borrowers have ownership and control over the organizations. The Farm Credit System is regulated by the Farm Credit Administration, an independent federal agency. The Farm Credit System s statutory objectives include being responsive to the needs of all types of creditworthy agricultural producers having a basis for credit, with additional requirements to serve young, beginning, and small farmers and ranchers. According to the Farm Credit Administration, the Farm Credit System is not statutorily mandated to focus on providing financial opportunities to any other group. Commercial banks. Commercial banks are regulated by the federal depository institution regulators. They vary in size and the type of credit they provide. In a January 2013 report, we found that large banks were more likely to engage in transactional banking, which focuses on highly standardized products that require little human input to manage and are underwritten using statistical information. We also found that small banks were more likely to consider not only data models but information acquired by working with the customer over time. Additionally, we found that by using this banking model, small banks may be able to extend credit to customers who might not receive a loan from a larger bank. The American Bankers Association reported that in 2017, the majority of farm banks (those that made more agricultural loans than the industry average) were small institutions with a median asset size of $125 million. USDA Farm Service Agency. USDA s Farm Service Agency makes direct loans to farmers and ranchers and guarantees loans made by commercial lenders and Farm Credit System associations. The Farm Service Agency is a lender that focuses on assistance to beginning and underserved farmers and ranchers who are unable to obtain credit elsewhere. For its guaranteed loans, the agency typically guarantees 90 percent of losses the lender might incur in the event that a borrower defaults, although the agency may guarantee up to 95 percent for qualifying loans to certain groups, including SDFRs. Guaranteed loan terms and interest rates are set by the lender, though USDA has established maximum rates and terms. Agricultural loans guaranteed by the Farm Service Agency generally account for about 4 to 5 percent of outstanding loans made by the Farm Credit System and commercial banks and credit unions. Other lenders.
A variety of other businesses and individuals provide agricultural credit to farmers and ranchers, including credit unions, life insurance companies, farm implement dealers, and family members. According to the National Credit Union Administration, agricultural lending represents a small portion (less than several basis points) of credit union lending. Historically, life insurance companies have used agricultural real estate mortgages as part of their investment portfolios. Farm implement dealers sell machinery, parts, and services and offer financing for those products. According to USDA survey data, implement dealers currently provide almost one-third of the agricultural sector s farm operating debt with terms longer than 1 year and are an increasing source of agricultural credit. According to USDA s Economic Research Service, in 2017, the Farm Credit System and commercial banks accounted for the bulk of agricultural lending in the United States, comprising about 80 percent of the total outstanding farm debt. The remaining debt was USDA Farm Service Agency direct loans and loans made by other lenders. <2. Information Is Limited, but Survey Data Provide Some Insights into Credit to Socially Disadvantaged Farmers and Ranchers> <2.1. Regulatory Data Collection Restrictions Limit What Is Known about Agricultural Credit to Socially Disadvantaged Farmers and Ranchers> Information on the types and amount of agricultural credit to SDFRs is limited. Regulation B, which implements the Equal Credit Opportunity Act (ECOA), generally prohibits lenders from collecting data on the personal characteristics (such as sex, race, and national origin) of applicants for loans other than certain mortgages. Therefore, financial institutions and their regulators generally do not have information on the types or amount of agricultural lending to SDFRs. In contrast, USDA collects and maintains personal characteristic data on applicants for the farm loans it makes or guarantees in order to target loans to traditionally underserved populations and fulfill statutorily mandated reporting requirements. The lack of personal characteristic data on a large portion of agricultural loan applications limits the ability of regulators, researchers, and stakeholders to assess potential risks for discrimination. In a July 2009 report, we found that federal enforcement agencies and depository institution regulators faced challenges in consistently, efficiently, and effectively overseeing and enforcing fair lending laws due in part to data limitations. Additionally, we found that such data would enhance transparency by helping researchers and others better assess the potential risk for discrimination. For our current review, some federal depository institution regulators we spoke with said that additional data on nonmortgage lending would allow them to perform additional assessments of financial institutions compliance with fair lending laws. Some SDFR advocates we spoke with also expressed concern about the lack of accurate public information on lending to SDFRs, which they said forces them to rely on anecdotal evidence in attempts to monitor potential discrimination. A rulemaking pursuant to Section 1071 of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) would modify the Regulation B prohibition for certain loans, including possibly some agricultural loans. 
Section 1071 amended ECOA, requiring financial institutions to report information on credit applications made by women-owned, minority-owned, and small businesses. However, in April 2011, CFPB issued a letter stating that the requirements under Section 1071 do not go into effect until CFPB issues implementing regulations. The purpose of Section 1071 is to facilitate enforcement of fair lending laws and enable communities, governmental entities, and creditors to identify business and community development needs and opportunities of women-owned, minority-owned, and small businesses. Section 1071 is consistent with our 2009 report on fair lending issues, which said Congress should consider requiring additional data collection and reporting for nonmortgage loans. Section 1071 did not specify a time frame for CFPB to complete its rulemaking. As of June 2019, CFPB had not yet completed a rulemaking implementing Section 1071 of the Dodd-Frank Act. In 2017, CFPB issued a request for information seeking public comment on topics related to the collection of data on small business lending. However, in November 2018, CFPB announced that it was delaying the rulemaking due to resource constraints and other priorities. CFPB reported in the Spring 2019 Unified Agenda of Federal Regulatory and Deregulatory Actions that it plans to resume pre-rulemaking activities later in 2019. <2.2. Survey Data Have Limitations but Provide Information on the Farm Debt and Credit Providers of Socially Disadvantaged Groups> USDA s annual survey of farm producers, the Agricultural Resource Management Survey, provides some insights into agricultural lending to SDFRs but has limitations when used for this purpose. The limitations fall into two main categories, as follows: First, the sample size used in the survey does not allow for capturing potential differences in the credit needs and challenges of specific socially disadvantaged subgroups. The relatively small proportion of SDFRs in the survey s sample population renders estimates of SDFR farm debt less precise. To increase the precision of its estimates, USDA averaged 3 years of survey data (2015 through 2017) to increase the sample size of SDFRs available for analysis. Due to the small size of several SDFR subgroups, we analyzed SDFRs as a single combined group. Second, the survey may underrepresent the total outstanding farm debt of socially disadvantaged groups and should be interpreted with caution, according to USDA officials. As previously discussed, the 2017 Census questionnaire included revisions that better captured the role of SDFRs in farm operations, and its results suggest that the 2012 Census and the 2015 through 2017 surveys (which used similar methodologies) may have underreported the number of SDFRs designated as primary producers, particularly women. Specifically, in the 2015 through 2017 surveys, SDFRs represented 17 percent of primary producers, whereas in the 2017 Census, SDFRs accounted for 30 percent of primary producers. However, the potential underrepresentation issue should not affect the statistical significance of comparisons between the SDFR and non-SDFR subgroups within the survey. With these caveats in mind, the 2015 through 2017 survey data suggest that SDFR primary producers had annual average outstanding farm debt of $20.0 billion ($17.5 billion to $22.6 billion at the 90 percent confidence level). This estimate represents debt used specifically for farm purposes. Farm ownership debt was a larger share of SDFR outstanding farm debt than it was for all other farmers and ranchers.
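For illustration, the sketch below shows, under simplified assumptions, how a multiyear average and a 90 percent confidence interval of the kind cited above can be computed. USDA s actual estimates rely on survey weights and replicate-based standard errors; the inputs here are hypothetical.

```python
"""Simplified sketch of a pooled multiyear estimate with a 90 percent
confidence interval, in the spirit of the 2015-2017 ARMS averages cited
above. The annual estimates and standard error below are hypothetical."""

from statistics import mean

Z_90 = 1.645  # two-sided 90 percent normal critical value

def pooled_estimate(yearly_estimates, standard_error):
    """Average yearly point estimates and attach a 90 percent interval."""
    point = mean(yearly_estimates)
    half_width = Z_90 * standard_error
    return point, (point - half_width, point + half_width)

if __name__ == "__main__":
    # Hypothetical: three annual estimates of SDFR farm debt (in $ billions)
    # and a standard error for the 3-year average.
    point, (lo, hi) = pooled_estimate([19.1, 20.4, 20.5], standard_error=1.55)
    print(f"3-year average: ${point:.1f} billion "
          f"(90% CI: ${lo:.1f}-${hi:.1f} billion)")
```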
Among SDFR primary producers, farm ownership debt was estimated to account for 67 percent of outstanding farm debt, compared to an estimated 59 percent for non-SDFR primary producers (see fig. 1). Farm operating debt accounted for the remaining 33 percent and 41 percent of outstanding SDFR and non-SDFR farm debt, respectively. SDFRs received proportionately fewer loans and less agricultural credit overall than non-SDFRs. Specifically, SDFRs accounted for an estimated 17 percent of primary producers in the survey but only 13 percent of farms with loans and 8 percent of total outstanding farm debt. SDFR debt represented an estimated 9 percent of total farm ownership debt and 7 percent of total farm operating debt (see table 3). Therefore, even though farm ownership debt comprised most outstanding SDFR farm debt (67 percent), SDFR primary producers were still less likely to have outstanding farm ownership debt than all other farmers and ranchers. While the survey data show that SDFRs had proportionately less agricultural credit than non-SDFRs, the survey does not provide information on the reasons why. However, a number of factors may help explain these differences. For example, the 2017 Census shows that SDFRs are more likely than non-SDFRs to operate smaller farms with less market value, and smaller farms may require less credit to operate. In addition, as discussed later in this report, SDFRs may have greater difficulty qualifying for agricultural loans or may be dissuaded from applying for credit. SDFR primary producers generally borrowed from the same type of lenders as non-SDFRs and reported using a range of agricultural credit providers. The distribution of SDFR and non-SDFR farm debt by lender type in the survey was roughly similar, with all differences within the margin of error (at the 90 percent confidence level). According to the survey data, an estimated 51 percent of SDFRs outstanding farm debt was lent by commercial banks and savings associations. Lending by Farm Credit System institutions (28 percent), USDA s Farm Service Agency (6 percent), and other lenders, such as individuals and equipment dealers (15 percent), comprised the remainder. SDFRs received a larger share of their operating credit, compared to ownership credit, from lenders in the other category. This was true for non-SDFR operating debt as well. These results should be interpreted cautiously because the information is self-reported and respondents may not have known the specific types of lenders they used. The survey results for all farms appear to overrepresent debt from commercial banks and savings associations when compared with data collected by USDA s Economic Research Service on farm-sector balance sheets. It is possible some respondents mischaracterized some debt from Farm Credit System institutions as debt from commercial banks. <2.3. About 11 Percent of Lending Guaranteed by the Farm Service Agency Went to Socially Disadvantaged Farmers and Ranchers> While loans guaranteed by USDA s Farm Service Agency make up a small percentage of overall agricultural lending, the agency tracks how much of this lending goes to SDFRs and the purpose of the loans (ownership or operating). In fiscal year 2018, the Farm Service Agency guaranteed $3.2 billion in new agricultural loans. About $340 million (10.8 percent) of this amount went to SDFRs (see fig. 2). By dollar volume, farm ownership loans accounted for about 71 percent of the guaranteed loans to SDFRs. Farm operating loans accounted for the remaining 29 percent. 
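As an illustration of the share calculations underlying figures such as the 10.8 percent SDFR share of fiscal year 2018 guaranteed-loan dollars, the following sketch uses made-up loan-level records; it is not drawn from Farm Service Agency data.

```python
"""Sketch of the dollar-share calculations behind figures like the SDFR
share of guaranteed-loan dollars and the ownership/operating split. The
records below are hypothetical placeholders."""

from collections import defaultdict

loans = [
    # (borrower_group, loan_purpose, amount in dollars) -- hypothetical
    ("SDFR", "ownership", 519_000),
    ("SDFR", "operating", 279_000),
    ("non-SDFR", "ownership", 1_200_000),
    ("non-SDFR", "operating", 850_000),
]

def dollar_shares(records):
    """Compute the SDFR share of all dollars and the split by loan purpose."""
    totals = defaultdict(float)
    sdfr_by_purpose = defaultdict(float)
    for group, purpose, amount in records:
        totals[group] += amount
        if group == "SDFR":
            sdfr_by_purpose[purpose] += amount
    all_dollars = sum(totals.values())
    sdfr_dollars = totals["SDFR"]
    sdfr_share = sdfr_dollars / all_dollars
    purpose_split = {p: amt / sdfr_dollars for p, amt in sdfr_by_purpose.items()}
    return sdfr_share, purpose_split

if __name__ == "__main__":
    share, split = dollar_shares(loans)
    print(f"SDFR share of guaranteed dollars: {share:.1%}")
    for purpose, frac in split.items():
        print(f"  SDFR {purpose} loans: {frac:.0%} of SDFR dollars")
```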
Guaranteed farm ownership loans to SDFRs averaged about $519,000, while farm operating loans averaged about $279,000. A 1988 amendment to the Consolidated Farm and Rural Development Act states that USDA should establish annual target participation rates for SDFRs on a county-wide basis for farm ownership loans and, to the greatest extent practicable, reserve funds for certain loans it makes or insures under these targets. However, in August 2007, USDA s Office of General Counsel provided a legal opinion that stated that the statute could be read to apply only to the direct loan program. As a result, officials at the Farm Service Agency told us it does not set annual target participation rates by county or reserve funds for guaranteed loans. Over the last 5 fiscal years (2014 through 2018), the Farm Service Agency guaranteed an increasing number of loans to SDFRs each year. The agency guaranteed 489 loans to SDFRs in fiscal year 2014 and 817 loans in fiscal year 2018, a 5-year high. Over that period, the total dollar amount of guaranteed loans to SDFRs increased by 69.6 percent when adjusted for inflation. The increase was similar for farm ownership and farm operating loans (see fig. 3). While the total dollar amount of guaranteed loans to SDFRs increased each year, the percentage of guaranteed loans that went to SDFRs, by dollar volume, decreased from fiscal years 2014 through 2016 (see fig. 4). This percentage started increasing in fiscal year 2017, when SDFRs accounted for 8.7 percent of guaranteed loans by dollar volume. However, guaranteed loans to SDFRs still accounted for a slightly smaller portion of all guaranteed loans in fiscal year 2018 (10.8 percent) than in fiscal year 2014 (11.0 percent). In fiscal year 2018, the dollar amount and percentage of guaranteed loan funds that went to SDFRs differed substantially by state (see table 4). Hawaii and Puerto Rico were the only two states or territories where SDFRs received more than one-half of all guaranteed loans (farm ownership and operating loans combined). However, Hawaii and Puerto Rico received 0.1 percent of all guaranteed loans. For several states where SDFRs received a large dollar amount of guaranteed loans, these loans represented less than 20 percent of the state s guaranteed loan funds (for example, Arkansas, Missouri, and South Dakota). In contrast, several states with the largest proportions of guaranteed loans to SDFRs had less guaranteed loan funds overall (for example, Florida, Wyoming, and Maryland). The Farm Service Agency did not guarantee any loans to SDFRs in Alaska, Connecticut, New Hampshire, or Rhode Island in fiscal year 2018. <3. Stakeholders Identified Multiple Challenges That Socially Disadvantaged Farmers and Ranchers Face in Obtaining Private Agricultural Credit> <3.1. Smaller Operations, Weaker Credit Histories, and Land Ownership Issues Reportedly Present Hurdles to Obtaining Agricultural Credit> According to representatives from some SDFR advocacy groups, federal depository institution regulators, and lending industry associations we interviewed, SDFRs can have difficulty obtaining agricultural credit from private-sector lenders because they operate smaller farms and in some cases do not meet standards for farm revenue, applicant credit history, and collateral. Farm size. As previously discussed, SDFRs are more likely than other farmers and ranchers to operate small farms, which can make it difficult for them to qualify for private credit.
According to data from the 2017 Census of Agriculture, SDFRs represented 30 percent of primary producers but operated 39 percent of farms smaller than 50 acres and 16 percent of farms 500 acres or larger. Some SDFR advocates and lending industry association representatives we interviewed said lenders have several incentives to lend to larger farms. First, one advocate noted that operators of smaller farms typically need smaller loans, and making many small loans is more time- and resource-intensive than making fewer, larger loans. Second, one industry association and one SDFR advocate noted that large farms often produce major commodities such as corn, soybeans, and beef cattle, while small farms often produce specialty crops. The SDFR advocate said underwriting loans to large farms that produce major commodities is easier and less risky because more data are available on the market for those products. Third, representatives of one SDFR advocacy group and one industry association noted that programs such as crop insurance are geared toward large, major-commodity farmers. They said these programs mitigate repayment risk and make lenders more likely to approve a loan or provide more favorable terms, such as lower interest rates. In contrast, representatives from the Office of the Comptroller of the Currency noted that the Community Reinvestment Act can provide incentives for banks to lend to smaller farms. Farm revenue. Consistent with their smaller size, SDFR farms also generate less revenue on average than non-SDFR farms. As previously noted, SDFR primary producers accounted for a disproportionally small portion (13 percent) of total agricultural product sales in 2017 relative to their overall representation among primary producers (30 percent). Additionally, according to one SDFR advocate, SDFRs may have more difficulty than other farmers and ranchers in documenting their revenue because they are more likely to sell their products through informal cash transactions. Operating a lower-revenue farm and having limited documentation of revenue can be hurdles to obtaining private credit because these factors may negatively affect a lender s assessment of the applicant s repayment ability. Federal depository institution regulators have noted that farm revenue is critical to demonstrating a borrower s capacity to repay an agricultural loan. For example, in its risk management expectations for agricultural credit, the Board of Governors of the Federal Reserve System says banks should review borrower-prepared cash-flow statements to identify potential repayment-ability problems. Lenders consider farm revenue when calculating an applicant s debt-to-income ratio (the percentage of income that goes to recurring debt payments), which is a central underwriting criterion. In general, having lower income relative to recurring debt payments indicates weaker repayment ability. Consistent with this principle, Farm Credit Administration regulations require Farm Credit System associations to have written policies and procedures that include underwriting standards that demonstrate an applicant s repayment capacity when approving a loan. Additionally, representatives of one industry lending association said that revenue is the most important factor that banks consider in underwriting agricultural loans. Credit history. Some SDFRs may have relatively low credit scores or limited credit histories, which can make it difficult to obtain agricultural credit. 
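The following sketch illustrates, with hypothetical thresholds, how the repayment-capacity and credit score considerations described above might be combined in a simple underwriting screen; actual lender standards vary and are not prescribed by statute.

```python
"""Illustrative underwriting screen combining the two factors discussed
above: repayment capacity (debt payments relative to income) and credit
score. The 40 percent debt-to-income cap and 660 score floor are
hypothetical; they do not represent any particular lender's standards."""

def debt_to_income(annual_debt_payments: float, annual_income: float) -> float:
    """Share of income committed to recurring debt payments."""
    return annual_debt_payments / annual_income

def passes_screen(annual_debt_payments: float, annual_income: float,
                  credit_score: int,
                  max_dti: float = 0.40, min_score: int = 660) -> bool:
    """Apply a simple two-part screen: debt-to-income cap and score floor."""
    return (debt_to_income(annual_debt_payments, annual_income) <= max_dti
            and credit_score >= min_score)

if __name__ == "__main__":
    # Hypothetical applicant: $35,000 farm income, $16,000 in yearly payments.
    dti = debt_to_income(16_000, 35_000)
    print(f"Debt-to-income ratio: {dti:.0%}")
    print("Passes screen:", passes_screen(16_000, 35_000, credit_score=680))
```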
Some SDFR advocates and lending industry association representatives we interviewed said that some SDFR subgroups are more likely than members of nondisadvantaged groups to have difficulty meeting credit score standards for agricultural loans. Prior research provides some evidence to support this view. For example, the Board of Governors of the Federal Reserve System reported in 2007 that African Americans and Hispanics had lower credit scores on average than non- Hispanic whites and Asians, although the study did not specifically examine farmers and ranchers. While private agricultural lenders are not subject to federal statutory or regulatory credit score requirements for approving agricultural loans, federal depository institution regulators emphasize the importance of evaluating applicants creditworthiness in their lending guidelines. For example, the Office of the Comptroller of the Currency s handbook on agricultural lending states that current credit information is essential to a bank s ability to evaluate borrowers creditworthiness. Lending industry association representatives we interviewed also noted that underwriting for agricultural lending is increasingly standardized and reliant on credit scores. For example, representatives from the Farm Credit Council (the trade association for the Farm Credit System) said approval decisions for about one-half of the loans that Farm Credit System associations make each year are made using credit scorecards. Credit scorecards are algorithms that statistically quantify a borrower s probability of repayment using inputs such as the borrower s credit score. Additionally, participation in the secondary market for agricultural loans may require lenders to comply with credit score criteria. For example, the Federal Agricultural Mortgage Corporation (commonly known as Farmer Mac) a federal government-sponsored enterprise that purchases and securitizes agricultural loans has minimum credit score standards that range from 660 to 720. Collateral. Some SDFRs face challenges using their agricultural land as collateral. Many long-term agricultural loans require the borrower to pledge land as collateral to secure the transaction. For example, long- term loans (up to 40 years) made by Farm Credit System associations must be secured by a first-position lien on interests in real estate, generally enabling the Farm Credit System to obtain ownership or control of the land in the event of default. Federal regulators, lending industry association representatives, and SDFR advocates we spoke with identified several reasons why SDFRs, especially African Americans and American Indians on tribal lands, have difficulty using agricultural land as loan collateral. Some SDFRs do not have a clear title to their agricultural land because the land was passed down informally from generation to generation without a will. In addition, land passed down in this manner can result in numerous heirs thousands in some cases owning the land in common (that is, not physically divided among them). These circumstances can limit use of the land as collateral because of lending requirements or conventions that require formal proof of ownership or that disallow the use of a partial ownership interest as security for a loan. SDFR advocates and officials from the Farm Credit Administration told us these issues have particularly affected African American farmers due to historical factors that limited their access to legal services. 
In our May 2019 report about lending on tribal lands, we discussed how these issues also have posed problems for American Indian farmers. As we also reported in May 2019, American Indian farmers on tribal lands face additional challenges in using tribal land as collateral for agricultural loans because of statutory restrictions and some lenders concerns about their ability to enforce a foreclosure. <3.2. Farmer Advocates Report Additional Challenges for Socially Disadvantaged Farmers and Ranchers Seeking Agricultural Credit> SDFR advocates we spoke with said that in addition to difficulty meeting loan underwriting standards, SDFRs face challenges related to historical discrimination, ongoing unfair treatment by lenders, and a lack of familiarity with some programs and technologies when trying to obtain private agricultural credit. As the Congressional Research Service reported in 2013, allegations of unlawful discrimination against SDFRs in the management of USDA programs are long-standing and well-documented. For example, in 1965, the U.S. Commission on Civil Rights found evidence of discrimination in the delivery of USDA farm programs, including loan programs. A subsequent report by the commission in 1982 and a report by the USDA Civil Rights Action Team in 1997 found continuing problems with the experience or treatment of SDFRs in USDA programs. USDA has also settled several class action lawsuits that SDFRs filed for, among other things, discrimination in the agency s farm assistance programs. The allegations in these lawsuits included that USDA systematically denied SDFRs agricultural credit and other program benefits in violation of ECOA and failed to investigate complaints of discrimination, as required by USDA regulations. The settlements made more than $4 billion in awards available to farmers and ranchers whose claims were approved through administrative procedures. Some SDFR advocates told us that historical discrimination in agricultural lending adversely affects SDFRs current ability to obtain private credit in several ways. First, they said SDFRs who were unfairly denied USDA loans and other program benefits in the past have not been able to develop their farms in the same ways as farmers and ranchers who did receive loans, thus reducing their ability to obtain private credit today. The advocates elaborated that USDA agricultural credit allows recipients to expand operations and to purchase land and equipment that can later be used as collateral, making it easier to get subsequent and larger loans. Some SDFR advocates also stated that historical exclusion from credit markets and farm programs has limited SDFRs familiarity with lending standards and resulted in less formal recordkeeping, which impairs their ability to obtain private-sector credit. Finally, advocates said that historical discrimination has led generations of SDFRs to distrust institutional lenders, making them less likely to apply for credit. Some SDFR advocates we spoke with said that unfair treatment by private lenders is also a barrier to SDFRs obtaining private agricultural credit. One SDFR advocate said some lenders discriminate against SDFRs in loan approval decisions but that they more frequently treat SDFRs unfairly with respect to loan terms and conditions (for example, interest rates, fees, and collateral requirements) and loan servicing (for example, restructuring and foreclosure mitigation actions). 
Another noted that adverse loan terms and conditions and servicing practices can increase the risk that borrowers will lose their farm, house, and other property by making the loan unaffordable or reducing the chances that borrowers will catch up on payments if they fall behind. For example, this SDFR advocate said they were aware of cases in which (1) lenders required SDFRs to pledge potentially excessive collateral for loans, such as the borrower s home in addition to the farm land, and (2) loan servicers moved more quickly to foreclose on SDFR borrowers who were behind on loan payments than on other borrowers and did not provide repayment options that may have allowed them to continue their operations. One SDFR advocate also stated that some SDFRs report not feeling welcome at lending institutions based on the perception of having been repeatedly dismissed by lender staff, while another said that in some cases, SDFRs have not been provided timely or helpful information on the loan application process. One SDFR advocate we spoke with said these practices are prevalent in some agricultural credit markets and that they had been or were currently involved in litigation related to these types of practices. However, banking industry association representatives said they did not believe that SDFRs are being treated unfairly and that denying loans to qualified applicants would cause lenders to decrease profits in a competitive market. They noted that lenders face significant competition, which incentivizes them to make loans to all qualified borrowers, and that lending decisions and loan terms are based only on the applicant s ability to repay a loan and other underwriting criteria. We did not attempt to independently verify claims of unfair treatment of SDFRs by private-sector lenders, in part because data limitations discussed earlier limit the identification and analysis of possible discriminatory practices. Some SDFR advocates also said that some SDFRs may not be obtaining private agricultural credit because they are not aware of all potential credit options and related programs and are not always familiar with the technology needed to access them. For example, one advocate told us some SDFRs may not be aware that they could qualify for private agricultural loans, especially if they are recent immigrants or new to agriculture. This problem may be particularly true for loans from the Farm Credit System associations. Two advocates said SDFRs are not familiar with these lenders, and representatives of the Farm Credit Council told us people who did not grow up in farming tended not to know about the Farm Credit System. SDFR advocates we spoke with said this issue is exacerbated by limited outreach by private lenders to SDFRs, as discussed in more detail later in this report. Advocates also noted that historically disadvantaged groups are less likely to have access to or be familiar with computer technology and the internet, and that credit applications and related financial education programs are now provided online. <4. Lenders and Federal Agencies Conduct Some Outreach to Socially Disadvantaged Farmers and Ranchers, but the Effectiveness of These Efforts Is Unknown> <4.1. 
Farm Credit System Outreach Is Not Specifically Targeted to Socially Disadvantaged Groups, and Data Collection Restrictions Prevent Assessment of Impact> The Farm Credit System does not have a specific mandate to serve SDFRs, but its associations conduct some outreach to SDFRs in implementing the following statutory requirements and Farm Credit Administration regulations. The Farm Credit Act of 1971 was amended in 1980 to require the Farm Credit System to serve young, beginning, and small farmers. Related Farm Credit Administration regulations require the associations to implement effective outreach programs to these groups. While these requirements do not mandate outreach to SDFRs specifically, Farm Credit Administration officials said that many SDFRs qualify as young, beginning, or small farmers and, therefore, that Farm Credit System outreach efforts reach SDFRs to some extent. In 2012, the Farm Credit Administration amended its regulations on business planning to help ensure the Farm Credit System is responsive to the credit needs of all eligible and creditworthy persons. The regulations, which first applied to 2013 business plans, require Farm Credit System associations to develop marketing plans describing, among other things, (1) the demographic groups in their service areas, (2) ways to market their services to all qualified farmers and ranchers, and (3) specific outreach toward diversity and inclusion in each market segment. The supplementary information included with the publication of the final rule cites the perception of some SDFR advocates that Farm Credit System associations are not accessible to underserved farmers and have not conducted sufficient outreach to those populations about programs and services. The full extent of the Farm Credit System associations outreach to SDFRs is unknown. Neither the Farm Credit Administration nor the Farm Credit Council maintains aggregated information on the number or type of completed outreach activities involving SDFR participants. However, our nongeneralizable review of recent marketing plans from six Farm Credit System associations in areas with relatively high proportions of SDFRs identified some examples of outreach to SDFRs. For instance, some associations have partnered with a nonprofit organization to provide educational programs designed to strengthen women s roles in the modern farm enterprise. Associations have also participated in agricultural conferences at historically black colleges and universities and translated marketing materials for non-English speakers. Despite some outreach, some SDFR advocates we spoke with said that Farm Credit System associations outreach has had limited effects on the amount of credit provided to SDFRs and SDFRs familiarity with the system. One SDFR advocate we spoke with said that while some Farm Credit System associations engage with socially disadvantaged communities, the outreach has not increased the diversity of the system s borrowers. Others said that Farm Credit System outreach to SDFR communities has been insufficient and that some SDFRs are still not aware of the Farm Credit System. However, one SDFR advocate noted that the Farm Credit System s outreach to young, beginning, and small farmers has been beneficial for those populations. The impact of Farm Credit System associations outreach to SDFRs is also not known. 
The marketing plan requirement does not oblige Farm Credit System associations to meet specific lending goals or favor any type or group of agricultural producers in their underwriting. Accordingly, the associations are not expected to quantify the extent to which they are meeting their diversity and inclusion outreach plans in the information they provide to their boards of directors. Moreover, Farm Credit Administration officials said Regulation B, discussed earlier, prevents the associations from collecting data on the race, ethnicity, and sex of loan applicants that would be needed to assess the effects of outreach efforts on lending to socially disadvantaged groups. In contrast, the officials noted that Farm Credit System associations are required to set lending targets for young, beginning, and small farmers; monitor outreach to those groups; and report on performance results of their young, beginning, and small farmer programs. In 2018, the Farm Credit System reported that all direct-lender institutions with young, beginning, and small farmer programs within the system were in compliance with these requirements. While the Farm Credit Administration has not evaluated the impact of outreach by Farm Credit System associations, its reviews of association marketing plans have found that most of the plans comply with requirements for outreach toward diversity and inclusion but that some lack specificity. The Farm Credit Administration told us it examines all of the associations marketing plans for regulatory compliance every 3 years. Farm Credit Administration officials reviewed their examinations from 2014 and 2017, the two scheduled examination cycles after the new requirements were implemented in 2012. They found that 85 percent of the 78 Farm Credit System associations examined in 2014 complied with the marketing and outreach requirements, and 94 percent of the 71 associations examined in 2017 complied. In cases where examiners identified deficiencies in marketing plans, the agency said it prescribed corrective actions, including requiring associations to do the following: obtain sufficiently detailed information to analyze and understand develop specific action plans and outreach strategies to market the institution s products and services to potentially underserved markets; and ensure appropriate reporting on progress in accomplishing marketing plan strategies and actions. Farm Credit Administration officials said they hold periodic discussions with managers of Farm Credit System associations to monitor the status of corrective actions and conduct follow-up examinations to determine the adequacy of the corrective actions and, if applicable, the need for additional enhancements. The results of our review of a nongeneralizable sample of association marketing plans were broadly consistent with the Farm Credit Administration s findings. We reviewed the most recent available plans of the six Farm Credit System associations noted previously for evidence of demographic information on the institution s service area and for diversity and inclusion outreach efforts. Among the plans we reviewed, five included demographic information, but one did not. Farm Credit Administration officials said they also had identified that deficiency in their examination of that marketing plan. Additionally, five of the plans had examples of planned outreach efforts to SDFRs, but another one did not. <4.2. 
Other Lenders Conduct Little Outreach to Socially Disadvantaged Farmers and Ranchers and Are Not Required to Do So> According to representatives of lending industry associations we interviewed, commercial banks generally do not target outreach for agricultural lending to specific demographic groups. Officials from the federal depository institution regulators noted that commercial banks and credit unions are not required to conduct outreach on agricultural lending, and that the extent to which any lender conducts outreach is a private business decision. However, officials from one federal depository institution regulator noted that some lenders have participated in conferences organized by SDFR groups. They also said that in fulfilling responsibilities under the Community Reinvestment Act, lenders engage with community groups in their assessment areas to help identify credit needs. The officials said these efforts would likely engage SDFRs in areas where agriculture was prevalent and where agricultural lending was part of a bank s business model. Some SDFR advocates we interviewed said that outreach and engagement by commercial banks was insufficient. For example, despite their familiarity with agricultural lending, some noted that they did not know of any specific outreach to SDFRs by private-sector lenders. They also noted that additional outreach is needed because some SDFRs are not familiar with agricultural lending products offered by commercial banks. Federal depository institution regulators do not monitor outreach to SDFRs by the institutions they supervise but have conducted some additional outreach themselves. Officials from the regulatory agencies told us they do not collect data on the amount of, types of, participation in, or impact of outreach conducted by their regulated institutions. However, as part of their efforts to promote the availability of credit and other services, the federal depository institution regulators have engaged in some outreach to SDFRs, as shown in the following examples. The Office of the Comptroller of the Currency has established an Office of Minority and Women Inclusion and an Office of External Outreach and Minority Affairs, which help to address fair credit access issues affecting minority communities and have worked with some national SDFR groups to coordinate, facilitate, and implement conferences, roundtables, and seminars. The Federal Deposit Insurance Corporation s Community Affairs Branch has engaged bankers, nonprofits, and other stakeholders to provide small business training for SDFRs. This training provides examples of small business lending and has highlighted programs for which participants may qualify. In 2017, the Federal Reserve Bank of St. Louis and the Board of Governors of the Federal Reserve System engaged with federal agencies, businesses, and groups representing SDFRs to develop and publish a guide titled Harvesting Opportunity, which focuses on how credit can provide greater support for local food-related businesses and farmers. <4.3. USDA Conducts Outreach to Socially Disadvantaged Groups on Its Lending Programs, but Data- Collection Challenges Hamper Evaluation of Outcomes> USDA facilitates and provides outreach to SDFRs that some SDFR advocates say has been beneficial, but outreach on USDA-guaranteed farm loans is just one component of this broad-based effort. 
USDA s Office of Partnerships and Public Engagement implements the Outreach and Technical Assistance for Socially Disadvantaged and Veteran Farmers and Ranchers Program, referred to as the Section 2501 program. The program is designed to enhance coordination of outreach, technical assistance, and education efforts authorized under agricultural programs to improve SDFR and veteran farmer and rancher participation in the full range of USDA programs, including guaranteed farm loans. USDA officials said this program primarily provides grants and technical assistance to community-based organizations and develops materials describing best practices for national, state, and local outreach efforts. Two SDFR advocates we interviewed said outreach programs coordinated through the Section 2501 program have improved SDFRs understanding of USDA s farm lending programs, and that the program s efforts to engage SDFRs in programs and services are better now than they have been historically. USDA officials said they track these outreach activities but do not maintain data on activities that specifically address guaranteed loans because the outreach is generally intended to connect socially disadvantaged groups with any USDA program that may be appropriate. In addition to department-level outreach activities, USDA s Farm Service Agency conducts outreach to increase SDFR participation in its programs through activities targeted to underserved populations. Farm Service Agency outreach efforts are conducted by the agency s field offices and overseen by the Outreach Office. The outreach includes lender trainings and partnerships with community-based and tribal organizations to engage socially disadvantaged communities. Farm Service Agency officials said that they have partnered with private-sector lenders to conduct some outreach events specifically related to the guaranteed farm loan program but that most of the outreach is more general. Farm Service Agency officials told us they use data on guaranteed loans to SDFRs to target outreach to underserved communities. As previously discussed, unlike other providers of agricultural credit, USDA generally collects data on the personal characteristics of guaranteed loan applicants and borrowers. Farm Service Agency officials told us that state executive directors, farm loan chiefs, and outreach coordinators plan their outreach in annual strategy sessions. As part of this planning, state offices review the state s lending goals for SDFRs, Census of Agriculture data on the state s farmer population, and data on Farm Service Agency direct and guaranteed loans made to farmers belonging to different socially disadvantaged groups to target outreach to underserved communities. While the outreach is planned by state offices, the Farm Service Agency s Director of Outreach said the Outreach Office has emphasized the use of lending goals and loan data in targeting outreach efforts. Although it maintains data on guaranteed loans made to SDFRs, USDA generally does not evaluate whether SDFR outreach participants go on to use Farm Service Agency lending programs or otherwise evaluate the impact of its outreach on lending to SDFRs. Farm Service Agency officials said that they track outreach activities at the national level by monitoring the number of activities, the groups engaged, and the number of participants, but that they face challenges evaluating the impact of outreach efforts. 
The officials said any personal or demographic information on outreach participants must be voluntarily provided by the participants, but that many of them are reluctant to do so. As a result, data on the characteristics of outreach participants are limited. The lack of data, in turn, makes it difficult to assess how effectively the outreach was targeted and whether it could be expected to increase lending to socially disadvantaged groups. Representatives from one SDFR advocacy organization said that while outreach programs may increase SDFRs understanding of USDA s loan programs, it is unclear how much outreach programs help SDFRs obtain credit because USDA does not track participant outcomes. Farm Service Agency officials said that some of their state offices have begun trying to track the progress of individual outreach participants in obtaining loans through Farm Service Agency programs (using voluntarily provided information), but that these efforts were in the early stages. <5. Agency Comments and Our Evaluation> We provided a draft of this report to USDA, the Farm Credit Administration, the Consumer Financial Protection Bureau, the Office of the Comptroller of the Currency, the Federal Deposit Insurance Corporation, the Board of Governors of the Federal Reserve System, and the National Credit Union Administration for their review and comment. The Board of Governors of the Federal Reserve System and the National Credit Union Administration did not provide comments. USDA, the Farm Credit Administration, the Consumer Financial Protection Bureau, the Office of the Comptroller of the Currency, and the Federal Deposit Insurance Corporation provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Agriculture, the Acting Chairman and Chief Executive Officer of the Farm Credit Administration, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or ortiza@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. Appendix I: Objectives, Scope, and Methodology The objectives of this report were to examine (1) what is known about the amount and types of agricultural credit to socially disadvantaged farmers and ranchers (SDFR), (2) challenges SDFRs reportedly face in obtaining agricultural credit, and (3) outreach efforts to SDFRs regarding agricultural credit and related services. In this report, we use the term SDFR as defined in the Consolidated Farm and Rural Development Act, as amended, and related U.S. Department of Agriculture (USDA) regulations. The act defines a socially disadvantaged group as one whose members have been subject to racial, ethnic, or gender prejudice because of their identity as members of a group without regard to their individual qualities. USDA regulations further define SDFRs as belonging to the following groups: American Indians or Alaskan Natives, Asians, Blacks or African Americans, Native Hawaiians or other Pacific Islanders, Hispanics, and women. 
Although the act and USDA regulations defined SDFR for purposes of classifying participants in USDA programs, in this report, we use USDA s definition to identify SDFRs both in USDA programs and in the broader population of agricultural producers, consistent with the statutory provision this report responds to. Additionally, based on the language of the statutory provision, we excluded USDA direct loans from the scope of our review and focused on lending by private entities. The provision defines an agricultural credit provider as a Farm Credit System institution, a commercial bank, the Federal Agricultural Mortgage Corporation, a life insurance company, and any other individual or entity as determined by the Comptroller General of the United States. <6. Estimates of the Numbers of Farms and Socially Disadvantaged Farmers and Ranchers> For the background section of this report, USDA s National Agricultural Statistics Service provided estimates from the 2012 and 2017 Censuses of Agriculture on the number of farm and ranch operations (which we refer to as farms) whose primary producer (that is, the main decision maker) qualified as an SDFR, broken down by different SDFR subgroups. The service also provided estimates on the characteristics of farms whose primary producer was an SDFR, including the total acreage and market value of products sold. We compared the 2017 Census estimates of SDFR primary producers to analogous estimates from the 2012 Census and calculated numerical and percentage differences. We reviewed documentation on the methodologies used by the 2012 and 2017 Censuses to identify the main decision maker on a farm. We also interviewed National Agricultural Statistics Service officials about methodological differences between the two censuses and their likely effects on the number of reported SDFR primary producers. The 2012 Census used the term principal operator rather than primary producer to identify the main farm decision maker, but for ease of presentation we use the term primary producer in reference to both the 2012 and 2017 Censuses because the terms generally have the same meaning. <7. Amount and Types of Credit to Socially Disadvantaged Farmers and Ranchers> To examine what is known about the amount and types of agricultural credit to SDFRs, we reviewed requirements in the Equal Credit Opportunity Act and its implementing regulation (Regulation B) governing the collection of data on the personal characteristics of loan applicants. We interviewed officials from the Consumer Financial Protection Bureau (CFPB), which has primary responsibility for issuing Equal Credit Opportunity Act regulations, about these requirements and the status of a related rulemaking pursuant to a provision in the Dodd-Frank Wall Street Reform and Consumer Protection Act. We also interviewed officials from the federal depository institution regulators (the Board of Governors of the Federal Reserve System, Federal Deposit Insurance Corporation, Office of the Comptroller of the Currency, and National Credit Union Administration) about the extent of information available on agricultural lending to SDFRs and about data restrictions stemming from Regulation B. We also drew upon information and analysis from our June 2008 and July 2009 reports on data limitations in nonmortgage lending. Additionally, we analyzed data from USDA s Agricultural Resource Management Survey. The survey is a multiphase series of interviews that uses a multiframe, stratified, probability-weighted sampling design.
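As a simple illustration of the census-to-census comparison described above, the sketch below computes the numerical and percentage change using the SDFR producer counts cited in the Background section; as noted there, such comparisons should be interpreted with the 2017 methodological revisions in mind.

```python
"""Sketch of the census-to-census comparison: the numerical and
percentage change between the 2012 and 2017 counts. The example uses
the SDFR producer counts cited in the Background section."""

def change(count_2012: int, count_2017: int):
    """Return the numerical difference and the percentage change from 2012."""
    diff = count_2017 - count_2012
    pct = diff / count_2012
    return diff, pct

if __name__ == "__main__":
    sdfr_producers_2012 = 1_133_163
    sdfr_producers_2017 = 1_390_449
    diff, pct = change(sdfr_producers_2012, sdfr_producers_2017)
    print(f"Change in SDFR producers, 2012-2017: +{diff:,} ({pct:.1%})")
```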
The survey does not include Hawaii or Alaska. USDA s Economic Research Service provided us customized summary statistics from the 2015, 2016, and 2017 surveys combined. Specifically, the service averaged survey data for those 3 years to provide a robust sample size of surveyed SDFRs. The service provided estimates and associated confidence intervals on the proportion of primary producers who were and were not SDFRs; the annual average amount of outstanding farm debt each group had over the 3-year period, by type of debt (ownership or operating); and the lending source for this debt (USDA Farm Service Agency, Farm Credit System institution, commercial bank and savings associations, or other). The service adjusted debt information for inflation. Specifically, to create standard errors for the 3-year averages, the service adjusted outstanding debt to 2017 dollars using the chain-type gross domestic product deflator. We compared and contrasted survey statistics for SDFRs and non-SDFRs, focusing on the volume and percentage of total outstanding farm debt, farm ownership and operating debt, and lender type. We interviewed Economic Research Service officials about limitations of the survey data. The limitations include the small size of several SDFR subgroups (which prevented more detailed analysis of different demographic groups), the potential underrepresentation of SDFRs in the survey, and potential overreporting of debt from commercial lenders. With regard to lender type, respondents may not have known the specific types of lenders they used. The survey results for all farms appear to overrepresent debt from commercial banks and savings associations when compared with data collected by the service on farm- sector balance sheets. It is possible some survey respondents mischaracterized some debt from Farm Credit System institutions as debt from commercial banks. These issues and their implications are discussed in the body of this report. To assess the reliability of the survey data, we reviewed methodology and quality review documents and compared results to other publicly available sources, such as farm balance-sheet data and the 2017 Census. We concluded that the data were sufficiently reliable for describing the amount and types of agricultural credit SDFRs received, the sources of this credit, and how SDFRs and non-SDFRs compared along these dimensions. We also analyzed USDA data on farm ownership and farm operating loans guaranteed by the Farm Service Agency in fiscal years 2014 through 2018. We focused on guarantees issued by the Farm Service Agency because it operates the primary federal agricultural credit programs. For the 5-year period, we analyzed the annual amount and percentage of guaranteed loans (by dollar volume and adjusted for inflation) that went to SDFRs. We also separately examined trends in guaranteed farm operating and farm ownership loans to SDFRs. Finally, we analyzed the volume of guaranteed loans to SDFRs by state. We used this analysis to identify the top 10 states (or territories) in terms of (1) the dollar amount of guaranteed loans that went to SDFRs and (2) the proportion of guaranteed lending to the state or territory that went to SDFRs. To assess the reliability of data from USDA, we conducted electronic testing including checks for missing data and erroneous values and compared the data to publicly available sources. 
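The guaranteed-loan calculations described above (inflation-adjusted dollar volumes, the share of lending that went to SDFRs, and a top-10 ranking of states) can be illustrated with a simplified sketch. The records, field layout, and deflator values below are hypothetical placeholders rather than the actual Farm Service Agency data or the code we used, and the assignment of fiscal years from loan closing dates is explained in the next paragraph.

from datetime import date

# Hypothetical guaranteed-loan records: (closing date, state, dollar amount, SDFR flag)
loans = [
    (date(2017, 9, 20), "TX", 250_000, True),
    (date(2017, 10, 5), "OK", 400_000, False),   # closes in fiscal year 2018
    (date(2018, 3, 14), "TX", 150_000, True),
]

def fiscal_year(closing: date) -> int:
    # Federal fiscal years run October 1 through September 30, so an
    # October-December closing date falls in the next fiscal year.
    return closing.year + 1 if closing.month >= 10 else closing.year

# Hypothetical inflation adjustment factors to a common dollar year.
deflator = {2017: 1.02, 2018: 1.00}

totals_by_state = {}
for closing, state, amount, is_sdfr in loans:
    fy = fiscal_year(closing)
    real_amount = amount * deflator[fy]
    totals = totals_by_state.setdefault(state, {"sdfr": 0.0, "all": 0.0})
    totals["all"] += real_amount
    if is_sdfr:
        totals["sdfr"] += real_amount

# Rank states by the inflation-adjusted dollar volume of guaranteed lending to SDFRs.
top_states = sorted(totals_by_state.items(), key=lambda kv: kv[1]["sdfr"], reverse=True)[:10]
for state, totals in top_states:
    share = totals["sdfr"] / totals["all"]
    print(state, round(totals["sdfr"]), f"{share:.0%}")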
The loan guarantee data we present are somewhat different than publicly available information on USDA s website because we used loan closing dates to group loans by fiscal year, while the publicly available data used the dates on which USDA obligated commitment authority for the loans. According to USDA officials, the closing date is a more accurate representation of the actual amount of loans guaranteed in a fiscal year, because some loans for which commitment authority is obligated may close in the following fiscal year or not close at all. We also interviewed USDA officials about interpretations of data fields and robustness of estimated values, among other things, and reviewed USDA internal policies and procedures for data entry. We concluded that the data were sufficiently reliable for describing the amount and proportion of farm lending guaranteed by the Farm Service Agency that went to SDFRs and non-SDFRs nationwide and by state. Finally, we reviewed documents and interviewed officials from the Farm Service Agency on the agency s performance goals and target participation rates for farm lending to SDFRs. We also reviewed a 2007 USDA Office of General Counsel legal opinion on a statutory provision concerning establishment of target participation rates for SDFRs. However, an evaluation of the legal opinion was outside the scope of our study. <8. SDFR Credit Challenges and Outreach Efforts to SDFRs> To examine challenges SDFRs face in obtaining agricultural credit and outreach efforts to SDFRs regarding agricultural lending, we conducted searches of government and academic literature for research on private agricultural lending to socially disadvantaged groups. We searched the internet and various databases, such as AGRICOLA, EconLit, ProQuest Newsstand Professional, and Social SciSearch. Using broad search terms, we identified articles related to our research objectives that provided useful context and discussion topics for interviews with stakeholders. We did not identify any government or peer-reviewed academic literature that directly addressed private agricultural lending to socially disadvantaged groups, barriers those groups may face when trying to obtain agricultural credit, or outreach to disadvantaged groups by private agricultural lenders. We also solicited expert recommendations for academic literature on agricultural lending to socially disadvantaged groups. Several SDFR advocates identified the Socially Disadvantaged Farmers and Ranchers Policy Research Center as a potential source for academic literature on the subject. We found that the center had conducted some potentially relevant research but that the work had yet to be published in academic journals or government publications. To review efforts by agricultural lenders and their regulators to provide and oversee credit-related services to SDFRs including marketing, outreach, and education activities we reviewed data and documents from the Farm Credit System, USDA, and the federal depository institution regulators. We reviewed summary statistics from the Farm Credit Administration s 2014 and 2017 examinations of Farm Credit System association marketing plans to determine the extent to which the associations had met requirements for outreach for diversity and inclusion. We supplemented this effort by reviewing marketing plans from a sample of six Farm Credit System associations in areas with substantial proportions of SDFRs from each of the socially disadvantaged groups identified in USDA regulations. 
While we included associations from different geographic regions of the country, the sample was not intended to be representative of all associations. We documented the extent to which the marketing plans we reviewed contained information on the demographic characteristics of the population in the associations' service areas and planned outreach activities for diversity and inclusion. We also documented examples of outreach to SDFRs that were ongoing or that the associations had completed. Further, we also reviewed illustrative examples of outreach materials to SDFRs developed by USDA and the federal depository institution regulators, and we interviewed officials from these agencies about their outreach efforts. To gain further insight into challenges faced by and outreach efforts to SDFRs, we interviewed (1) SDFR advocacy and research organizations, (2) industry group representatives, and (3) federal agency officials. We refer collectively to the entities we interviewed as stakeholders. To select SDFR advocacy and research organizations, we used a snowball sampling technique that identified organizations based on referrals obtained during prior GAO studies and referrals from stakeholder interviews during this study. We limited our interviews to organizations that are national in scope and that focus on one or more socially disadvantaged populations and on agricultural credit or finance. Based on the snowball sampling, we identified and interviewed representatives from the following five groups: Socially Disadvantaged Farmers and Ranchers Policy Research Center, National Sustainable Agriculture Coalition, National Black Farmers Association, Rural Coalition, and Rural Advancement Foundation International-USA. The snowball sampling did not identify a national advocacy organization focused on women farmers (the largest SDFR subgroup), but we identified American Agri-Women based on an internet search, and we interviewed representatives from that organization as well. Because the group of organizations we interviewed was a nonprobability sample, the information they provided is not generalizable. We also interviewed representatives from lending industry groups (the American Bankers Association, the Independent Community Bankers of America, and the Farm Credit Council) that we selected to cover the major types of private institutional lenders that make agricultural loans, including large commercial banks, community banks, and the Farm Credit System. Additionally, we contacted industry associations representing insurance companies and community development financial institutions, both of which provide some agricultural credit, but representatives from these associations said they did not have information directly related to our research topic. Finally, we interviewed officials from USDA and its Farm Service Agency, the Farm Credit Administration, CFPB, and the federal depository institution regulators. For our work on credit challenges faced by SDFRs, we also drew upon information and analysis from our May 2019 report on agricultural lending on tribal lands. Among other things, that report describes (1) what is known about the agricultural credit needs of Indian tribes and their members, (2) barriers stakeholders identified to agricultural credit on tribal lands, and (3) Farm Credit System authority and actions to meet those agricultural credit needs. We conducted this performance audit from January 2019 to July 2019 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: GAO Contact and Staff Acknowledgments <9. GAO Contact> <10. Staff Acknowledgments> In addition to the contact named above, Steve Westley (Assistant Director); Jeremy Anthony (Analyst in Charge); Katherine Carter; William Chatlos; Tom Cook; Sam Portnow; Jennifer Schwartz; Jena Sinkfield; Tyler Spunaugle; and Farrah Stone made key contributions to this report.
Why GAO Did This Study
In 2017, there were about 2 million farm and ranch operations nationwide. Farmers and ranchers often require loans to buy agricultural real estate, make capital improvements, and purchase supplies and equipment. However, minorities and women comprise a disproportionately small share of agricultural producers, and certain minority groups have alleged discrimination in obtaining agricultural credit. Most agricultural lending is done by either commercial banks or the Farm Credit System, a network of lenders regulated by the Farm Credit Administration. USDA accounts for a small share of agricultural credit, but it makes direct loans and guarantees loans made by private lenders. USDA and Farm Credit System lenders have responsibilities to expand credit access.
Congress included a provision in statute for GAO to study agricultural credit services provided to SDFRs. USDA direct loans were outside the scope of GAO's review. This report examines (1) what is known about the amount and types of agricultural credit to SDFRs, (2) challenges SDFRs reportedly face in obtaining agricultural credit, and (3) outreach efforts to SDFRs regarding agricultural credit and related services.
GAO analyzed survey, census, and other USDA data; reviewed statutes and regulations governing collection of personal data on borrowers; and reviewed Farm Credit Administration and USDA documentation on outreach to SDFRs. GAO also interviewed SDFR advocacy groups, lending industry groups, and officials from the Farm Credit Administration, USDA, and the federal depository institution regulators.
What GAO Found
Information on the amount and types of agricultural credit to socially disadvantaged farmers and ranchers (SDFR)—which the U.S. Department of Agriculture (USDA) defines as members of certain racial and ethnic minority groups and women—is limited. Comprehensive data on SDFRs' outstanding agricultural debt are not available because regulations generally prohibit lenders from collecting data on the personal characteristics of applicants for loans other than certain mortgages. A Consumer Financial Protection Bureau rulemaking pursuant to a provision in the Dodd-Frank Wall Street Reform and Consumer Protection Act that requires collection of such data in certain circumstances would modify this prohibition for certain loans, possibly including some agricultural loans. The bureau delayed the rulemaking in 2018 due to stated resource constraints and other priorities, but reported that it plans to resume work on the rule later in 2019. An annual USDA survey of farmers provides some insights into agricultural lending to SDFRs but, according to USDA, may underrepresent SDFRs compared to more inclusive estimates from the 2017 Census of Agriculture. In the 2015–2017 surveys, SDFRs represented an average of 17 percent of primary producers in the survey, but they accounted for 8 percent of outstanding total agricultural debt. Loans to purchase agricultural real estate accounted for most of SDFRs' outstanding debt (67 percent).
SDFRs reportedly face a number of challenges that hamper their ability to obtain private agricultural credit. According to SDFR advocacy groups, lending industry representatives, and federal officials, SDFRs are more likely to operate smaller, lower-revenue farms, have weaker credit histories, or lack clear title to their agricultural land, which can make it difficult for them to qualify for loans. SDFR advocacy groups also said some SDFRs face actual or perceived unfair treatment in lending or may be dissuaded from applying for credit because of past instances of alleged discrimination. Additionally, they noted that some SDFRs may not be fully aware of credit options and lending requirements, especially if they are recent immigrants or new to agriculture.
Private lenders and federal agencies conduct outreach to SDFRs, but the effectiveness of these efforts in increasing lending is unknown. For example, lenders have sponsored educational events targeted to SDFRs and translated marketing materials for non-English speakers. Farm Credit Administration regulations require Farm Credit System lenders to prepare marketing plans that include specific outreach actions for diversity and inclusion. The Farm Credit Administration examines these plans and indicated that it has prescribed corrective actions in some cases. However, the Farm Credit Administration does not require lenders to meet specific lending goals, and the regulatory data restrictions noted previously constrain the Farm Credit Administration's ability to assess the effect of outreach efforts. USDA conducts outreach to SDFRs and lenders about its loan programs and collects data on the personal characteristics of loan applicants. However, USDA officials said they face challenges evaluating the impact of their outreach efforts, in part because outreach participants are reluctant to provide their demographic information. |
Prior to fiscal year 2017, agencies submitted this data at an aggregate, rather than the vehicular level, so that costs or other performance could not be analyzed at the vehicular level. For fiscal year 2017, as required by GSA and DOE, agencies began submitting vehicular level data to the FAST database, providing more detail about agency s vehicles. The FAST database specifically tracks data to assess agencies performance relative to fleet energy requirements in federal statute and executive orders. <1.2. Alternative Fuel Vehicles> A range of vehicles qualify as alternative fuel vehicles (see fig. 1). This range includes vehicles that run entirely on alternative fuel, such as electricity, and dual-fueled vehicles that can run on an alternative fuel as well as on gasoline, such as flex-fuel vehicles, which can run on gasoline or ethanol fuel blends (E85). In 2008, the definition of alternative fuel vehicles was amended to include hybrid electric vehicles, which run on gasoline with help from an electric battery, and, in certain circumstances, other vehicles that would achieve a significant reduction in petroleum consumption, such as highly fuel efficient gasoline vehicles that are also low greenhouse gas-emitting vehicles. Alternative fuel vehicles, including electric vehicles, can offer environmental benefits compared to similarly-sized conventional petroleum-fueled vehicles but also carry their own environmental costs. For example, flex-fuel vehicles, if fueled by E85, reduce petroleum use because E85 consists of up to about 85 percent ethanol, and according to DOE, using ethanol as a vehicle fuel reduces greenhouse gas emissions, along with emission of other harmful toxics. However, using ethanol increases other harmful emissions deemed carcinogenic and may also contribute to ozone formation. Furthermore, as we reported in May 2019, the production of biofuels, such as ethanol, just like the production of gasoline, results in greenhouse gas emissions throughout its life- cycle including growing the corn feedstock, transporting it, converting it to ethanol, distributing the ethanol, and burning it in an engine. Other emissions are released indirectly through broad economic changes associated with increased biofuel use, including increased ethanol use, such as when changes in land use to grow corn cause the conversion of previously nonagricultural lands into agricultural lands. Nonetheless, recent studies have found the life-cycle emissions of corn ethanol to be lower than those of gasoline. Similarly, battery-electric, plug-in hybrid electric, and hybrid-electric vehicles rely on batteries for all or some of their power, reducing or eliminating petroleum use and associated tailpipe greenhouse gas emissions, but charging, producing, and disposing of these batteries can result in environmental effects. With respect to charging, the production of electricity to power these vehicles results in emissions, the amount of which is dependent on the source of the electricity, a factor we discuss in greater detail later in this report. With respect to production, GAO previously reported that extracting lithium and other minerals from locations where it is abundant, such as in South America, can pose environmental challenges that would damage the ecosystems in these areas. 
With respect to disposal, according to DOE s alternative-fuels data center, the disposal of batteries used in electric and hybrid-electric vehicles can result in hazardous materials entering the waste stream but work is under way to develop battery recycling processes that minimize the life-cycle effects of such batteries. According to DOE, as electric-drive vehicles become increasingly common, the battery-recycling market may expand. In addition, the climate in which battery-electric and plug-in electric vehicles are used can affect the life of the battery. However, federal agencies do not collect the data that would allow analysis of these effects specific to the use of vehicles in federal agencies fleets. Furthermore, emissions related to fuel production or battery production or disposability are not incorporated into the requirements placed on federal agencies with respect to their fleets. As we discuss in more detail later, the various types of alternative fuel vehicles vary in the extent to which they can help agencies meet existing requirements to reduce petroleum use and the subsequently revoked requirement in place for fiscal year 2017 to reduce tailpipe greenhouse gas emissions. <1.3. Federal Responsibilities> According to DOE officials, DOE is responsible for overseeing energy goals and requirements and assists agencies in meeting these federal energy requirements. DOE tracks whether federal agencies are meeting the fleet energy requirements by analyzing the fleet inventory, fuel consumption, and fuel use data uploaded to the FAST database. DOE also oversees the Fleet Sustainability Dashboard (FleetDASH) database. FleetDASH tracks agencies fuel consumption through data produced when employees use fuel cards. This tool can track where vehicles are filling up and if there was an alternative fuel station nearby that could have been used. FleetDASH can also provide agency fleet managers with reports on alternative fuel use and when drivers missed opportunities to fuel with alternative fuels. DOE also issues guidance and conducts research into vehicle technologies that can support energy requirements, including electric vehicles. In prior work, we recommended that DOE develop guidance for agencies that specifies the elements that agencies should include in their plans for acquiring a mix of vehicles to meet federal requirements and goals. In June 2010, DOE issued the Comprehensive Federal Fleet Management Handbook, implementing this recommendation. DOE s Fleet Management Handbook recommends to agencies how to develop greenhouse gas and petroleum reduction strategies and acquire vehicles in support of these strategies, among other issues. DOE also has developed online tools to help provide guidance to agencies and consumers on the fuel efficiency and environmental effects of vehicles. GSA is responsible for providing vehicles for federal agencies to purchase or lease. GSA is a mandatory source for purchase of new vehicles for executive agencies and other eligible users. Federal agencies can also use GSA to acquire leased vehicles. Under this arrangement, an agency informs GSA what kind of vehicle is necessary for its mission. Every year, GSA publishes an annual guide on vehicles available for purchase or lease that includes the vehicles fuel type, purchase and lease prices, size, and other specifications. In setting the lease prices, GSA is required by law to recover all costs it incurs in providing vehicles and services to federal customers. 
Agencies that lease vehicles from GSA generally pay a monthly rate and a mileage rate. These charges are designed to cover fixed costs such as: (1) the vehicle s acquisition cost; (2) administrative costs (including staff and facilities); and (3) depreciation as well as the variable costs of fueling (except electricity used) and vehicles maintenance. In the case of alternative fuel vehicles, if the cost of the vehicle is greater than that of an equivalent conventional vehicle, agencies must cover these higher costs. Pursuant to law, GSA distributes these higher costs for alternative fuel vehicles across the agency s entire leased fleet via a flat per-vehicle monthly surcharge in the year the vehicle was acquired. Surcharges are set at the agency headquarters level. According to a GSA fact sheet, this approach allows GSA to offer a greater variety of alternative fuel vehicles without affecting lease rates of non-alternative fuel vehicles and spread the additional cost across all agencies. At times, GSA has conducted special pilot programs that have waived higher costs of alternative fuel vehicles in order to test new technology. For example, in 2011 and 2014, GSA ran two pilot programs that added over 300 electric vehicles and charging stations to the fleet. According to GSA officials, these pilots were designed to help GSA Fleet understand more about the performance, costs, and maintenance needs of electric vehicles to help them prepare for the potential increase in electric vehicles in the fleets in order to better advise other agencies on these vehicles use and operation. In these programs, GSA spent over $5.9 million covering the additional costs for the electric vehicles and spent another $1.2 million on purchasing electric-vehicle-charging stations. <2. Agencies Reported Meeting Most Fleet Energy Requirements by Adding More Alternative Fuel Vehicles to their Fleets and Improving Fleet Management> The majority of agencies subject to federal-fleet energy requirements reported meeting most requirements for fiscal year 2017 by changing the mix of vehicles acquired and improving fleet management. Specifically, agencies credited acquiring low greenhouse-gas-emitting and alternative fuel vehicles for helping to reduce petroleum use and per-mile greenhouse gas emissions. Agencies also described improving their fleet management in other ways, such as removing unnecessary vehicles and reducing miles traveled in order to reduce petroleum use and greenhouse gas emissions. Agencies fleets reflected increasing numbers of alternative fuel vehicles over the past 10 years, predominantly flex-fuel vehicles. <2.1. Agency Officials Stated That Acquisitions and Better Fleet Management Helped Reduce Petroleum Use and Greenhouse Gas Emissions> DOE and other agency officials we spoke with from agencies that met the reduction targets for petroleum use and per-mile greenhouse gas emissions generally attributed their ability to meet these requirements to efforts in two areas: 1. acquiring low greenhouse-gas-emitting vehicles whenever they could (even if they did not meet the related requirement) as well as alternative fuel vehicles, and 2. improving fleet management in other ways, such as by eliminating unnecessary vehicles or driving fewer miles, in line with GSA s fleet management guidance. In line with these efforts, a majority of agencies reported meeting most fleet energy requirements for fiscal year 2017 (see table 2). 
Fleet managers at two of the case study agencies said that acquiring low greenhouse-gas-emitting vehicles was key to their ability to meet the fiscal year 2017 targets for reducing petroleum use or greenhouse gas emissions. For example, although VA reported not meeting the low greenhouse-gas-emitting acquisitions requirement for fiscal year 2017, VA officials said that they did acquire low greenhouse gas vehicles when they could, and that to the extent they acquired such vehicles, it was the primary reason they were able to reduce their per-mile greenhouse gas emissions by 24 percent from fiscal year 2014 to fiscal year 2017. This reported reduction far exceeded the requirement for a 4 percent reduction in per-mile greenhouse gas emissions during this time frame. According to VA officials, VA s acquisition process requires them to consider low greenhouse-gas-emitting vehicles for each acquisition and to select one whenever one is available that will meet the purpose for the vehicle. According to VA officials, the reason VA reported not meeting the low greenhouse-gas-emitting acquisitions requirement for fiscal year 2017 was that the agency did not consistently self-certify for exceptions to the requirement in cases where there was no low greenhouse-gas-emitting vehicle available that met their mission needs, an issue we also heard from GSA officials. (As shown in table 2, above, this was the one fleet- energy requirement that was reported as being met by less than a majority of the 29 agencies, with 8 reporting meeting this requirement for fiscal year 2017). Fleet managers at all of our case study agencies emphasized that they sought to acquire low greenhouse-gas-emitting vehicles whenever one was available that would serve their needs. GSA officials told us agencies are acquiring significant numbers of low greenhouse gas vehicles. By their count, of the sedans agencies acquired in fiscal year 2018, 92 percent were low greenhouse-gas-emitting vehicles; of the light-duty sport-utility vehicles and trucks agencies acquired, 45 percent were low greenhouse-gas-emitting vehicles. GSA officials stated that according to their analysis, it is likely that the low number of low greenhouse gas vehicles being reported is a result of how the vehicles are identified and reported, and that the number reported is lower than the number acquired. Vehicles considered to be low greenhouse-gas-emitting vehicles include selected makes and models of conventionally fueled vehicles that were identified by EPA as highly efficient, as well as different types of alternative fuel vehicles, such as selected makes and models of flex fuel vehicles, plug-in hybrid electric vehicles, and hybrid electric vehicles, and all battery electric vehicles. Thus, the costs of vehicles considered to be low greenhouse-gas-emitting vary widely. We discuss later in the report the costs of different types of alternative fuel vehicles. Along with the acquisition of low greenhouse- gas-emitting vehicles generally, fleet managers at some case study agencies stated that their acquisition and use of alternative fuel vehicles also helped them to meet the fiscal year 2017 targets for reducing petroleum and per-mile greenhouse gas emissions. Fleet managers at two agencies we spoke with stated or reported that their acquisitions of hybrid vehicles and, to a lesser extent, small numbers of plug-in hybrid and battery electric vehicles also helped managers to meet petroleum and greenhouse gas emissions reduction targets. 
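As a rough illustration of how a per-mile greenhouse gas reduction such as VA's reported 24 percent is measured against the 4 percent target discussed above, the following sketch uses hypothetical fleet emissions and mileage totals; the figures are assumptions for demonstration and are not drawn from agency data.

# Hypothetical fleet totals for the baseline and comparison years.
baseline = {"grams_co2e": 1.20e11, "miles": 3.0e8}   # fiscal year 2014
current  = {"grams_co2e": 9.10e10, "miles": 3.0e8}   # fiscal year 2017

def per_mile(totals):
    # Per-mile emissions in grams of CO2 equivalent per mile.
    return totals["grams_co2e"] / totals["miles"]

reduction = 1 - per_mile(current) / per_mile(baseline)
target = 0.04  # 4 percent per-mile reduction required by fiscal year 2017

print(f"Per-mile reduction: {reduction:.1%}")   # about 24% with these made-up figures
print("Target met" if reduction >= target else "Target not met")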
According to Interior's fiscal year 2015 Strategic Sustainability Performance Plan, over 1,300 hybrids helped the agency reduce petroleum consumption, increase fuel efficiency, and reduce greenhouse gas emissions. Within Interior, officials at the National Park Service told us that they replaced older, inefficient gas vehicles with more fuel-efficient hybrids. EPA officials stated that acquiring hybrid vehicles and plug-in hybrid electric vehicles helped them exceed their per-mile greenhouse gas emission reduction target for fiscal year 2017 by just over 9 percent. Furthermore, of the 29 agencies we surveyed, 20 identified that a key benefit to acquiring battery-electric or plug-in hybrid electric vehicles was environmental, particularly in reducing greenhouse gas emissions. In addition, some fleet managers emphasized the role that flex-fuel vehicles fueled with E85 had played in their efforts to meet these targets. Some agencies told us that they acquired flex-fuel vehicles to meet alternative fuel vehicle acquisition requirements, and that using E85 in these vehicles contributed to reducing petroleum use and per-mile greenhouse gas emissions. For example, DOT's fleet manager stated that DOT's acquisition of flex-fuel vehicles and focus on using E85 to fuel those vehicles when available helped DOT to meet these targets for fiscal year 2017. Similarly, in the 2016 Strategic Sustainability Performance Plan, EPA emphasized that using alternative fuel in flex-fuel vehicles helped the agency reduce petroleum use. According to DOE officials, for agencies that met the fiscal year 2017 petroleum reduction target, about 11 percent of their petroleum reduction was due to using alternative fuel. According to DOE officials, the balance of petroleum reduction for these agencies was achieved through fuel efficiency improvements and behavioral changes, including reduction in vehicle miles traveled. In spite of the emphasis some agencies put on alternative fuel use as part of their strategy to reduce petroleum use and greenhouse gas emissions, alternative fuel use in federal fleets overall has dropped in recent years. According to data reported in FAST, alternative fuel use increased from 4.9 million gasoline gallon equivalents in fiscal year 2005 to 16.2 million gasoline gallon equivalents in fiscal year 2013, but since fiscal year 2013 it has declined, to 12.1 million gasoline gallon equivalents in fiscal year 2017 (see fig. 2). The fleet energy requirement to increase use of alternative fuel by 10 percent is based on a fiscal year 2005 baseline, and most agencies reported continuing to meet this requirement. In fact, as a whole, the federal government could continue to decrease its alternative fuel use by as much as 6.7 million gasoline gallon equivalents and still meet the targeted 10 percent increase above the fiscal year 2005 baseline. While E85 was the primary alternative fuel used, according to DOE data, alternative fuel use per dual-fueled vehicle is also at comparatively low levels, decreasing between fiscal years 2012 and 2016 from 123 to 90 gasoline gallon equivalents. This decrease occurred despite DOE's reporting that the number of dual-fueled alternative fuel vehicles with access to alternative fuel increased from about 80,000 vehicles to about 112,000 over the same period. DOE officials said agencies could be using more alternative fuel, but suggested the recent decline could be due to a general lack of available E85 stations, among other reasons.
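The 6.7 million gasoline-gallon-equivalent headroom cited above follows from simple arithmetic on the FAST totals reported in the text, as the short sketch below shows; it simply reproduces the reported figures.

# Gasoline gallon equivalents (GGE) of alternative fuel use reported in FAST.
baseline_fy2005 = 4.9e6
use_fy2017 = 12.1e6

# The requirement is a 10 percent increase over the fiscal year 2005 baseline.
required = baseline_fy2005 * 1.10          # about 5.4 million GGE

headroom = use_fy2017 - required           # about 6.7 million GGE
print(f"Minimum required use: {required / 1e6:.1f} million GGE")
print(f"Headroom above the requirement: {headroom / 1e6:.1f} million GGE")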
Fleet managers from all five case study agencies reported that their efforts to improve fleet management even beyond those specifically related to acquiring alternative fuel vehicles also helped them to reduce petroleum use and greenhouse gas emissions. Officials at several agencies reported in their Strategic Sustainability Performance Plans or told us that carrying out required fleet reviews helped them reduce the number of vehicles and change to more fuel-efficient vehicles, which directly helped them meet energy requirements. For example, EPA officials told us that through reviewing their vehicle usage, they identified which vehicles to either eliminate or replace with more efficient ones, moves that resulted in reducing petroleum use. Furthermore, in its 2017 Strategic Sustainability Performance Plan, EPA cited that it has reduced its fleet by 170 vehicles in the past 5 years and that its last study showed the potential to discontinue use of 80 to 100 vehicles in the next 5 years. Similarly, DOD reported in its fiscal year 2016 Strategic Sustainability Performance Plan that Army s strategy to meet the requirement to reduce petroleum use was to reduce its fleet size and find the right mix of vehicles to meet its mission needs in addition to acquiring fuel-efficient and alternative fuel vehicles. In this plan, Army reported that between fiscal year 2011 and fiscal year 2015, it reduced its fleet s size by 16,400 vehicles. According to GSA officials, at times, an agency may reduce its petroleum use and greenhouse gas emissions more by replacing large, inefficient vehicles (such as older, large trucks) with more efficient vehicles (such as new small trucks or sedans) even if both are fueled by gasoline than by replacing an already highly efficient conventionally fueled small sedan with an alternative fuel vehicle of the same size. Our review of FAST data suggests that agencies were more successful in reducing the number and size of their sedans and size of their sport utility vehicles than in reducing the number or size of their larger vehicles, such as vans and trucks (see fig. 3). For example, overall, the number of sedans in federal fleets fell by 4 percent from fiscal year 2013 to fiscal year 2017, with the number of larger sedans falling by 15 percent and the number of subcompact sedans increasing by 37 percent, suggesting that agencies moved to smaller, more efficient sedans. On the other hand, among passenger vans, there was an increase in heavier, medium-duty passenger vans, and an overall increase in trucks was fueled by an increase in medium- duty trucks, while the number of light-duty trucks fell. In addition to reviewing and changing fleets, fleet managers also reported that encouraging certain driver behavior helped them to meet energy goals. According to VA s, Interior s, and EPA s fleet managers, agencies also reduced greenhouse gas emissions through educating or encouraging drivers to make behavioral changes such as reducing vehicle idling and overall miles traveled. For example, according to EPA fleet managers, certain regional offices have systems in place that facilitate their combining of motor pools and sharing trips to reduce petroleum use. As previously indicated, according to DOE officials, 11 percent of the reduction in petroleum use for agencies that met the petroleum reduction target was due to an increase in alternative fuel use. 
According to DOE officials, the balance of petroleum reduction for these agencies was achieved through fuel efficiency improvements and behavioral changes, including reduction in vehicle miles traveled. <2.2. Overall Composition of Federal Fleets Includes More Flex-Fuel Vehicles and Hybrids, and Electric Vehicle Numbers Remain Low> As a result of agencies efforts to meet federal fleet energy requirements, the number of alternative fuel vehicles in federal fleets has grown steadily over the past 10 years, largely due to an increase in flex-fuel vehicles. The number of alternative fuel vehicles in federal fleets increased by 65 percent from fiscal year 2008 through fiscal year 2017, according to FAST data (see fig. 4). During that same time, the number of conventional petroleum-fueled vehicles decreased by 19 percent. As a result, as of fiscal year 2017, alternative fuel vehicles made up about 38 percent of approximately 604,000 total domestic vehicles in the fleet. Most of the alternative fuel vehicles in the federal fleets about 87 percent in fiscal year 2017 are flex-fuel vehicles. As previously mentioned, while flex-fuel vehicles can contribute to reducing petroleum consumption when E85 is used, data show that the usage of E85 continues to fall (see fig. 2), thus reducing the potential environmental benefits of acquiring these vehicles. While the majority of flex-fuel vehicles offered to federal agencies by GSA in fiscal year 2017 did not cost more for agencies to acquire than equivalent petroleum-fueled vehicles, some flex fuel vehicles did cost more for agencies to acquire, with, for example, a few sport-utility flex-fuel vehicles costing between $4,000 and $7,000 more than comparable vehicles. Within the past decade, the number of hybrid vehicles in federal fleets also increased significantly, from almost 1,800 in fiscal year 2008 to over 25,000 in fiscal year 2017. Hybrids accounted for about 11 percent of all alternative fuel vehicles in fiscal year 2017. Finally, while agencies have acquired some electric vehicles, the number of electric vehicles in federal fleets has remained very small consisting of just over 1,000 plug-in hybrid electric and battery electric vehicles in fiscal year 2017. <3. Several Challenges May Limit Further Progress toward Fleet Energy Goals> In spite of federal agencies reported general success in meeting fleet energy requirements, several challenges may hinder agencies further progress towards the goals of reducing federal fleets petroleum use and greenhouse gas emissions. First, although acquiring electric and hybrid vehicles could help agencies meet the current fleet energy goals to reduce petroleum use and per-mile greenhouse gas emissions in federal fleets, depending on where and how the vehicles are used, costs can be prohibitive. The costs of these vehicles and charging infrastructure make it challenging for agencies to acquire them on a large scale. Second, a lack of fuel and infrastructure availability limits agencies use of alternative fuel, specifically E85. Third, agency officials we interviewed stated that a continuing need for larger vehicles to perform certain tasks limits the number of low greenhouse gas vehicles agencies can acquire and thus the potential to reduce petroleum use and greenhouse gas emissions. <3.1. Higher Costs Pose Challenges to Acquiring Electric and Hybrid Vehicles> Acquiring electric and hybrid vehicles could help agencies meet fleet energy goals, but higher costs pose challenges. 
As described previously, prior to May 2018, federal agencies were under a directive to acquire zero-emission (electric) and plug-in hybrid electric vehicles for 20 percent of all new agency passenger vehicle acquisitions by December 31, 2020, and for 50 percent by December 31, 2025. Some of the discussions we had with agency officials about challenges related to acquiring electric vehicles took place while this directive was in effect. In part because guidance on the new Executive Order had not been issued at the time we spoke with them (although it was subsequently issued in April 2019), agency officials we spoke with after this directive was revoked said they were uncertain of the effect of the new Executive Order and would continue to try and meet fleet energy goals until new guidance was issued. Compared to other alternative fuel vehicles available from GSA, battery electric, plug-in hybrid electric, and hybrid electric vehicles can offer potential to further general federal goals to reduce petroleum use and tailpipe greenhouse gas emissions. Specifically, battery electric vehicles consume no petroleum and produce zero tailpipe greenhouse gas emissions, while plug-in hybrid electric vehicles have the potential to consume very little gasoline, with a correspondingly small amount of tailpipe greenhouse gas emissions from the gasoline used, and hybrid electric vehicles offer higher fuel economy than many other vehicles. According to DOE s Fleet Management Handbook, replacing a petroleum- fueled vehicle with a battery electric vehicle provides a 100 percent reduction in that vehicle s use of petroleum. In addition, according to DOE officials, for purposes of tracking agencies compliance with the now- revoked Executive Order s fleet requirements, battery electric vehicles were considered emissions free, and plug-in hybrids were considered emissions free when run on electricity. The now-revoked fleet requirements did not consider emissions generated during the production of fuel or the manufacturing process. The Council on Environmental Quality guidance states that emissions generated from the production of electricity are not counted toward agencies fleet emissions because those emissions are assumed to be captured in each agency s facility electricity reporting and their annual greenhouse gas data report. Counting them as fleet emissions would result in double counting. Nevertheless, to fully consider the potential environmental benefits of alternative fuel vehicles, these emissions would need to be considered and compared to the emissions generated by the production of fuel and manufacturing process of conventionally fueled vehicles. From a full life-cycle perspective, greenhouse gases emitted during the manufacturing of a vehicle affect a vehicle s overall emissions. Accurately determining the amount of greenhouse gas emitted during the manufacturing of different types of vehicles is complicated, and we found no federal source that publishes this information. 
However, a study by the International Energy Agency found that manufacturing battery electric vehicles results in higher greenhouse gas emissions than manufacturing conventional internal combustion engine gasoline-fueled vehicles but that over the typical life of an electric vehicle, the elimination of tailpipe emissions results in these vehicles having lower greenhouse gas emissions overall than conventional gasoline-fueled vehicles, with the amount of emissions savings depending on the carbon intensity of power generation used to charge the vehicles. Another study, by Argonne National Laboratory, considered mid-size light-duty vehicles. According to this study, on a life-cycle basis including emissions related to the manufacture and disposal of the vehicles, the production of the fuel, and the use of fuel to operate the vehicle hybrid electric vehicles produced about 25 percent fewer greenhouse gas emissions per mile than conventionally fueled gasoline vehicles, plug-in hybrid electric vehicles produced about 26 to 29 percent fewer greenhouse gas emissions per mile than conventionally fueled gasoline vehicles, and battery electric vehicles produced about 26 to 34 percent fewer greenhouse gas emissions per mile. The study also considered the life-cycle greenhouse gas emissions for flex fuel vehicles run on E85, finding them to produce about 20 percent fewer greenhouse gas emissions per mile than a conventionally fueled gasoline vehicle. This study also considered the costs of alternative fuel vehicles in light of their potential to reduce greenhouse gas emissions. It estimated that in 2013 dollars and, based on high volume production, a 15-year vehicle life-cycle, and a 5 percent discount rate, the greenhouse gas emissions avoided by using hybrid-electric vehicles compared to a conventional gasoline fueled vehicle cost $240 per metric ton. For plug-in hybrid electric vehicles, the cost is between $390 and $860 per metric ton of greenhouse gas emissions avoided, and for battery electric vehicles the cost is from $1,090 to $2,600 per metric ton of greenhouse gas emissions avoided. For flex fuel vehicles, the cost was estimated to be $170 per metric ton of greenhouse gas emissions saved. Based on these findings, when an agency replaces a petroleum fueled vehicle with a battery electric vehicle, a plug-in hybrid electric vehicle, or a hybrid electric vehicle, it can reduce its petroleum use and greenhouse gas emissions, though the extent of its reduction depends on the type of vehicle the agency acquires, and the type of vehicle it replaces, as well as many other factors. However, it may currently be paying more for such vehicles from a life-cycle perspective. In the time since this study was published, according to DOE, battery costs have continued to fall, and these vehicles may be cost competitive in the near future. For battery-electric vehicles and plug-in hybrid electric vehicles, which must be regularly charged from the electrical grid, one consideration included in the Argonne National Lab study s analysis of how much greenhouse gasses are emitted through the vehicle s operation is the level of greenhouse gas emissions associated with electricity generation. Such emissions affect the extent to which using electricity instead of gasoline to fuel vehicles reduces the amount of greenhouse gas emissions generated into the atmosphere and this varies by location. 
While the Argonne National Laboratory study described above based its analysis on the average mix of electrical generation in the U.S., the amount of greenhouse gas emissions associated with electricity generation in the U.S. actually varies widely depending on the sources used to generate the electricity. These sources vary depending on the region of the country where the electricity is produced. For example, the production of electricity from burning coal causes relatively high greenhouse gas emissions, while the production of electricity from solar or wind causes little to no greenhouse gas emissions. As a result, a battery electric vehicle charged in a region with low coal electricity generation, such as the Northeast, whose electricity generation mix includes about 2.6 percent coal, will result in greater greenhouse gas emissions reductions than one charged in a region where most electricity generation comes from coal, such as the upper Midwest, which uses about 62.3 percent coal (see fig. 5). These figures are meant to illustrate the differences in electricity generation, and they do not account for other factors that may affect vehicles' efficiency and thus the extent to which they lead to reductions in emissions. For example, in extreme weather conditions, the range of battery-electric vehicles can be reduced, resulting in more frequent charging, and thus more electricity use. Further, the use of air conditioning or other components in the vehicle can also affect fuel efficiency. We analyzed emissions data on vehicles operating in different parts of the country and found that, when considering both tailpipe and fuel-production greenhouse gas emissions, electric and plug-in hybrid electric vehicles produce fewer greenhouse gas emissions than an equivalent gasoline-only vehicle in both higher-coal and lower-coal electricity generation regions. In higher-coal electricity generation regions, however, electric vehicles can offer less than or about an equivalent reduction in greenhouse gas emissions compared to comparably sized hybrid electric vehicles, whereas in lower-coal electricity generation regions, electric vehicles offer the opportunity to reduce greenhouse gas emissions to a greater extent than comparably sized hybrid electric vehicles. In 2009, we recommended that DOE develop guidance to help agencies plan to acquire the right mix of vehicles that can meet requirements while also taking into account the energy sources used to generate the electricity used to fuel electric vehicles. In response, DOE issued guidance that recommended agencies consider, among other things, whether coal-based electricity is used in an area in order to evaluate the location and emissions-reduction potential of using such vehicles. However, of the five case study agencies we spoke to, no agency officials said that they specifically worked to locate electric vehicles where the production of electricity was likely to produce fewer greenhouse gases. Because greenhouse gas emissions from the production of electricity were not considered in the now-revoked executive order's requirements and, according to the case study agency officials, were not stressed by GSA in discussions about increasing electric vehicles, the officials stated that this had not been a focus of their efforts. Instead, they stated that they focused on locating electric vehicles where they were able to install electric charging stations and had a mission need that fit with the use of electric vehicles.
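To illustrate how the electricity generation mix changes the emissions attributable to charging, the sketch below combines a hypothetical vehicle efficiency with illustrative grid emission rates; the specific numbers are assumptions for demonstration, not figures from our analysis or from DOE.

# Illustrative assumptions (not from GAO's analysis).
EV_KWH_PER_MILE = 0.30          # electricity a battery electric sedan might use per mile
GASOLINE_G_CO2_PER_MILE = 410   # rough tailpipe plus fuel-production estimate for a gasoline sedan

# Hypothetical grid emission intensities, in grams of CO2 per kilowatt-hour delivered.
grid_intensity = {
    "low-coal region": 250,
    "high-coal region": 750,
}

for region, g_per_kwh in grid_intensity.items():
    ev_g_per_mile = EV_KWH_PER_MILE * g_per_kwh
    savings = 1 - ev_g_per_mile / GASOLINE_G_CO2_PER_MILE
    print(f"{region}: {ev_g_per_mile:.0f} g CO2/mile, about {savings:.0%} below the gasoline sedan")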
According to some agency officials, the higher acquisition costs associated with electric vehicles and the costs of installing charging infrastructure have hindered the extent of their integration into federal fleets. (See app. III for a more detailed discussion of life-cycle costs of electric vehicles.) As part of an effort to further the overall goal of reducing greenhouse gas emissions, the now-revoked 2015 Executive Order called for agencies to increase their acquisition of zero-emission vehicles (battery-electric vehicles) or plug-in hybrid electric vehicles by 2020. While all five case study agencies had acquired small numbers of electric vehicles and associated charging infrastructure, two fleet managers said that the cost challenges would have made it difficult to acquire sufficient numbers of vehicles to meet the Executive Order's requirements by 2020, had the Executive Order not been revoked. To meet the revoked electric-vehicle acquisition requirements, federal agencies would have had to acquire close to 3,000 battery electric or plug-in hybrid electric vehicles per year starting in 2020, according to GSA officials. According to data provided by GSA, in fiscal year 2017, agencies purchased 373 battery electric or plug-in hybrid electric vehicles. Just over half of these vehicles were plug-in hybrid electric minivans, with the rest being sedans. The purchase of these 373 battery electric or plug-in hybrid electric vehicles, plus an additional 4,584 hybrid electric sedans, made up about 31 percent of the just over 16,000 sedans and minivans acquired that year and increased the total amount agencies spent purchasing sedans and minivans by about $10.5 million (see table 3), or about 3 percent of the total of approximately $314 million spent purchasing sedans and minivans overall. Among the hybrid electric, battery electric, and plug-in hybrid electric sedans and minivans, federal agencies purchased the largest numbers of hybrid electric sedans, which had the smallest additional average per-vehicle costs as compared to comparably sized gasoline or flex-fueled vehicles. As a result, agencies spent an average amount of about $2,000 more per battery electric, plug-in hybrid electric, and hybrid electric vehicle acquired, although the average amount per vehicle varied widely by size and type of vehicle acquired. As described below, some of the higher acquisition costs of these alternative fuel vehicles will be recovered due to lower maintenance and fuel costs of the vehicles over time. However, we were unable to get data on federal agencies' actual life-cycle costs of these vehicles because, according to agency officials, agencies had not tracked these data consistently. Of the 29 agencies we surveyed, 11 identified acquisition costs as a challenge to acquiring and using electric vehicles. In addition, 20 of the 29 agencies identified charging infrastructure as a key challenge to acquiring electric vehicles, citing the costs of installation among other challenges. In discussions with case study agencies, federal officials did not cite the acquisition costs of flex-fuel vehicles as a challenge to acquiring these vehicles. Some officials stated that these vehicles' relatively low costs compared to other alternative fuel vehicle options were one reason that agencies have largely met the alternative fuel vehicle acquisitions requirement through the acquisition of flex fuel vehicles.
GSA's purchasing data did not provide sufficient detail for us to analyze the extent to which agencies paid more to purchase flex-fuel vehicles. According to GSA's leasing data on GSA-leased vehicles, for fiscal year 2017, agencies acquired over 20,600 alternative fuel vehicles, of which over 14,700 were flex-fuel vehicles leased at no additional cost. However, agencies also acquired 1,268 flex-fuel vehicles that, on average, had an additional cost of about $2,300, with the result that agencies spent a total of about $2.9 million more to lease these vehicles than if they had acquired equivalent gasoline-fueled vehicles. When agencies choose to lease an alternative fuel vehicle that is more expensive than a comparable conventionally fueled vehicle, under law, GSA must spread that difference in cost, sometimes referred to as the incremental cost, across the agency's entire fleet during the year the alternative fuel vehicle is acquired. According to GSA officials, this requirement makes it easier for agencies to incorporate higher-priced alternative fuel vehicles, such as battery electric or plug-in hybrid electric vehicles, into their fleets. The difference in cost between a plug-in hybrid electric or battery electric vehicle and an equivalently sized conventionally fueled vehicle can vary depending on the amount GSA has negotiated with the dealer to pay for a particular vehicle. For example, GSA's lease offerings showed that for fiscal year 2019, agencies would have to pay anywhere from about $5,300 to $19,400 more to acquire a plug-in hybrid electric vehicle than to acquire an equivalently sized conventionally fueled vehicle, and approximately $16,100 to $18,800 more to acquire a battery electric vehicle than an equivalently sized conventionally fueled vehicle. Officials from two case study agencies told us that because GSA spreads the additional costs over an agency's entire leased fleet, the costs may not affect the agency's budget much as long as the agency acquires only a small number of these vehicles. For example, according to a local DOT official, the acquisition of two battery electric Ford Focuses added an additional $15 to the cost of each vehicle in its fleet. While electric vehicles have higher acquisition costs, they generally have lower fuel and maintenance costs than conventionally fueled vehicles, and as a result, GSA charges agencies lower mileage rates for these vehicles. GSA also charges agencies lower mileage rates for hybrid vehicles, based on their higher fuel efficiency. Of the agencies we surveyed, 14 of the 29 identified lower fuel and maintenance costs as a key benefit of acquiring battery electric or plug-in hybrid electric vehicles. Because of these lower mileage rates, the more miles an agency drives a leased electric vehicle, the more the overall cost difference to the agency between an electric vehicle and a conventionally fueled vehicle will shrink. However, our analysis of GSA's leasing rates showed that over 5 years, the typical life of an electric vehicle lease, and with average mileage, these lower mileage costs would not make up for the higher acquisition costs of these vehicles (see fig. 6). GSA officials and several fleet managers also told us that in their experience with leasing electric vehicles, lower utilization coupled with the lower mileage costs GSA charges agencies had not made up for the significantly higher acquisition cost over the life of the leases.
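The trade-off described above can be framed as a simple break-even calculation: the higher monthly lease rate for an electric vehicle is recovered only if the vehicle is driven enough miles at the lower per-mile rate. The sketch below uses hypothetical lease and mileage rates, not GSA's actual rates, to show the form of that calculation.

```python
# Hypothetical 5-year leased-vehicle cost comparison; the rates below are illustrative, not GSA's.

LEASE_YEARS = 5  # typical life of an electric vehicle lease, per the analysis above


def total_lease_cost(monthly_rate: float, mileage_rate: float, annual_miles: float) -> float:
    """Total cost to the leasing agency over the life of the lease."""
    return monthly_rate * 12 * LEASE_YEARS + mileage_rate * annual_miles * LEASE_YEARS


# Assumed inputs for an equivalently sized conventional sedan and a battery electric sedan.
conventional = {"monthly_rate": 230.0, "mileage_rate": 0.22}  # dollars; dollars per mile (assumptions)
electric = {"monthly_rate": 450.0, "mileage_rate": 0.12}      # dollars; dollars per mile (assumptions)

annual_miles = 9_000  # assumed average annual utilization

for label, rates in (("Conventional sedan", conventional), ("Battery electric sedan", electric)):
    cost = total_lease_cost(rates["monthly_rate"], rates["mileage_rate"], annual_miles)
    print(f"{label}: ${cost:,.0f} over {LEASE_YEARS} years")

# Break-even utilization: miles per year at which per-mile savings offset the higher monthly rate.
rate_premium_per_year = (electric["monthly_rate"] - conventional["monthly_rate"]) * 12
per_mile_savings = conventional["mileage_rate"] - electric["mileage_rate"]
print(f"Break-even utilization: {rate_premium_per_year / per_mile_savings:,.0f} miles per year")
```

With these assumed rates, the electric sedan would need to be driven more than 26,000 miles per year before the lower mileage rate offsets the higher monthly rate, which is consistent with the finding that, at average mileage, the mileage savings do not recover the acquisition premium over a 5-year lease.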
The GSA lease costs consider the lifetime costs of the vehicles, including fueling and maintenance and eventual disposal of the vehicle through auction. The five case study agencies we spoke with did not use a life-cycle analysis to compare costs across various vehicle types when making vehicle procurement decisions. However, all five case study agencies told us they analyze life-cycle costs to inform their lease versus purchase decisions. See appendix III for more discussion on life-cycle costs. Fleet managers at three of the case study agencies we spoke with before the Executive Order was revoked told us that they had worked to increase the number of electric vehicles in their fleets, in spite of the higher costs. Officials at a few agencies stated that when the budget allowed, they would try to acquire electric vehicles. For example, VA officials told us that VA budgets for electric vehicles on the local level, and that local staff decide how much of their budget will go towards funding of electric vehicles. VA and Interior officials said their acquisitions of electric vehicles had thus far not greatly affected their budgets, but within Interior, the fleet managers for Fish and Wildlife Services and the Bureau of Indian Affairs said cost could become an issue if more electric vehicles were to be acquired. GSA Office of Governmentwide Policy officials told us that agencies could fit the higher costs of acquiring electric vehicles into their budget by reducing their fleet size and acquiring a few of these more expensive vehicles. Further, GSA has introduced several initiatives to help agencies finance alternative fuel vehicle acquisitions, including specific electric vehicle initiatives. For example, in fiscal year 2016, according to an Army fleet manager, Army acquired electric vehicles through GSA at a price GSA had negotiated that was equal to the price for comparably sized petroleum fueled vehicles. However, this pricing was only offered in 2016 as part of a one-time deal that GSA negotiated with the vehicle manufacturer. In addition to the costs of purchasing or leasing electric vehicles, agencies described challenges balancing the costs of purchasing and installing charging stations with other competing priorities. Agency officials told us they generally prefer charging stations, such as Level 2 stations, that can charge a vehicle in a few hours to allow vehicles to be used multiple times a day. These types of Level 2 charging stations can cost anywhere from about $400 to $8,000 depending on the model and its features and do not include installation costs. Generally, the less expensive models may not include features such as energy monitoring that tracks electricity use or communication capabilities that enables data communication that some fleet managers said they view as necessary to manage and track the performance and costs of electric vehicles. We were unable to determine the total amount that agencies had spent to acquire existing charging stations to date because data were not available at a sufficient level of detail. Installation costs also varied widely, depending, among other things, on the complexity of the installation, such as the need for trenching or upgrading the electrical service. For example, officials from VA told us that sometimes in order to install charging stations, they have had to trench an entire parking lot to ensure the units have the necessary power to charge its vehicles which can be expensive. 
DOE estimates that to install a charging station it costs about $100 per foot to trench through concrete, lay conduit, and refill. As a result, it could cost up to $10,000 to trench 100 feet. Further, the Veterans Health Administration indicated that funding for purchasing and installing charging stations at their facilities had to compete with other priorities. Specifically, the costs for charging stations came out of the facilities capital-planning budget, which also includes funding for veterans care. Similar to determining what agencies have spent on charging stations, we were also unable to determine what total installation costs have been to date because of data limitations. Although many federal facilities are not equipped with fast charging infrastructure and the number of public charging stations remains limited, federal agencies had begun taking steps to install more charging stations. Prior to the 2015 Executive Order being revoked, agencies had recently begun to install more of these stations as part of their efforts to prepare for the requirement that 20 percent of light-duty vehicle acquisitions be zero emission (electric) vehicles or plug-in hybrid vehicles by 2020. We found 12 out of the 29 agencies we surveyed had installed more than 20 charging stations, while 14 others had installed at least one charging station, and only 3 agencies had not installed any charging stations. According to past Strategic Sustainability Performance Plans, agencies had started to implement strategies to increase their electric- vehicle infrastructure. For example, according to EPA s fiscal year 2016 plan, it planned to conduct a survey of its parking facilities to develop a charging infrastructure policy and plan, including identifying potential locations for charging stations. Similarly, Army officials described taking additional steps, including sending specialized teams to several of its bases to determine the optimal and least costly placement of its charging stations. However, fleet managers also told us they were having difficulties installing electric vehicle infrastructure, in particular at leased facilities. Specifically, several agencies fleet managers told us that it was difficult or impossible to install charging stations at leased properties unless their installation was negotiated into the lease from the beginning. In part because guidance on the new Executive Order had not been issued at the time we last spoke with agency officials on this issue, the extent to which the revoking of the directive related to acquiring electric vehicles would affect agencies efforts to install charging infrastructure was unclear. <3.2. Availability Limits Agencies Use of Alternative Fuel> Fleet managers told us that another challenge that may limit progress toward energy goals was a lack of fuel availability in particular the availability of E85 which made it difficult to fuel flex-fuel vehicles with alternative fuel. Of the 29 agencies, 20 identified the availability of E85 as a challenge to using alternative fuel in flex-fuel vehicles. While some agencies still largely rely on flex-fuel vehicles to meet alternative fuel vehicle acquisition requirements, E85 can only be found at about 2 percent of all refueling stations, according to GSA. To help agencies locate alternative fuel stations, such as those with E85, DOE developed an Alternative Fuel Station Locator tool that maps nearby refueling stations. 
VA and Interior officials said they routinely use the tool to check for accessible alternative fuel stations prior to acquiring an alternative fuel vehicle. However, outside the rural Midwest and Texas, E85 may be difficult to find. In addition, when E85 is available, agency officials from two case study agencies said these locations may be mislabeled, out of service, or too far from the vehicle s operating location. We reported similar concerns in 2011; specifically, that while agencies acquired primarily flex-fuel vehicles, the low availability of E85 resulted in a majority of flex-fuel vehicles receiving a waiver from the requirement to use alternative fuel, and as a result, agencies refueled their flex-fuel vehicles with petroleum. Another difficulty fleet managers face with regard to increasing the use of E85 is that, even when E85 is available and conveniently accessible, drivers still may refuel with gasoline even though federal agencies have undertaken a number of efforts to encourage its use. As we mentioned previously, to help agencies track their fleet fuel purchases, DOE developed the FLEETDASH system that can identify opportunities where drivers could have refueled with E85 within 5 miles of their location but, instead, chose not to because they were unaware or unwilling. Some agency officials described using this system to try to increase alternative fuel use. For example, VA officials told us they use FLEETDASH to track and identify opportunities to increase their alternative fuel use. In another example, EPA officials told us that to increase their use of alternative fuels, drivers at one location started to print out maps that identified alternative fuel refueling locations near their routes. DOE recently estimated that if federal agencies refueled flex-fuel vehicles with E85 every time they refueled within 5 miles of an E85 station, the use of E85 would quadruple, and agencies could decrease their use of petroleum by 10 percent and reduce greenhouse gas emissions by a further 9 percent. <3.3. Agencies Need for Larger Vehicles Limits the Number of Low Greenhouse-Gas-Emitting Vehicles They Can Acquire> Another challenge that may limit further progress towards energy goals is that agencies continue to need larger, less efficient vehicles for many of their mission needs, according to many agency officials. According to FAST data, about 85 percent of agencies fleets in fiscal year 2018 was comprised of sport-utility vehicles, passenger vans, and trucks (as illustrated previously in fig. 3). In response to our survey, 26 of 29 agencies indicated that mission or intended use was a very important factor when selecting a vehicle, and officials at some case study agencies told us that they had a significant need for larger vehicles to meet certain missions. For example, Interior operates on large rural Indian reservations where they need pick-up trucks or sport-utility vehicles to navigate the often rugged terrain. In another example, DOT officials stated that to support their national airspace facilities, their vehicles must drive off-road carrying bulky or sensitive tools to go to remote air strips. For these purposes, they look to acquire larger vehicles such as cargo vans and enclosed pickup trucks with 4-wheel drive capabilities or 2- wheel-drive sport-utility vehicles that have the ground clearance to meet their needs. 
GSA and agency officials told us that the vehicles designated as low greenhouse-gas-emitting vehicles are typically smaller vehicles and in some cases are not suitable for these mission needs. For example, GSA offered one 4x2 hybrid-electric sport-utility vehicle and one 4x4 plug-in hybrid-electric sport-utility vehicle in fiscal years 2017 and 2018. In fiscal year 2019, additional vehicles have been added. While these options are considered low greenhouse-gas-emitting vehicles, an agency official told us that they have a variety of other characteristics that may make them less desirable for certain missions for example, they may cost significantly more than other options to acquire, or, in the case of the plug-in, rely on charging infrastructure that the agency may not have in the location where the vehicle is needed. According to VA staff, there are not enough low greenhouse gas vehicle options to ensure fleet managers can meet mission goals and low greenhouse-gas-emitting vehicle acquisition requirements. For example, VA relies on minivans to transport patients and deliver health care services; however, no gasoline or E85- fueled minivans offered by GSA in fiscal year 2017 were designated as low greenhouse-gas-emitting vehicles. Furthermore, in some cases, when an agency has determined it needs a larger vehicle, fleet managers told us they are likely to choose a flex-fuel vehicle because these vehicles are offered in larger, more rugged models. These vehicles are often not designated as low greenhouse-gas-emitting vehicles but count towards the alternative fuel vehicle acquisition requirements. In contrast, officials representing four case study agencies stated that when the mission need is suitable for a sedan, the agency seeks to acquire low greenhouse-gas-emitting vehicles. GSA offers a number of alternative fuel vehicle options for sedans, including hybrid, battery electric, and plug-in electric hybrid vehicles. Further, many GSA offered gasoline-fueled sedans are also designated as low greenhouse-gas- emitting vehicles. Officials at one agency told us, when possible, the agency acquires alternative fuel sedans such as flex-fuel vehicles, hybrid vehicles, or, in a few cases, electric vehicles. Furthermore, officials at this agency stated that when they are acquiring a vehicle where alternative fuel is not readily available, they will sometimes acquire a low greenhouse-gas-emitting vehicle that runs only on gasoline. <4. Agency Comments> We provided a draft of this report to Army, DOE, DOT, EPA, GSA, Interior, and the VA for their review and comment. In response, Army, DOE, EPA, GSA, Interior, and VA provided technical comments which were incorporated as appropriate. Army and DOT reviewed the report but did not provide any comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of the Departments of Defense, Energy, Interior, and Veterans Affairs, and the Administrators of GSA and EPA. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff has any questions about this report, please contact me at 202-512-2834 or vonaha@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made major contributions to this report are listed in appendix IV. 
Appendix I: Survey of Federal Agencies on Acquiring Alternative Fuel Vehicles In April 2018, we initiated a survey of 29 federal agencies fleet managers. The questions we asked and the aggregate results of the responses to the closed-ended questions are shown below. Our survey was comprised of closed- and open-ended questions. We do not provide results for the open-ended questions. We received 29 completed survey responses a response rate of 100 percent. 1. What is the process your agency follows when acquiring a new vehicle to replace a vehicle? Please list (in numerical order) the sequence of events from deciding to acquire a vehicle to actually acquiring it. To the extent that the process is different when adding an additional vehicle, please describe that as well. (Written responses not included) 2. At what point in the above process, does your agency consider whether to acquire an alternative fuel vehicle or a petroleum fuel vehicle when replacing a vehicle? To the extent that the process is different when adding an additional a vehicle, please describe that as well. 3. (Written responses not included)In the process to replace a vehicle described above, does your agency consider vehicle life-cycle cost information as part of a lease versus purchase analysis? 3a. If yes, does your agency consider the following factors in their vehicle life-cycle cost analysis? Please check one answer for each row. 4. In the process to add an additional vehicle, does your agency consider vehicle life-cycle cost information as part of a lease versus purchase analysis? 4a. If yes, please describe how, if at all, the above lease versus purchase analysis differs in the case of adding an additional vehicle, and in particular any differences in the type of life-cycle cost information considered in the case of adding a vehicle. (Written responses not included) 5. Excluding the lease versus purchase analysis, does your agency conduct any other vehicle life-cycle cost analysis at any other point in the vehicle replacement process described in Question 1? 5a. Does your agency compare the life-cycle costs of multiple vehicle types prior to selecting a type of vehicle to acquire? 5b. Does your agency perform a cost analysis comparing life-cycle costs of acquiring a non-electric vehicle to costs of acquiring an electric vehicle? 5c. If no, please describe how your agency considers the results of this life-cycle cost analysis excluding the lease versus purchase analysis. (Written responses not included) 5d. What factors below does your agency consider in this life-cycle cost analysis? Please check one answer for each row. Useful life (number of years it is expected to be used) 6. In the process to add an additional vehicle, does your agency consider vehicle life-cycle cost information at any point outside the lease versus purchase analysis? 6a. If yes, please describe how, if at all, any life-cycle cost analysis described in question 5 differs in the case of adding an additional vehicle, and in particular any differences in the type of life-cycle cost information considered in the case of adding a vehicle. (Written responses not included) 7. Has your agency ever determined that an electric vehicle is the most appropriate vehicle to meet the agency s needs? 7a. If yes, please provide some examples of those situations and how your agency determined the type of electric vehicle (i.e. electric vehicle, plug-in electric hybrid vehicle, hybrid electric, etc.). (Written responses not included) 8. 
How important are the following factors when determining whether the vehicles your agency acquires will be alternative fuel vehicles or petroleum fuel vehicles? Mission (The expected function or purpose of the vehicle) Availability of alternative fuel vehicles Other (specify in box below) For agencies that indicated there were other factor(s), we provided an open-ended question that requested a description of the factor(s) and 3 agencies provided descriptions of other factors not shown here. 9. What are the benefits, if any, (including any related to costs, maintenance, environment, safety, federal requirements, etc.) of acquiring and using each of the following types of alternative fuel vehicles relative to petroleum fuel vehicles? 9a. Electric vehicles (EVs) and plug-in hybrid electric vehicles (PHEVs) that use battery power (Written responses not included) 9b. Hybrid electric vehicles (HEVs) powered by an internal combustion engine (Written responses not included) 9c. Flex Fuel Vehicles (FFVs) designed to run on E85 (Written responses not included) 9d. Other alternative fuel vehicles (Written responses not included) 10. What are the challenges, if any, (including any related to costs, maintenance, environment, safety, federal requirements, etc.) of acquiring and using each of the following types of alternative fuel vehicles relative to petroleum fuel vehicles? 10a. Electric vehicles (EVs) and plug-in hybrid electric vehicles (PHEVs) that use battery power (Written responses not included) 10b. Hybrid electric vehicles (HEVs) powered by an internal combustion engine (Written responses not included) 10c. Flex Fuel Vehicles (FFVs) designed to run on E85 (Written responses not included) 10d. Other alternative fuel vehicles (Written responses not included) 11. How many electric charging stations has your agency installed? 12. Has your agency encountered any challenges while trying to site and install electric charging stations? 12a. If yes, what were those challenges and how, if at all, have you been able to overcome them? (Written responses not included) 13. Has your agency encountered any challenges related to acquiring and using alternative fuel vehicles and alternative fuel while trying to meet federal fleet energy requirements, including Executive Order 13693? 13a. If yes, what were those challenges and how, if at all, have you been able to overcome them? (Written responses not included) 14. Has your agency taken steps to prepare for Executive Order 13693 s requirement that 20 percent of all new passenger vehicles be zero emission vehicles or plug-in hybrids by 2020? 14a. If yes, please provide some examples of the steps you have taken. (Written responses not included). 15. Has the availability of alternative fuel vehicles from GSA s inventory ever prevented your agency from acquiring an alternative fuel vehicle? 15a. If yes, please describe what vehicle you were interested in and why it was not available. (Written responses not included) Appendix II: Objectives, Scope, and Methodology You asked us to review the costs and challenges related to federal agencies meeting the different federal energy requirements for vehicle fleets. This report addresses: (1) how agencies meet fleet energy requirements and how their efforts changed agencies fleets and (2) challenges federal agencies faced related to furthering fleet energy goals. The report also includes information on the extent agencies consider life- cycle costs when selecting vehicles. 
To determine the extent to which federal agencies reported meeting fleet energy requirements and the composition of federal agencies fleets, we analyzed data from the Federal Automotive Statistical Tool s (FAST) database on the composition and fuel use of federal agencies fleets from fiscal years 2008 through 2017, the most current data available at the time of our review. Annually federal agencies must submit data on all of their non-tactical vehicles to this database, which the General Services Administration (GSA) and the Department of Energy (DOE) established in 2000 and is used to satisfy statutory and regulatory reporting requirements. We reviewed the data relative to selected statutory requirements and directives that were in effect for fiscal year 2017. Specifically, we analyzed these data to identify the total numbers of alternative fuel vehicles by fuel type and vehicle size in federal fleets and the changes in alternative fuel use during this time period. DOE provided us fleet performance data on the extent to which each of the agencies subject to these federal requirements met requirements or directives to acquire alternative fuel vehicles, use alternative fuel, and reduce petroleum use and per-mile greenhouse gas emissions for fiscal year 2017. In addition, the Environmental Protection Agency (EPA) reported on the extent to which agencies were meeting the requirement to acquire low greenhouse-gas-emitting vehicles for fiscal year 2017, based on the same database. To assess the reliability of these data, we interviewed DOE officials on how the data were checked for accuracy and collected written responses from them on how the data were collected, maintained, analyzed and presented. This assessment included how DOE flags suspicious data, reviews the data, and validates final entries. Based on the information collected, we found the data sufficiently reliable for our purposes of identifying the number of vehicles by type of vehicle and size, and fuel consumed by federal fleets in order to describe how vehicle fleets changed over the past decade. In May 2018, a new Executive Order was issued that revoked a previous Executive Order. The previous Executive Order contained two directives, to acquire zero emission (electric) vehicles and reduce per-mile greenhouse gas emissions by specific targets and specific years. Thus, while the above statutory requirements for fiscal year 2017 remained in effect for fiscal year 2018, the directives related to acquisition of zero emission (electric) vehicles and per-mile greenhouse gas emissions reductions were no longer in effect after May 2018. To understand the different federal energy requirements for vehicles fleets and guidance for agencies to implement them, we reviewed federal statutes, agency rules, and executive orders, and examined DOE and GSA guidance on the various statutory and regulatory requirements and executive orders. For example, we reviewed DOE s federal fleet management handbook intended for agencies to select and implement strategies to reduce fleet greenhouse gas emissions and use of petroleum, and EPA guidance on how to meet the requirement to acquire low greenhouse-gas-emitting vehicles, among other documents. In April 2019, CEQ and OMB issued implementing instructions for the Executive Order. 
The implementing instructions emphasized that agencies should follow the statutory requirements that are still in place and annually identify targets for petroleum reduction and increases in alternative fuel use as part of agencies Strategic Sustainability Plans. To broaden our understanding of agencies efforts to meet requirements, we also identified five case study agencies Department of the Interior (Interior), Department of Veterans Affairs (VA), Department of Transportation (DOT), the Army, and the EPA. We selected these case study agencies based on data from the FAST database and their planning documents to represent different sized fleets, a mix of alternative fuel vehicle types, including electric vehicles, and missions with varying vehicle needs. Interior, VA, and Army represented larger fleets, whereas DOT represented medium and EPA small sized fleets. In part, we also chose DOT and EPA to learn about their unique vehicle acquisition processes and plans for acquiring electric vehicles, based on their responses to the survey we conducted, which is described below. From these case study agencies and their sub-agencies, we interviewed agency officials, including fleet managers, to learn their efforts to meet requirements, how they acquired vehicles, and how they managed their fleets. We spoke with these agencies before and after the Executive Order was revoked in May 2018. We also reviewed documents reporting on the extent to which these agencies met fleet energy requirements. The results from the case studies cannot be generalized to make inferences about all agencies. However, we determined that our selection methodology was appropriate for our design and objectives and that this methodology would generate valid and reliable evidence to support our work. To determine any challenges agencies face related to further meeting fleet energy goals, we surveyed 29 federal agencies, and asked them to describe their vehicle acquisition processes, the type of cost analysis done when acquiring an alternative fuel vehicle, and the benefits and challenges of using alternative fuel vehicles. We identified and surveyed agencies that were required to comply with fleet energy requirements and conducted the survey beginning in April 2018. Overall, 31 federal agencies were subject to these requirements in fiscal year 2017; however, as part of our review of Department of Defense (DOD) documentation, we found that its various military departments operate independently and decided to survey Air Force, Army, Marine Corps, and Navy separately. We also excluded the Court Services and Offender Supervision Agency because of the decentralized nature of its fleet and the Defense Agencies within DOD because it was small relative to other DOD agencies. To increase the validity and reliability of our survey, we conducted pretests of the survey with fleet management officials from three federal agencies: VA, Interior, and the Government Accountability Office. We received a 100 percent response rate to our survey. (See app. I for survey results.) To further learn about the challenges of alternative fuel vehicles as well as strategies agencies were using to acquire these vehicles, we interviewed agency officials, including fleet managers, from our five case study agencies, GSA and DOE. In addition, to understand agencies efforts to further fleet energy goals and the challenges they faced, we reviewed the Fleet Management Plans and Strategic Sustainability Performance Plans of each agency we surveyed. 
The strategic sustainability plan is to prioritize agency actions to support the reduction of greenhouse gas emission and other agency wide targets. The fleet management plan is to specifically address how an agency s fleet will meet its greenhouse gas reduction targets, petroleum reduction targets, and other relevant fleet requirements. We also focused our analysis only on selected types of alternative fuel vehicles. Specifically, we included flex-fuel vehicles, hybrid-electric vehicles, plug-in hybrid electric vehicles, and battery electric vehicles because these represent the most numerous in federal fleets or those with specific acquisition requirements. We obtained vehicle cost information from GSA s Alternative Fuel Vehicle Guide that lists the costs and specifications of each alternative fuel vehicle GSA offers, and analyzed cost differences based on fuel type. For the purposes of our analysis, we focused on lease costs, not the costs of purchasing a vehicle from GSA, because in fiscal year 2017, 70 percent of agencies battery electric and plug-in hybrid electric vehicles were leased. To analyze and compare petroleum consumption and greenhouse gas emissions, we judgmentally selected a sample of vehicles from GSA s Alternative Fuel Vehicle Guide and first estimated their annual fuel using DOE s Vehicle Cost Calculator. We then entered their estimated fuel use into Argonne National Laboratory s Alternative Fuel Life-Cycle Environmental and Economic Transportation (AFLEET) tool to estimate well to wheel greenhouse gas emissions. To assess the reliability of these tools, we interviewed and collected written responses from DOE officials regarding the source of the data and the values and assumptions used in its calculations. Based on the information collected, we found that they were sufficiently reliable to estimate petroleum consumption and greenhouse gas emissions. We conducted this performance audit from November 2017 to July 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix III: Agencies Consideration of Costs in Selecting Electric Vehicles Until May 2018 during the time when the previous administration s Executive Order was in effect our case study agencies acquired limited numbers of battery electric and plug-in hybrid electric vehicles with a general understanding that, when the mission need was compatible, acquiring such vehicles was supported by the Executive Order s requirements in spite of their higher costs compared to a conventional vehicle. As of February 2019, the last time we spoke with agency officials on this issue, agency officials stated that they were uncertain of the effect of the new executive order and would continue to try and meet fleet energy goals until new guidance was issued. This guidance was subsequently issued in April 2019, and emphasized that agencies should focus on the statutory requirements while increasing efficiency, optimizing performance, and reducing waste and costs. Until May 2018, when the previous Executive Order was revoked, agencies were expected to increase their acquisition of battery electric or plug-in hybrid electric vehicles. 
Specifically, agencies were to acquire zero-emission or plug-in hybrid electric vehicles for 20 percent of all new agency passenger vehicle acquisitions by December 31, 2020 and for 50 percent of all new agency passenger vehicle acquisitions by December 31, 2025 in addition to meeting the other various federal fleet requirements. According to Department of Energy guidance on this Executive Order, the targets phased in over time to account for the expected future market availability and cost competitiveness of these vehicles. However, as of fiscal year 2017, GSA officials and several fleet managers also told us that in their experiences leasing electric vehicles, the lower mileage costs of these vehicles had not made up for the significantly higher acquisition cost over the life of the leases, a situation that they described as a challenge to significantly increasing the numbers of such vehicles in their fleets. Three case study agencies described acquiring battery electric and plug-in hybrid electric vehicles despite the higher costs largely because of the Executive Order s requirement. Similarly, 10 of the 29 agencies we surveyed identified meeting federal requirements as a key benefit to acquiring electric vehicles. All five case study agencies had acquired small numbers of electric vehicles in light of the Executive Order s requirements. Agency officials described acquiring these vehicles when their mission and budgets allowed for it. For example, a case study agency with a larger fleet told us that mission needs drove its vehicle acquisitions, and there were limited instances in which an electric sedan would have met the agency s mission needs. However, when the agency acquired a vehicle for a mission that could be met with an electric vehicle such as to ferry officials to and from different offices in an area where charging stations were easily accessible it would have been likely to select an electric vehicle, in part, to help the agency take steps towards meeting the Executive Order s acquisition goals. Agency officials at four of the five case study agencies said once they had identified an opportunity to acquire an electric vehicle generally at a location where the mission aligned with the capabilities of an electric vehicle, recharging infrastructure was available, and there were sufficient funds in the budget they would conduct a lease versus purchase analysis to determine whether leasing or purchasing the vehicle would be most the cost effective option, a key aspect of a life-cycle cost analysis. We have previously reported that a life-cycle cost analysis, which considers vehicle costs from the beginning to the end of vehicle ownership, can help agencies make cost-effective decisions. Officials at the fifth case study agency, Army, stated that the agency had conducted an agency-wide analysis that had determined that leasing was always a better option than purchasing for non-tactical vehicles, and so it no longer conducted this analysis on a vehicle-by-vehicle basis. Officials at our case study agencies stated they did not conduct life- cycle cost analysis to compare and contrast different types of vehicles during the acquisitions process because they considered mission and federal fleet energy requirements to be the key drivers of which type of vehicle to select. However, about half of the agencies that responded to our survey stated that they did do so. 
Specifically, 14 of 29 agencies indicated they conduct a life-cycle cost analysis outside of a lease-versus-buy analysis when replacing a vehicle, and 13 of these agencies responded that they did such an analysis to compare the costs of an electric vehicle to a non-electric vehicle. Almost all of these agencies responded that they considered initial acquisition cost, fuel cost, electricity consumption, useful life, maintenance costs, and annual miles, with fewer agencies indicating that they considered other costs, such as depreciation and disposal costs. As of February 2019, the last time we spoke with agency officials on this issue, agency officials stated that they were unsure of how the revoking of the previous Executive Order and implementation of the new Executive Order would affect the extent to which they acquired electric vehicles in the future. Officials at one case study agency stated that, with the uncertainty surrounding the requirement to acquire more of these vehicles in the future, it was likely that they would not acquire electric vehicles due to their higher costs. Another case study agency said that although the Executive Order had been revoked, the agency may continue to acquire a limited number of these vehicles in locations where it had already invested funds for electric vehicle infrastructure. Appendix IV: GAO Contact and Staff Acknowledgments <5. GAO Contact> <6. Staff Acknowledgments> In addition to the individual named above, Alwynne Wilbur (Assistant Director); Eric Hudson (Analyst-in-Charge); Ross Gauthier; Bonnie Ho; Malika Rice; Amy Rosewarne; Kelly Rubin; Andrew Stavisky; and Crystal Wesco made key contributions to this report. Why GAO Did This Study
Since 1988, a series of laws has been enacted and executive orders have been issued related to federal goals of reducing federal fleets' petroleum use and greenhouse gas emissions. For fiscal year 2017, federal agencies were required to (1) acquire certain types of vehicles, (2) use more alternative fuel, and (3) meet targets for reducing petroleum use and per-mile greenhouse gas emissions. Federal agencies were also under a directive to increase acquisitions of zero emission (electric) vehicles.
GAO was asked to review federal agencies' efforts related to these fiscal year 2017 requirements. This report addresses (1) how agencies reported meeting fleet energy requirements and how agencies' efforts changed their fleets, and (2) challenges agencies face related to further meeting fleet energy goals.
To conduct this review, GAO surveyed 29 federal agencies subject to fleet energy requirements and selected 5 agencies, of a variety of sizes and missions, for case studies. The case study results are not generalizable to all agencies. GAO also (1) reported on DOE's and GSA's data on federal fleets for fiscal years 2008 through 2017, including GSA's acquisition and cost data for fiscal year 2017, the most current data available; (2) reviewed DOE's and EPA's information on agencies' performance related to fiscal year 2017 requirements; and (3) interviewed federal officials. The directives to reduce per-mile greenhouse gas emissions and increase acquisitions of electric vehicles were revoked by an Executive Order issued in May 2018.
What GAO Found
In responding to fleet management requirements over the past 10 years, agencies have incorporated an increasing number of alternative fuel vehicles into their fleets. These have been predominantly flex-fuel vehicles, as hybrid and battery electric vehicles continue to make up a small percentage of agencies' fleets (see figure). The Department of Energy (DOE) is responsible for overseeing agencies' compliance by analyzing fleet data. Most agencies reported meeting the fiscal year 2017 requirements to reduce petroleum use and per-mile greenhouse gas emissions. DOE and other agency officials attributed agencies' success in meeting these requirements to (1) acquiring low greenhouse-gas-emitting and alternative fuel vehicles, and (2) improving general fleet management such as by reducing miles traveled.
According to agency officials, three challenges have continued to hinder agencies' efforts to further the goals of reducing federal fleets' petroleum use and greenhouse gas emissions. First, while hybrid and electric vehicles can offer reductions in petroleum use and greenhouse gas emissions, the costs of these vehicles and their charging infrastructure make it challenging for agencies to acquire them on a large scale. According to GSA data, agencies purchased 373 electric vehicles (sedans and minivans) in fiscal year 2017—along with about 4,500 hybrid electric sedans—out of a total of over 16,000 sedans and minivans acquired. In total, agencies spent about $10.5 million more to purchase hybrid or electric vehicles than they would have to purchase comparably sized conventionally fueled vehicles. However, agencies did not consistently track the life-cycle costs of these vehicles. Second, agencies also stated that a lack of fuel and infrastructure availability limits agencies' use of alternative fuel. Third, agency officials stated that a continuing need for larger vehicles limits the number of low greenhouse-gas-emitting vehicles agencies can acquire. |
<1. Background> Federal agencies have varying roles in planning, approving, and implementing infrastructure projects, depending on their missions and authorities. Some federal agencies help fund or construct infrastructure projects, and others grant permits or licenses for activities on private or federal lands. Agencies that manage federal lands, such as the Bureau of Land Management, may construct infrastructure on lands they manage and must also approve projects on those lands. The circumstances under which federal agencies may need to consult with tribes will vary based on the agencies' responsibilities for infrastructure projects as well as an infrastructure project's potential effects on tribes' land, treaty rights, or other resources or interests. Federal agencies are generally responsible for identifying relevant tribes that may be affected by proposed projects, notifying the tribes about the opportunity to consult, and then initiating consultation, as needed. One or more tribes located near or far from the proposed project site may have treaty rights within lands ceded in treaties or interests in lands with cultural or religious significance outside of lands ceded in treaties. Additionally, the Federal Permitting Improvement Steering Council, which was created to make the process for federal approval of certain (large) infrastructure projects more efficient, has issued two annual reports that identified best practices for, among other things, consulting with tribes. These best practices include: training staff on trust and treaty rights; providing clear information on proposals in a consistent and timely manner; holding consultations on lands convenient to tribes when possible; compensating tribes for consultant-like advice; and working to build strong, ongoing dialogue between tribal authorities and agency decision makers, among others. In 2017, Executive Order 13807 directed agencies to implement the techniques and strategies identified by the steering council as best practices, as appropriate. For purposes of this testimony, Native American cultural resources means Native American cultural items as defined by NAGPRA, archaeological resources that are remains of past activities by Native Americans, and historic properties to which Indian tribes attach cultural or religious significance. <2. Examples of Federal Laws and Regulations That Apply to Native American Cultural Resources> ARPA, NAGPRA, and section 106 of the NHPA are examples of federal laws that apply to Native American cultural resources. These laws and their implementing regulations contain many different provisions applicable to Native American cultural resources, including requirements for federal agencies to consult with Indian tribes in certain circumstances. ARPA and NAGPRA, among other things, prohibit trafficking of certain archaeological resources and Native American cultural items, respectively. In August 2018, we reported on federal laws that address the export, theft, and trafficking of Native American cultural items and any challenges in proving violations of these laws. That report included a discussion of ARPA and NAGPRA. In addition, we reported in August 2018 that ARPA and NAGPRA contain provisions prohibiting the removal of archaeological resources and Native American cultural items from certain lands unless certain conditions are met, including consultation with Indian tribes.
Specifically, ARPA prohibits, among other things, the excavation or removal of archaeological resources from public or Indian lands without a permit from the federal agency with management authority over the land. If the federal agency determines that issuance of such a permit may result in harm to, or destruction of, any religious or cultural site, the agency must notify any Indian tribe which may consider the site as having religious or cultural importance and meet, upon request, with tribal officials to discuss their interests. NAGPRA prohibits the intentional removal from, or excavation of, Native American cultural items from federal or tribal lands unless an ARPA permit has been issued and other requirements are met. Specifically, regulations implementing NAGPRA require federal agency officials to take reasonable steps to determine whether a planned activity on federal lands may result in the excavation of human remains or other cultural items. Officials are also required to consult with certain tribes, including any tribe on whose aboriginal lands the planned activity will occur, about the planned activity. After consultation, the federal agency official must complete and follow a written plan of action that includes, among other things, the planned treatment, care, and disposition of human remains and other cultural items recovered. NAGPRA and its implementing regulations also include provisions regarding inadvertent discovery of Native American cultural items on federal and tribal lands. Specifically, the person making the discovery must notify the responsible federal agency or tribal official, stop any activity occurring in the area of the discovery, and make a reasonable effort to protect the human remains or other cultural item discovered. The NAGPRA regulations specify procedures for the agency and tribal officials to take after receiving a notification and when the activity that resulted in the inadvertent discovery can resume. <2.1. Section 106 of the NHPA> In March 2019, we reported that under section 106 of the NHPA and its implementing regulations, federal agencies are required to consult with Indian tribes when agency undertakings may affect historic properties including those to which tribes attach religious or cultural significance prior to the approval of the expenditure of federal funds or issuance of any licenses. The implementing regulations require agencies to consult with Indian tribes for undertakings that occur on or affect historic properties on tribal lands or may affect historic properties to which Indian tribes attach religious or cultural significance, regardless of where the historic properties are located. In addition, these regulations establish the following four-step review process for federal agencies, with tribal consultation required for each step: (1) initiating the section 106 process, (2) identifying historic properties, (3) assessing adverse effects, and (4) resolving adverse effects. <3. Examples of Factors Tribes and Selected Agencies Identified That Impact the Effectiveness of Federal Agencies Consultation Efforts> As we found in March 2019, tribes and selected federal agencies identified a number of factors that hinder effective consultation on infrastructure projects, based on our review of the comments submitted by 100 tribes to federal agencies in 2016 on tribal consultation and our interviews with officials from 57 tribes and 21 federal agencies. Tribes identified a variety of factors that hinder effective consultation. 
For the purposes of this testimony, we are highlighting those factors that more than 60 percent of the 100 tribes identified as concerns. For example: Agencies timing of consultation. Sixty-seven percent of tribes that provided comments to federal agencies in 2016 identified concerns with agencies initiating consultation late in project development stages; according to one tribal official we interviewed, late initiation of consultation limits opportunities for tribes to identify tribal resources near proposed project sites and influence project design. Agency consideration of tribal input. Agencies often do not adequately consider the tribal input they collect during tribal consultation when making decisions about proposed infrastructure projects, according to 62 percent of tribes that provided comments to federal agencies in 2016. Tribes comments included perceptions that agencies consult to check a box for procedural requirements rather than to inform agency decisions. Agency respect for tribal sovereignty or the government-to- government relationship. Other concerns were related to agencies level of respect for (1) tribal sovereignty or (2) the government-to- government relationship between the United States and federally recognized tribes, according to 73 percent of tribes that provided comments to federal agencies in 2016. Comments included concerns that some agency practices are inconsistent with this relationship. For example, tribes cited agencies limiting consultation to tribal participation in general public meetings and sending staff without decision-making authority to represent the U.S. government in consultation meetings. Agency accountability. Sixty-one percent of tribes that provided comments to federal agencies in 2016 raised concerns related to the extent of agencies accountability for tribal consultation, stating that some agencies or officials are not held accountable for consulting ineffectively or for not consulting with relevant tribes. For example, comments included concerns that tribes may not have appeal options short of litigation when they believe that federal officials did not adhere to consultation requirements. In addition, officials from 21 federal agencies included in our March 2019 report identified factors that they had experienced that limit effective consultation for infrastructure projects. For the purposes of this testimony, we are highlighting those factors that more than 60 percent of the 21 agencies identified as concerns. For example: Maintaining tribal contact information. Officials from 14 of 21 agencies (67 percent) cited difficulties obtaining and maintaining accurate contact information for tribes, which is needed to notify tribes of consultation opportunities. For example, ongoing changes or turnover in tribal leadership make it difficult to maintain updated tribal information, according to some agency officials we interviewed. Agency resources to support consultation. Officials from 13 of 21 agencies (62 percent) cited constraints on agency staff, financial resources, or both to support consultation. Officials from these agencies said that they have limited funding to support consultation activities, such as funding for their staff to travel to in-person consultation meetings for infrastructure projects. Agency workload. 
Officials from 13 of 21 agencies (62 percent) identified a demanding workload for consultation as a constraint, because of large numbers of tribes involved in consultation for a single project, high volumes of consultations, or lengthy consultations, among other reasons. Officials from some of these agencies said that it may be difficult to stay on project schedules when there are multiple tribes to consult with or multiple agencies involved. In March 2019, we also found that the 21 agencies in our review had taken some steps to facilitate tribal consultation, but the extent to which these steps had been taken varied by agency. For example: Developing information systems to help contact affected tribes. Eighteen agencies developed systems to help notify tribes of consultation opportunities, which generally include contact information for tribal leaders or other tribal officials. Three of these agencies also included information on tribes geographic areas of interest. For example, the Department of Housing and Urban Development developed a system that aims to identify over 500 tribes geographic areas of interest and includes their contact information. The Federal Permitting Improvement Steering Council identified developing a central federal database for tribal points of contact as a best practice. We recommended that the council should develop a plan to implement such a database and consider how it will involve tribes to help maintain the information, among other actions. Developing policies to communicate how they considered tribal input. Five agencies tribal consultation policies specify that agencies are to communicate with tribes on how tribal input was considered. For example, the Environmental Protection Agency s policy directs the most senior agency official involved in a consultation to send a formal, written communication to the tribe to explain how the agency considered tribal input in its final decision. However, 16 agencies did not call for such communications in their policies. We recommended that these agencies update their tribal consultation policies to better communicate how tribal input was considered in agency decision- making. Addressing capacity gaps through training. Most of the 21 selected federal agencies have taken steps to facilitate tribal consultation for infrastructure projects by providing a range of training opportunities for staff involved in tribal consultation to help build agency officials knowledge of tribal consultation topics. For example, the U.S. Army Corps of Engineers coordinates an immersive, 4-day training, hosted by a tribe on the tribe s land or reservation for agency staff and other participating agency officials, which focuses on cultural competency important for tribal consultation. Utilizing various approaches to address resource constraints. Some of the selected federal agencies used various approaches to help address resource constraints agencies and tribes may face when consulting on infrastructure projects, according to agency officials. For example, the Bureau of Land Management s policies state that the agency may use its appropriated funds and designated accounts to reimburse tribal members travel expenses to attend meetings in connection with some consultations. The Nuclear Regulatory Commission collects fees from project applicants to cover agency costs related to consultation. 
In conclusion, effective consultation is a key tenet of the government-to-government relationship the United States has with Indian tribes, which is based on tribal sovereignty. Failure to consult, or to consult effectively, sows mistrust; risks exposing the United States to costly litigation; and may result in irrevocable damage to Native American cultural resources. In our March 2019 report, we made recommendations to 17 agencies to take steps to improve their tribal consultation practices, which the agencies generally agreed with and, in one case, have implemented. However, sustained congressional attention to these issues and the relevant factors impacting the effectiveness of agencies' consultation efforts may help to minimize the negative impacts on tribes' cultural resources, when relevant federal laws and regulations apply. Chairman Gallego, Ranking Member Cook, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. <4. GAO Contacts and Staff Acknowledgments> For further information regarding this testimony, please contact Anna Maria Ortiz at (202) 512-3841 or ortiza@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this statement include Lisa Van Arsdale (Assistant Director), Brad Dobbins, Leslie Kaas Pollock, and Jeanette Soares. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study
Federal agencies are required in certain circumstances to consult with tribes on infrastructure projects and other activities—such as permitting natural gas pipelines—that may affect tribal natural and cultural resources. According to the National Congress of American Indians, federal consultation with tribes can help to minimize potential negative impacts of federal activities on tribes' cultural resources.
To ensure expeditious construction of barriers along the southern U.S. border, the Secretary of Homeland Security has waived federal cultural resource laws that generally require federal agencies to consult with federally recognized tribes.
This testimony discusses examples of (1) federal laws and regulations that apply to Native American cultural resources and (2) factors that impact the effectiveness of federal agencies' tribal consultation efforts. It is based on reports GAO issued from July 2018 through November 2019 related to federal laws that apply to Native American cultural resources, tribal consultation for infrastructure projects, and border security. It also includes additional information about the consultation requirements in these cultural resource laws and regulations.
What GAO Found
Examples of federal laws and regulations that apply to Native American cultural resources include:
The Native American Graves Protection and Repatriation Act (NAGPRA). In August 2018, GAO reported that NAGPRA prohibits the intentional removal from, or excavation of, Native American cultural items from federal or tribal lands unless a permit has been issued and other requirements are met. NAGPRA and its implementing regulations contain provisions to address both the intentional excavation and removal of Native American cultural items as well as their inadvertent discovery on federal and tribal lands.
Section 106 of the National Historic Preservation Act (NHPA). In March 2019, GAO reported that section 106 of the NHPA and its implementing regulations require federal agencies to consult with Indian tribes when agency “undertakings” may affect historic properties—including those to which tribes attach religious or cultural significance—prior to the approval of the expenditure of federal funds or issuance of any licenses.
In March 2019, GAO reported that tribes and selected federal agencies identified a number of factors that impact the effectiveness of consultation on infrastructure projects, based on GAO's review of the comments on consultation submitted by 100 tribes to federal agencies in 2016 and GAO's interviews with officials from 57 tribes and 21 federal agencies. Examples of these factors include:
Agency consideration of tribal input. Sixty-two percent of the 100 tribes that provided comments to federal agencies in 2016 identified concerns that agencies often do not adequately consider the tribal input they collect during consultation when making decisions about proposed infrastructure projects.
Maintaining tribal contact information. Officials from 67 percent of the 21 federal agencies in GAO's review cited difficulties obtaining and maintaining accurate contact information for tribes, which is needed to notify tribes of consultation opportunities.
GAO also found that the 21 agencies in GAO's review had taken some steps to facilitate tribal consultation. For example:
Eighteen agencies had developed systems to help notify tribes of consultation opportunities, including contact information for tribal leaders or other tribal officials.
Five agencies' tribal consultation policies specify that agencies are to communicate with tribes on how tribal input was considered.
What GAO Recommends
GAO recommended in March 2019 that 17 federal agencies take steps to improve their tribal consultation practices. The agencies generally agreed and one agency has implemented the recommendation.
gao_GAO-20-208 | gao_GAO-20-208_0 | <1. Background> SEC has five Commissioners who oversee its operations and provide final approval over staff interpretation of federal securities laws, proposals for new or amended rules to govern securities markets, and enforcement activities. Headed by the SEC Chairman, the Commissioners oversee five divisions, 24 offices, and 11 regional offices. As shown in figure 1, SEC has designated four offices and five divisions as mission-critical (i.e., primarily responsible for implementing SEC s mission). Table 1 outlines the roles and responsibilities of these mission-critical offices and divisions. The mission-critical offices and divisions are supported by other offices, such as the Office of Human Resources and the Office of Financial Management. SEC s Office of Human Resources provides overall responsibility for the strategic management of SEC s personnel management and assesses compliance with federal regulations for areas such as recruitment, retention, leadership and staff development, and performance management. In addition, certain divisions have internal human resource coordinators that coordinate between the Office of Human Resources and their respective division heads. The Office of Human Resources reports to SEC s Office of the Chief Operating Officer, which in turn reports to the Office of the Chairman. The Office of Financial Management administers the financial management and budget functions of SEC. The Office assists the Chief Operating Officer in formulating budget and authorization requests, monitors the use of agency resources, and develops, oversees, and maintains SEC financial systems. To carry out its mission, SEC employs staff with a range of skills and backgrounds throughout the United States. As of September 2019, SEC employed 4,369 staff. Of these, approximately 69 percent were designated as mission-critical, and the remaining 31 percent were other professional, technical, administrative, and clerical staff. As shown in figure 2, the largest mission-critical occupational category is attorneys, who make up over 50 percent of all mission-critical employees. In addition, over 40 percent of all mission-critical employees work in one of SEC s 11 regional offices. The regional offices are responsible for investigating and litigating potential violations of securities laws. The regional offices also have enforcement and examination staff to inspect regulated entities. SEC staff are represented by the National Treasury Employees Union (which we refer to in this report as the SEC employees union). To help SEC attract and retain qualified employees, in 2002 Congress enacted the Investor and Capital Markets Fee Relief Act (Pay Parity Act), which allowed SEC to implement a new compensation system with higher pay scales, comparable to those of other federal financial regulators. <1.1. Hiring Freeze> To stay within its annual appropriation, SEC imposed a hiring freeze beginning on October 1, 2016, and lifted it on April 1, 2019. During the hiring freeze, SEC permitted some exceptions on a case-by-case basis to fill positions that it determined to be critical to meeting key agency objectives and maintaining critical programs. Based on SEC s budget justification documents, from October 1, 2016, through September 30, 2018, SEC lost a net total of 476 positions agency-wide, including 363 positions across its mission-critical offices and divisions. 
Figure 3 shows the staffing levels in SEC s mission-critical offices and divisions during fiscal years 2016, 2017, and 2018. <2. Employees Reported Positive Aspects of SEC s Personnel Management and Culture but Also Concerns about Performance Management and Favoritism> The results of our 2019 survey of mission-critical nonexecutive SEC employees indicate that most employees had positive views on some aspects of SEC s personnel management and organizational culture, such as the skills of their direct supervisors and colleagues. Our survey results also indicate that employees had concerns related to SEC s performance management system, perceptions of a risk-averse culture, and perceptions of favoritism in hiring and promotions. Employees had mixed views in other areas, such as morale, communication, and training. Finally, employees responses to key questions on organizational culture in our 2019 survey generally remained consistent with the results from our 2016 survey. See appendix III for a comparison of our 2016 and 2019 survey results for selected questions. <2.1. Employees Expressed Generally Positive Views on Their Direct Supervisors and Colleagues> <2.1.1. Views on Direct Supervisors> Based on the results of our survey of mission-critical nonexecutive employees, we estimate that more than 75 percent of employees had favorable views of their direct supervisors in areas such as their skills and expertise, how they share information, and their willingness to listen to differing approaches (see fig. 4). In addition, we estimate that 70 percent of employees agreed that supervisors and managers in their division or office tolerate honest mistakes as learning experiences, and 68 percent agreed that supervisors and managers in their division or office are genuinely interested in the opinions of their staff. Similarly, in OPM s 2018 Federal Employee Viewpoint Survey (hereafter referred to as OPM s 2018 survey), SEC employees expressed positive views about their supervisors. In that survey, more than 80 percent of SEC employees agreed that they have trust and confidence in their supervisor (83 percent) and that their supervisor listens to what they have to say (88 percent) and treats them with respect (90 percent). <2.1.2. Views on Colleagues> Our survey results also indicate that most employees had positive views about the people SEC hires. As shown in figure 5, we estimate that 79 percent of employees agreed that their division or office is able to attract talented and qualified employees. We also estimate that 75 percent agreed that SEC management usually hires employees who are a good fit for SEC s mission. In addition, in OPM s 2018 survey, an estimated 90 percent of all employees agreed that SEC s workforce has the job- relevant knowledge and skills necessary to accomplish the organization s goals. For OPM s 2018 survey of SEC employees, employees responded positively to questions related to their satisfaction with SEC as a place to work. Based on that survey, SEC s overall score on OPM s Global Satisfaction Index which measures employee satisfaction with job, pay, and their organization was 82 percent, while the government-wide score was 64 percent. In addition, SEC s score on OPM s Employee Engagement Index which measures employees perceptions of leadership, interpersonal relationships between workers and supervisors, and employees feelings of motivation and competency related to their roles in the workplace was 78 percent (compared to 68 percent government-wide). 
Moreover, from OPM s 2013 survey to the 2018 survey, SEC s scores improved in both of these categories by more than 15 percentage points, indicating that employees views are improving over time. <2.2. Survey Indicated Heightened Employee Concerns about Performance Management, Risk-Averse Culture, and Perceptions of Favoritism> <2.2.1. Performance Management> More than 40 percent of employees expressed dissatisfaction with key aspects of SEC s performance management system. As discussed later in this report, at the time of our survey, SEC employees covered by the union s bargaining unit were rated under a pilot performance management system in which they received an initial four-tier rating, which was converted into a final two-tier rating of acceptable or unacceptable. Our survey results indicated areas of dissatisfaction with this system, as shown in figure 6. For example, based on our survey, we estimate that 48 percent of employees disagreed that the performance management system created meaningful distinctions in performance among employees. Similarly, in OPM s 2018 survey, employees also expressed concerns about various aspects of performance management. For example, an estimated 33 percent of employees disagreed that their work unit takes steps to deal with poor performers, and 35 percent disagreed that differences in performance are recognized in a meaningful way. <2.2.2. Perceptions of a Risk-Averse Culture> Our survey indicated that more than 40 percent of SEC employees continued to have concerns about excessive risk aversion the condition in which the agency s ability to function effectively is hindered by the fear of taking on risk. We estimate that 47 percent of nonsupervisors and 48 percent of supervisors agreed that the fear of public scandal has made SEC overly cautious and risk averse. These results were similar to our 2016 survey (46 percent of nonsupervisors and 49 percent of supervisors agreed), which were an improvement from the results of our 2013 survey. In addition, as shown in figure 7, about 40 percent of SEC employees agreed that the fear of being wrong makes senior officers in their division or office reluctant to take a stand on important issues. As we reported in 2013, changes to organizational culture, including reducing excessive risk aversion, require sustained efforts by senior management. Responses to other questions on our survey suggest that managers support the types of activities that may help reduce excessive risk aversion. For example, an estimated 60 percent of employees agreed that innovative ideas are encouraged in their division or office. Also, as noted above, we estimate that 70 percent of employees agreed that their supervisors and managers tolerate honest mistakes as learning experiences. <2.2.3. Perceptions of Favoritism> Our survey results suggest that a quarter of employees had concerns about favoritism in SEC s hiring process, and more than a third had such concerns about its promotion process. With respect to hiring, we estimate that 25 percent of employees agreed that hiring is sometimes based more on personal connections than on substantive experience and qualifications. With respect to promotions, as shown in figure 8, we estimate that 35 percent of nonsupervisory staff disagreed that promotion to management is based more on substantive experience than on favoritism and that favoritism is not an issue in promotions. 
A lack of clarity in the hiring and promotion processes may have contributed to employees perceptions related to favoritism. Based on our survey results, an estimated 50 percent of employees disagreed that the criteria for rewarding and promoting staff are clearly defined. Later in this report we discuss the steps SEC has taken to improve its promotion and hiring policies. <2.3. Employee Views on Morale Were Mixed, and Their Views on Communication and Training Varied by Division> <2.3.1. Morale> While OPM s 2018 survey results indicated that SEC employees largely had positive views about SEC as a place to work, the results of our 2019 survey of mission-critical nonexecutive employees indicate that the recent hiring freeze may have negatively impacted their views on morale. Based on our survey, we estimate that 37 percent of employees disagreed that morale is generally high most of the time, as shown in figure 9. In addition, based on our survey, we estimate that 63 percent of employees believed the recent hiring freeze had a negative impact on morale, including 31 percent who believed the negative effect was large. Over 60 SEC employees provided written survey comments related to morale. Some employees who provided written comments cited other concerns that had a negative impact on morale. For example, some employees stated that low pay increases and the lack of merit pay have contributed to low morale among high-performing employees. Some employees also noted that the 2019 government shutdown had a negative impact on morale by implying that federal employees work is not valuable. <2.3.2. Communication> Most employees expressed positive views on whether cross-divisional communication is encouraged, but employees in some offices and divisions had concerns about communication within their division or office. Specifically, an estimated 66 percent of employees agreed that communication with other divisions and offices on work-related matters is encouraged. These survey results are generally consistent with SEC s results on OPM s 2018 survey, in which an estimated 73 percent of employees agreed that managers support collaboration across work units to accomplish work objectives, and an estimated 69 percent agreed that managers promote communication among different work units. However, in our survey, we found that some employees had more negative views about communication within divisions and offices. For example, we estimate that 34 percent of employees disagreed that information and knowledge are openly shared at all levels within their division or office, and 27 percent of employees disagreed that SEC management ensures employees are included in the flow of relevant information. As shown in figure 10, these figures were highest for employees in the Division of Corporation Finance and the Office of Information Technology. Most SEC employees expressed positive views on SEC s commitment to training and the extent to which their training provided the skills and experience to meet SEC s needs (see fig. 11). However, our survey results indicated heightened concerns about the number of training opportunities with outside instructors in some divisions and offices. 
While we estimate that 76 percent of employees reported that there were opportunities to participate in training that provided the latest industry-specific knowledge with outside instructors, we estimate that more than 30 percent of employees in several offices and divisions indicated that the number of such opportunities was less than adequate (see fig. 12). These concerns were highest in the Office of Information Technology, where more than half of the staff viewed such training opportunities as less than adequate. <2.4. SEC Senior Officers Generally Had Favorable Views of SEC s Personnel Management and Organizational Culture> We administered a separate survey to 80 SEC senior officers in mission- critical offices and divisions, and 50 provided responses. Respondents generally had favorable views on issues such as hiring and retaining talent, communication, training, and morale. For example, 90 percent of senior officers we surveyed said their division or office is able to attract talented and qualified employees and that information is adequately shared across groups in their division or office. In addition, 82 percent agreed that morale is generally high most of the time. However, similar to nonexecutive employees, senior officers expressed concern about SEC s performance management system. For example, 70 percent disagreed that current performance incentives were effective tools to motivate employees to perform well, and 50 percent disagreed that SEC s performance management system provides consistent standards for rewarding performance. <3. Concerns about SEC s Performance Management System Persist> <3.1. SEC Has Not Addressed GAO s Recommendation to Periodically Validate Its Performance Management System> Since 2013, SEC has twice redesigned its performance management system without periodically validating it, as we recommended in 2013. Validating the system typically refers to obtaining staff input and general agreement on the competencies, rating procedures, and other aspects of the system. In our 2013 report, we found that SEC s performance management system reflected many elements of OPM s guidance but that implementation of the system could be improved. Also, consistent with best practices, we recommended that SEC conduct periodic validations, with staff input, of the performance management system and make changes as appropriate based on these validations. SEC agreed with our recommendation. In fiscal year 2016, SEC began to pilot a new performance management system with a four-tier rating scale. According to SEC officials, the four- tier rating system for non-bargaining-unit employees was fully implemented in 2017 and continued as a pilot in fiscal years 2017, 2018, and 2019 for bargaining unit employees. However, SEC did not validate this system. In our 2016 report, we reiterated the importance of our 2013 recommendation and emphasized that SEC should only make changes to its performance management system based on validations and staff feedback. Despite plans to survey all employees to validate the agency s pilot performance management system and obtain employee feedback in fiscal years 2017 and 2018, SEC officials said they have been unable to do so, in part because they could not reach agreement with the SEC employee union on the planned survey questions. SEC and the union agreed in November 2018 that SEC will implement another new performance management system, including a new incentive bonus program, in 2020. 
Because SEC did not validate the four-tier system it was piloting, it missed an opportunity to obtain employee input to inform the design of the new system. Under the new system, all SEC employees will be evaluated on a two-tier rating scale: accomplished performer and unacceptable. In addition, SEC plans to implement a new incentive bonus program that will provide opportunities for high- performing employees to earn a bonus of up to $10,000 once per fiscal year. According to SEC officials, SEC plans to work with OPM to validate the new performance management system by surveying staff on the new system at the conclusion of the 2020 appraisal period, after which OPM will submit a final assessment of the program with any recommended actions for SEC. These plans are consistent with our 2013 recommendation that SEC should conduct periodic validations of its performance management system. However, until SEC completes its planned activities, this recommendation remains unaddressed. The negative views expressed by many employees in our survey underscore the need for SEC to validate its performance management system. As discussed earlier, more than 40 percent of employees were dissatisfied with key aspects of SEC s performance management system, such as the extent to which the performance management system created meaningful distinctions in performance among employees. In addition, based on our survey, we estimate that 30 percent of SEC employees disagreed that SEC s performance management system uses relevant criteria to evaluate their performance. Validating the new performance management system with staff input should help SEC better ensure that it is achieving its goals and identify any changes needed to address employee dissatisfaction with performance management. <3.2. SEC Has Not Developed Mechanisms to Ensure Transparency and Fairness in New Performance Bonus Program> In prior work, we reported that effective performance management requires that the organization s leadership make meaningful distinctions between acceptable and outstanding performance of individuals and appropriately reward those who perform at the highest level. In addition, our prior work on strategies federal agencies can use to manage performance-oriented pay systems has shown the need for agencies to build in safeguards to enhance transparency and ensure the fairness of pay decisions. One such safeguard is to include multiple levels of review of performance ratings and pay decisions to ensure consistency and fairness in the process and the resulting decisions. Another safeguard is to publish aggregate data on the results of the performance cycle, which allows employees to compare results across various groups within the agency while protecting the confidentiality of individual ratings and pay decisions. SEC has not yet developed mechanisms for transparency and fairness for its new performance incentive bonus program. Under the program, a supervisor may nominate an employee who demonstrates exceptional performance according to certain criteria to receive a bonus payment of up to $10,000 once per fiscal year. SEC officials told us that specific policies and procedures for the bonus program were still being developed at the time of our review, but they could not provide details on how they planned to ensure transparency and fairness in implementing the program. 
Moreover, as of November 2019, SEC had not provided detailed policies and procedures, nor had it established a date by which such policies and procedures would be finalized, despite its goal of implementing the new program in January 2020. Developing and implementing adequate safeguards could increase employees confidence in the new performance incentive bonus program. Without adequate safeguards to enhance transparency and better ensure fairness, employee dissatisfaction with performance management may persist and could undermine the credibility of the new bonus program. <4. SEC Has Implemented a More Comprehensive Approach to Workforce Planning and Improved Hiring and Promotion Practices> <4.1. SEC s New Workforce and Succession Planning Processes Address Previously Identified Weaknesses> SEC has taken action to fully implement the two recommendations from our 2013 report related to developing and implementing a comprehensive workforce and succession planning process that is consistent with OPM guidance. In our 2016 report, we found that SEC had developed a workforce and succession plan in response to these recommendations. However, we identified weaknesses with this plan, such as the lack of a comprehensive skills gap analysis to help ensure that employees across all occupations have the skills necessary to fulfill SEC s mission. Since our 2016 review, SEC completed a more comprehensive skills gap analysis and began to implement new workforce and succession planning processes that address other weaknesses we had identified. In fiscal year 2019, SEC developed and began to implement a new workforce planning strategy that outlined new processes for workforce and succession planning. SEC s previous process focused on creating a consolidated workforce plan in a single document that focused on five divisions and two offices, accounting for 67 percent of SEC employees. SEC officials told us that the new process is more dynamic and responsive because it provides more workforce data to officials in the divisions and offices. Specifically, SEC developed various human capital dashboards that provide the Office of Human Resources and agency leaders with up-to-date data on the state of the agency s workforce, such as data on hiring, attrition, skill gaps, and other workforce demographics. Key components of SEC s new workforce and succession planning processes address weaknesses identified in our prior work: Skills gap analysis. Our 2016 review found that SEC s workforce plan lacked a comprehensive skills gap analysis covering all SEC occupations. In 2018, SEC conducted an agency-wide competency survey to identify skills gaps by position in each division and office. SEC incorporated the results of this survey into one of its human capital dashboards that allows users to interact with the data directly. Specifically, SEC s Workforce Competency Dashboard provides competency data (including gaps) across offices and divisions, allowing users to explore critical skill gaps by competency. According to SEC s workforce planning strategy, divisions and offices can use the data to address skill gaps through activities such as training, hiring, and knowledge sharing. For example, to address an identified gap in written communication and critical thinking for newly hired investigative attorneys, the Division of Enforcement and the Office of Human Resources developed interview questions to better screen for these skills during the hiring process. Human capital reviews. 
We also found in 2016 that SEC s workforce plan was not clearly linked to its budget formulation and did not inform decision-making about the structure of the workforce. Under its new workforce planning process, SEC links its workforce planning to its budget through annual human capital reviews in which divisions and offices work with the Office of Human Resources to identify workforce needs and priorities to directly inform their operating plans and budget requests. These human capital reviews include discussions about the capacity and capability of the organization to meet current mission needs and whether areas of the workforce need to be reshaped to meet SEC s mission. SEC officials told us that under SEC s previous workforce planning process, these reviews were conducted concurrently with budget meetings, whereas under its new process these meetings are conducted prior to the budget meetings. This change allows divisions and offices to use the information from the review meetings to prepare for their budget meetings. In addition, the human capital review meetings are informed by data maintained in SEC s new Workforce Supply Dashboard, which provides information on the composition and demographics of SEC divisions and offices and allows users to view data on hiring, attrition, and other workforce indicators. For example, through this process, SEC recently determined that it had an excess of certain positions, such as clerks and assistants responsible for data processing and management. This determination led SEC to request permission from OPM for a targeted early retirement authority and incentives for individuals in such positions. New succession planning processes. In 2016, we found that SEC s succession planning lacked information on workforce attrition and a fair and accurate process for identifying future leaders. Under SEC s new succession planning process, the Office of Human Resources tracks senior-level turnover to determine the level of attrition at senior leadership levels and to determine whether SEC is filling these positions internally or externally. In addition, the Office of Human Resources created a standardized template that managers in each division and office use to identify key leadership positions and candidate pools. According to SEC, this more standardized approach offers an extra level of precision and rigor to identify the specific leadership strengths and risks across the largest divisions and offices. In addition, since our 2016 report, SEC has improved processes for analyzing its talent pool for new leaders. In 2017, the Office of Human Resources surveyed employees to gauge their interest and intent in progressing to higher levels of management responsibility, including to the senior officer ranks. SEC is also developing a centralized program to screen and select a cohort of high-potential leaders who will be certified and available to fill senior officer positions as they become vacant. SEC officials said they anticipate the program will be launched in the second half of fiscal year 2020. The processes and tools described above are still new, and SEC is continuing to integrate and develop them fully. For example, 2019 was the first year SEC used its new workforce planning process, and SEC officials told us that senior officers are still learning how they can best use new tools, such as the new human capital dashboards. 
One SEC official told us that SEC is still refining this new approach and plans to consider additional enhancements to the dashboards, such as including more forward-looking data to inform discussions of future workforce needs. Although SEC continues to enhance its new process and practices, the actions it has taken fully implement our two 2013 recommendations. <4.2. SEC Has Improved Hiring and Promotion Practices> SEC has taken steps to improve certain practices related to hiring and promotions. For example, in 2016, we found that SEC had not identified skills gaps among its hiring specialists and that these staff received limited training. As a result, SEC lacked assurance that its hiring specialists had the necessary skills to hire and promote the most qualified applicants. We recommended that SEC develop and implement training for hiring specialists that is informed by a skills gap analysis. In response to our recommendation, SEC s Talent Acquisition Group partnered with SEC s training group to conduct a competency gap assessment for each of the Talent Acquisition Group s five primary jobs. Based on the results of this competency assessment, in 2018 SEC developed and prioritized a 2-year training plan for hiring specialists to address the identified skills gaps and to better enable SEC to recruit, develop, and retain competent staff. This skills gap analysis and the new training curriculum for hiring specialists fully address our 2016 recommendation. SEC also made changes to policies for promotion announcements to improve perceptions of fairness and transparency. For example, since 2016, a promotion opportunity can be limited to applicants within a single division or office only if that division or office has at least 15 eligible candidates. If there are fewer than 15, the announcement must be opened more broadly to candidates in SEC beyond that particular office or division. In addition, SEC now requires that promotion announcements be open for a minimum of 10 business days. <5. To Enhance Communication, SEC Has Identified Best Practices and Established Cross- Divisional Working Groups> SEC has fully addressed recommendations we made in 2013 and 2016 to improve intra-agency communication and collaboration: Incentives for staff to communicate and collaborate. In 2013, we found that SEC had made efforts to improve communication and collaboration but had not fully addressed barriers to an environment of open communication. We recommended that the SEC Chief Operating Officer identify and implement incentives for all staff to support an environment of open communication and collaboration. We determined that this recommendation had been fully implemented in November 2017. Among other steps, in 2016 SEC revised its performance expectations for supervisors to encourage communication and collaboration and proactively share relevant information. Best practices for communication and collaboration. In 2013, we recommended that SEC explore communication and collaboration best practices and implement those that could benefit SEC. SEC has taken action to fully implement this recommendation. Specifically, SEC s Office of the Chief Operating Officer engaged a third-party management consultant team to complete a study of best practices for communication and collaboration, which was completed in 2018. For the study, the consultants developed a framework of best practices recognized in the public and private sectors and assessed SEC s practices against the framework. 
The consultants found that each of the best practices in its framework was met by at least one of SEC s activities, tools, technologies, or initiatives. The report included eight recommendations to help address barriers to cross-division communication and collaboration, among other goals. In response to these recommendations, as of May 2019, SEC had taken action on six recommendations and developed planned actions for the remaining two. For example, to facilitate staff-to- staff communication and collaboration, SEC officials updated the intranet sites of each mission-critical office and division with main contact telephone numbers and staff directories. In addition, SEC plans to pilot an electronic communication tool for project execution among teams collaborating across divisions and offices that will provide more functionality than SEC s current application. Cross-divisional committees and working groups. In 2016, we noted that the lack of a central position or office with authority over the daily operations of all divisions and offices made it difficult to address challenges related to communication and collaboration. We recommended that SEC enhance or expand the responsibilities and authority of the Chief Operating Officer or another official or office to help ensure that improvements to communication and collaboration across SEC were made. While SEC disagreed with this recommendation, it has taken actions that meet the intent of our recommendation. First, SEC created cross-divisional committees and working groups that help to enhance intra-agency communication and collaboration. For example, in 2018, SEC created an Operations Steering Committee, which consists of senior operational leaders throughout the agency who meet on a regular basis to discuss and collaborate on cross-agency operational issues, including those related to human capital. SEC also created other formal intra-agency committees and working groups, including an Information Technology Capital Planning Committee, an Emerging Risk Group, and a Data Management Board. Second, between 2009 and 2018, SEC established Managing Executive positions in the Office of the Chairman and in eight of its nine mission- critical offices and divisions. Managing Executives are responsible for working closely with one another, including serving together on intra- agency working groups, to facilitate effective internal collaboration on operations issues, including personnel management. The Managing Executive in the Office of the Chairman, established in 2017, acts as a liaison between the Chairman s office and the various committees and working groups. According to an agency official, having a Managing Executive position in the Office of the Chairman helps ensure that someone from the Chairman s office has the time to devote to operational issues. <6. Conclusions> SEC has taken a number of actions since 2016 to strengthen its personnel management. It has implemented a more comprehensive approach to workforce planning and improved intra-agency communication and collaboration through new working groups and implementation of best practices. OPM s 2018 employee survey also suggests that employee satisfaction at SEC has improved. Despite this progress, SEC has yet to validate its performance management system since we recommended it do so in 2013. Without such validation, SEC may lack information that could help it identify changes needed to address employee dissatisfaction and ensure its system achieves its goals. 
We therefore reiterate our 2013 recommendation that SEC conduct periodic validations, with staff input, of the performance management system and make changes as appropriate based on these validations. Consistent with our recommendation, SEC officials stated they plan to work with OPM to validate the new performance management system. However, until SEC completes its validation of the new system, which it plans to do at the conclusion of the 2020 appraisal period, this recommendation remains unaddressed. Finally, a key feature of SEC s new performance management system will be a performance incentive bonus program through which SEC supervisors will be able to nominate individual employees for a bonus of up to $10,000 once per fiscal year. Our prior work on performance management has highlighted the importance of safeguards that can help ensure that agencies performance management systems and particularly the systems affecting pay are fair and transparent. At the time of our review, SEC was in the process of designing the performance incentive bonus program and did not provide us with detailed policies or procedures. As SEC works to finalize procedures for this bonus program, incorporating safeguards such as multiple levels of review of performance ratings and pay decisions can help to promote employee confidence in the integrity of the program. <7. Recommendation for Executive Action> The Chair of the Securities and Exchange Commission should direct the Chief Operating Officer to develop and implement safeguards to better ensure transparency and fairness in SEC s new performance incentive bonus program. Such safeguards could include multiple levels of review of performance ratings and pay decisions and publishing aggregate data on the results of the performance cycle that allow employees to compare results across various groups within the agency while protecting the confidentiality of individual ratings and pay decisions. (Recommendation 1) <8. Agency Comments> We provided SEC a draft of this report for its review and comment. SEC provided written comments that are reprinted in appendix VI. SEC also provided technical comments that we incorporated, as appropriate. In its written comments, SEC stated that it concurred with, and plans to implement, our recommendation to develop and implement safeguards to better ensure transparency and fairness in its new performance incentive bonus program. SEC stated that it appreciated our suggested practices, and that it will conduct research to consider additional safeguards. SEC also highlighted its implementation of eight of nine of our previous recommendations related to personnel management. SEC noted its progress in the areas of workforce planning and intra-agency communication and recognized that further work remains to be done. With respect to our 2013 recommendation that it conduct periodic validations of the performance management system, which SEC has not yet implemented, SEC stated that it expects to obtain feedback from employees and managers at the conclusion of the 2020 performance cycle to identify further improvements, and that it is committed to conducting periodic evaluations of its system in the future. We will continue to monitor SEC s progress toward implementing this recommendation. We are sending copies of this report to the appropriate congressional committees, the Chairman of the Securities and Exchange Commission, and other interested parties. 
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-8678 or clementsm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. Appendix I: Status of GAO s 2013 and 2016 Personnel Management Recommendations to the Securities and Exchange Commission Table 2 provides the status of recommendations we made to the Securities and Exchange Commission in 2013 and 2016. Appendix II: Objectives, Scope, and Methodology This report examines (1) employees views on the Securities and Exchange Commission s (SEC) personnel management and organizational culture, (2) SEC s efforts to implement a performance management system, (3) SEC s implementation of a workforce planning process, and (4) SEC s steps to strengthen communication and collaboration within and across its divisions and offices. <9. Analysis of Employees Views on SEC s Personnel Management and Organizational Culture> To examine employees views on SEC s personnel management and organizational culture, we conducted two surveys of SEC staff, performed a content analysis of open-ended responses to our surveys, and conducted individual interviews. Surveys. To obtain employees views on SEC s personnel management and organizational culture, we implemented two web-based surveys from March 2019 to May 2019. We administered the first survey to a stratified random sample of 877 nonexecutive employees in mission-critical occupations in mission-critical offices and divisions. We administered the second survey to all 80 senior officers in mission-critical offices and divisions. To determine our sample of nonexecutive employees, we stratified the population of mission-critical SEC employees into sampling strata by office and division to help mitigate the risk that a particular part of SEC could be over- or underrepresented by the respondents to our survey. We stratified the Division of Enforcement and the Office of Compliance Inspections and Examinations into two further categories ( headquarters and regional office ) because this division and office have a majority of their staff located in one of SEC s 11 regional offices. Table 3 shows the total number of employees and the number of employees selected in our sample for each of the strata. Due to their small employee counts, we combined the Offices of the Chief Accountant and Credit Ratings into one stratum for the purpose of selecting the sample. Prior to selecting the sample, we sorted the sample frame by supervisory status within each stratum. We then selected the sample via systematic random sampling within each stratum. Our initial sample size allocation was designed to achieve a stratum-level margin of error no greater than plus or minus 8 percentage points at the 95 percent level of confidence. Based upon our prior surveys on SEC s personnel management, we assumed a response rate of 70 percent to determine the sample size for the mission-critical employees. Because some employees left SEC between the time we obtained a list of SEC employees and the launch of the survey, the final sample size decreased from 884 to 877. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. 
Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. We provide confidence intervals along with each sample estimate in the report. All survey results presented in the body of this report are generalizable to the estimated population of 2,907 in-scope mission-critical employees at SEC as of September 30, 2018. For our survey of nonexecutive employees in the mission-critical offices and divisions, 563 nonsupervisors and supervisors responded to our survey, for a response rate of 64 percent. For our survey of all mission- critical senior officers, 50 responded to our survey, for a response rate of 63 percent. For the nonexecutive survey, we carried out a statistical nonresponse bias analysis using available administrative data and determined that the results are generalizable to SEC s mission-critical employees. We do not attempt to extrapolate the findings of our senior officer survey to those who chose not to participate. Each GAO survey of SEC staff included questions on personnel management issues related to (1) recruitment, training, staff development, and resources; (2) communication among and within divisions and offices; (3) leadership and management; (4) performance management and promotions; and (5) organizational culture and climate. The separate survey of all mission-critical SEC senior officers (those at the SO-1, SO-2, and SO-3 pay grades) covered the same topic areas but omitted questions not relevant to senior officers and included additional questions specifically relevant to senior officers. Our surveys included both multiple-choice and open-ended questions. We analyzed the results of our 2019 survey of supervisory and nonsupervisory staff and senior officers, and we compared the results to results of similar surveys we conducted in 2013 and 2016. In addition, we reviewed the Office of Personnel Management s (OPM) 2018 Federal Employee Viewpoint Survey results to obtain additional perspectives from SEC staff on issues related to the agency s personnel management and to compare SEC s results to government-wide responses. To minimize certain types of errors, commonly referred to as nonsampling errors, and enhance data quality, we employed recognized survey design practices in the development of the questionnaires and the collection, processing, and analysis of the survey data. To develop our survey questions, we drew on prior GAO SEC personnel management surveys. For both of our 2019 surveys, we took steps to ensure that survey questions from 2016 were still relevant and to determine if new issues warranted new questions. To do this, we reviewed information from individual interviews with current and former employees, met with five mission-critical employees to pretest the nonexecutive survey, and met with two senior officers to obtain their feedback on the senior officer survey. As a result of these meetings, for example, we added three questions related to the impact of SEC s hiring freeze on personnel management. In addition, a GAO survey expert reviewed and provided feedback on our survey instrument. 
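To illustrate the arithmetic behind the sample size targets and confidence intervals described in this appendix, the following sketch shows how a stratum-level sample size could be planned for a margin of error of plus or minus 8 percentage points at the 95 percent level of confidence with an assumed 70 percent response rate, and how a 95 percent confidence interval for an estimated proportion could be computed with a finite population correction. This is a simplified, single-stratum illustration only; it is not the estimation code GAO used, the stratum size and survey result in the example are hypothetical, and GAO's published estimates combine strata using sampling weights.

```python
import math

Z_95 = 1.96  # two-sided critical value for a 95 percent confidence level


def planned_sample_size(stratum_population, margin_of_error=0.08,
                        expected_response_rate=0.70, assumed_proportion=0.5):
    """Plan the number of employees to sample in one stratum.

    Uses the conservative planning assumption of a 50 percent proportion,
    applies a finite population correction, and inflates the result to
    account for the expected response rate.
    """
    n0 = (Z_95 ** 2) * assumed_proportion * (1 - assumed_proportion) / margin_of_error ** 2
    n = n0 / (1 + (n0 - 1) / stratum_population)  # finite population correction
    return min(stratum_population, math.ceil(n / expected_response_rate))


def proportion_confidence_interval(agreeing, respondents, population):
    """95 percent confidence interval for an estimated proportion, with a
    finite population correction for sampling without replacement."""
    p_hat = agreeing / respondents
    fpc = (population - respondents) / (population - 1)
    standard_error = math.sqrt(fpc * p_hat * (1 - p_hat) / respondents)
    half_width = Z_95 * standard_error
    return p_hat, max(0.0, p_hat - half_width), min(1.0, p_hat + half_width)


if __name__ == "__main__":
    # Hypothetical stratum of 300 mission-critical employees.
    print(planned_sample_size(stratum_population=300))
    # Hypothetical result: 350 of 563 respondents agreeing with a survey item,
    # generalized to the estimated population of 2,907 mission-critical employees.
    estimate, lower, upper = proportion_confidence_interval(350, 563, 2907)
    print(f"{estimate:.1%} (95% CI: {lower:.1%} to {upper:.1%})")
```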
To reduce nonresponse, another source of nonsampling error, we sent multiple emails encouraging SEC employees to complete the surveys, and we made telephone calls to nonrespondents to encourage participation and troubleshoot any logistical issues in accessing the questionnaire. We also had respondents complete questionnaires online to eliminate errors associated with manual data entry. On the basis of our application of these practices and follow-up procedures, we determined that the survey data were of sufficient quality for the purpose of obtaining employees views on SEC s personnel management and organizational culture. Content analysis. To analyze the information we obtained from the open-ended survey responses, we conducted a content analysis on the 633 responses to the six open-ended survey questions from the survey of the mission-critical offices and divisions. Five staff members developed coding categories based on our researchable objectives, information collected during our individual interviews, and the findings from our December 2016 report. Coding categories were as follows: (1) workforce management, (2) communication, (3) management, (4) promotions, (5) performance management, and (6) risk aversion. For each of the responses to the six open-ended questions, a GAO analyst categorized the response into the respective coding categories. A second GAO analyst reviewed the coding, and any disagreements in the coding were resolved through discussion or with a third analyst. Individual interviews. We interviewed 51 nonsupervisory and supervisory employees in person at SEC headquarters and by telephone for those in headquarters and regional offices in November and December 2018 to obtain their views on personnel management at SEC. Using information provided by SEC, we sent 577 letters to all employees who separated from SEC between March 2016 and November 2018, offering them an opportunity to schedule a meeting with us. We interviewed 15 of these former SEC employees by phone in January and February 2019. We asked certain questions of every person we interviewed related to (1) what personnel management practices were working well, (2) what challenges existed in personnel management, and (3) what initiatives, if any, SEC had taken to address these challenges. To maintain the confidentiality of individual responses, we did not record individual names in our transcripts. Instead, we collected and analyzed the information by division and rank only, and we aggregated our findings so that no individual comments could be identified. GAO analysts summarized themes that emerged from these individual interviews and used them to identify key issues related to SEC s personnel management and inform the design of our surveys. <10. Review of SEC Personnel Management Practices> To obtain information on SEC s efforts related to performance management, workforce planning, and communication and collaboration, we reviewed relevant SEC documents and interviewed SEC officials in the Office of Human Resources and other divisions and offices. We reviewed changes SEC made to its personnel management practices since our 2016 review, including steps taken to address our recommendations in these areas. 
We interviewed SEC staff from the Office of Human Resources about the status of SEC s efforts to pilot and implement a performance management system, including the status of SEC s efforts to address our 2013 recommendation that SEC conduct periodic validations of its performance management system and make changes, as appropriate, based on these validations. We also reviewed documents describing changes to SEC s performance management system. At the time of our review, SEC had plans to implement a new performance management system, including a new incentive bonus program, in January 2020 but had not yet completed detailed policies and procedures to implement this new system. However, we compared the system s key features with criteria identified in prior GAO work, including work on strategies federal agencies can use for fair and transparent performance management. In addition, we reviewed the SEC Office of Inspector General s 2018 report that described progress and challenges in the agency s performance management efforts. To examine SEC s workforce and succession planning practices, we obtained and reviewed a copy of SEC s fiscal year 2019 2022 Workforce and Succession Planning Strategy, which outlines new approaches to workforce and succession planning that SEC began to implement in fiscal year 2019. We also obtained and reviewed documentation of SEC s implementation of key steps in its workforce and succession planning processes, such as the survey instrument used to identify skill gaps for all SEC occupations, slide presentations of SEC divisions operating plans and budget requests that are informed by human capital review meetings, examples of action plans SEC divisions and offices developed to address identified skill gaps, SEC s Succession Planning Tool Kit, and relevant training plans for SEC divisions. In addition, we attended an SEC-led demonstration of the agency s new human capital dashboards, which are interactive software tools that provide the Office of Human Resources and agency leaders with up-to- date data on the state of the agency s workforce, such as data on hiring, attrition, skill gaps, and other workforce demographics. We also interviewed staff from SEC s Office of Human Resources and senior leaders from different SEC divisions. We compared SEC s workforce planning process against key principles for effective workforce planning, and we assessed SEC s efforts to strengthen its workforce and succession planning efforts to determine the extent to which they addressed our 2013 recommendations related to developing a more comprehensive approach to workforce and succession planning. This assessment included reviewing the extent to which key components of SEC s workforce and succession planning processes aligned with OPM standards on workforce and succession planning. In addition, we reviewed the changes SEC made to its hiring and promotion policies since our last review, including the steps SEC took to address our 2016 recommendation related to developing and implementing training for hiring specialists that is informed by a skill gap analysis. To examine steps SEC has taken to strengthen intra-agency communication and collaboration, we assessed SEC s efforts to address prior recommendations in this area. Specifically, we reviewed a report by a third-party vendor on communication and collaboration practices at the agency and met with the vendor s program manager. 
We also obtained and reviewed documentation of SEC s actions to implement recommendations included in the vendor s report. In addition, we reviewed documentation related to SEC s cross-divisional committees and working groups, including the charter of SEC s Operations Steering Committee, a cross-agency group chaired by the Chief Operating Officer whose purpose is to facilitate predecisional communications on significant cross-agency operational issues. To obtain information on the effectiveness of SEC s efforts to enhance communication and collaboration, we also met with senior leaders from SEC s largest offices and divisions, as well as selected members of SEC s Operations Steering Committee. We assessed the reliability of all of the data we used during this review and determined they were sufficiently reliable for the purposes of selecting our survey sample; developing summary tables on staffing ratios and turnover; and describing trends and views on personnel management practices at SEC. We used SEC data extracted from the Department of the Interior s Federal Personnel/Payroll System to construct the sample frames for our two surveys and develop summary tables in our appendixes. To determine the reliability of these data, we reviewed related documentation, tested the data for missing data and errors, and obtained written responses from SEC employees about data quality and control. To assess the reliability of the Federal Employee Viewpoint Survey data, we reviewed technical documentation of the survey and conducted routine data checks. We conducted this performance audit from August 2018 to December 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix III: Employee Views on Selected Survey Questions from GAO s 2013, 2016, and 2019 Surveys Figure 13 below shows the results of eight questions related to personnel management and organizational culture from our 2013, 2016, and 2019 surveys of Securities and Exchange Commission (SEC) employees in mission-critical occupations in mission-critical divisions and offices. However, there are important limitations in comparing the results of our 2019 survey to the previous surveys. First, while the results of our 2019 survey were generalizable to all mission-critical nonexecutive employees, the results of our 2013 and 2016 surveys were not. Second, while we present the results for mission-critical employees for each year, for our 2019 survey, we changed the definition of mission-critical to reflect changes SEC had made to its mission-critical designations. The divisions, offices, and occupational categories largely remained the same across the 3 survey years with the following exceptions: for our 2019 survey, the Offices of Information Technology, Credit Ratings, and the Chief Accountant were added to the category of mission-critical offices and divisions. In addition, financial analysts were removed and information technology specialists were added to our list of mission-critical occupations. Third, while we administered the 2019 survey to a representative sample of mission-critical employees, we administered our 2013 and 2016 surveys to all mission-critical employees. 
As such, we present our 2019 results as estimated percentages with bands representing the range of results within a 95 percent confidence interval. Finally, when comparing our 2019 results on these eight questions to the 2016 survey results, we found that the 2016 results were generally within the confidence intervals of the 2019 results. In these cases, we cannot conclude whether the changes are statistically significant. Overall, employees' views on whether there is an atmosphere of trust improved since our 2016 survey. Nonsupervisory employees' views on whether the criteria for promotion are clearly defined and whether information is adequately shared across groups in their division or office also improved. However, for the remaining survey questions, we could not conclude whether employees' views improved or worsened because changes in employees' views were within the confidence intervals or were only seen on either the agree or disagree side of the survey scale, not both. Appendix IV: Ratios of Securities and Exchange Commission Supervisors and Senior Officers, Fiscal Years 2008-2018 Section 962 of the Dodd-Frank Wall Street Reform and Consumer Protection Act included a provision for us to review whether there is an excessive number of low-level, mid-level, or senior-level managers at the Securities and Exchange Commission (SEC). We did not identify any standards that have been established for evaluating excessive numbers of supervisors. Therefore, we are reporting on the ratio of SEC employees at the various levels for fiscal years 2008 through 2018 in mission-critical offices and divisions. Table 4 illustrates the ratio of nonsupervisors to supervisors at SEC. Table 5 illustrates the ratio of nonsupervisors to senior officers, and table 6 illustrates the ratio of supervisors to senior officers. Appendix V: Percentage of Staff Who Left the Securities and Exchange Commission, Fiscal Years 2008-2018 Section 962 of the Dodd-Frank Wall Street Reform and Consumer Protection Act included a provision for us to review turnover rates within Securities and Exchange Commission (SEC) subunits. While staff turnover rates could be used to identify potential areas for improvement and further develop current supervisors, turnover may not be a good indicator of poor supervision for several reasons. For example, staff may leave to pursue opportunities with a different employer or a different career path, or for personal reasons. Tables 7 and 8 show the percentage of staff who left SEC from fiscal years 2008 through 2018 from headquarters and the 11 regional offices, respectively. Table 9 shows the total number of staff who left SEC during the same period. Appendix VI: Comments from the Securities and Exchange Commission Appendix VII: GAO Contact and Staff Acknowledgments <11. GAO Contact> <12. Staff Acknowledgments> In addition to the contact above, John Fisher (Assistant Director), Charlene J. Lindsay (Analyst-in-Charge), Grzegorz (Greg) Borecki, Carl Barden, Pamela Davidson, Jill Lacey, Marc Molino, Kirsten Noethen, Shannon Smith, Jennifer Schwartz, Benjamin Wiener, and Jason Wildhagen made key contributions to this report.
Why GAO Did This Study
The Dodd-Frank Wall Street Reform and Consumer Protection Act contains a provision for GAO to report triennially on SEC's personnel management. GAO's first two reports ( GAO-13-621 and GAO-17-65 ) identified a number of challenges and included nine recommendations.
This report examines (1) employees' views on SEC's personnel management, (2) SEC's performance management system, (3) SEC's steps to improve its workforce planning processes, and (4) SEC's efforts to improve communication and collaboration. GAO surveyed a representative sample of nonexecutive SEC employees in key occupations and all senior officers in nine key divisions and offices (with response rates of 64 and 63 percent, respectively). The results of the nonexecutive employee survey are generalizable to SEC's mission-critical employees. GAO also followed up on prior recommendations, reviewed SEC documents and personnel management practices, analyzed SEC workforce data, and interviewed SEC officials.
What GAO Found
Securities and Exchange Commission (SEC) employees in the five divisions and four offices GAO surveyed expressed positive views on some aspects of SEC's personnel management but reported concerns in other areas. For example, employees GAO surveyed generally had positive views on their direct supervisors and colleagues—81 percent of nonexecutive employees agreed that their direct supervisors had the skills and expertise to be effective managers. However, more than one-third of employees expressed concerns in areas such as performance management and favoritism. For example, 48 percent of nonexecutives disagreed that the performance management system in place at the time of GAO's review created meaningful distinctions in performance.
SEC has implemented eight of GAO's nine recommendations related to personnel management. However, SEC has not yet implemented a 2013 GAO recommendation to validate its performance management system—that is, to obtain staff input and agreement on the competencies, rating procedures, and other key aspects of the system. SEC plans to implement a new system in 2020, and validating this system would help ensure that it achieves its goals and identify changes needed to address employee dissatisfaction with performance management. In addition, a key feature of SEC's new performance management system will be a bonus program through which supervisors can nominate high-performing employees for a bonus of up to $10,000 once per fiscal year. However, SEC has not yet developed mechanisms for transparency and fairness for this new bonus program. GAO has previously highlighted the need for safeguards to better ensure fairness and transparency in performance management, particularly around systems affecting pay. Incorporating safeguards into the new bonus program—such as including multiple levels of review and publishing aggregate data on award decisions—would promote transparency and could increase employee confidence in the program.
Since GAO's most recent review in 2016, SEC has taken actions to implement a more comprehensive workforce planning process and strengthen intra-agency communication and collaboration. For example, SEC conducted a comprehensive analysis to identify skills gaps in its workforce. It also improved the link between its budget formulation process and annual meetings in which the Office of Human Resources consults with each division and office on its workforce needs and priorities. Additionally, to strengthen communication and collaboration, SEC commissioned a study to identify relevant best practices and created formal mechanisms, such as working groups, to enhance collaboration across divisions and offices. For example, in 2018, SEC created its Operations Steering Committee through which senior operational leaders throughout the agency periodically meet to coordinate on cross-agency operational issues, including those related to human capital.
What GAO Recommends
SEC should develop and implement safeguards to better ensure transparency and fairness in its new incentive bonus program. SEC agreed with this recommendation. GAO also reiterates its recommendation in GAO-13-621 that SEC conduct periodic validations (with staff input) of the performance management system and make changes, as appropriate, based on these validations. SEC stated that it expects to take action on this recommendation at the end of the 2020 performance cycle.
gao_GAO-20-186 | gao_GAO-20-186_0 | <1. Background> VHA recommends that all veterans who receive VHA services be screened for HIV as part of routine medical care, including those who do not think they are at risk for acquiring the virus. The aim is to ensure that veterans who are infected with the virus can be diagnosed as early as possible, receive life-saving care, and avoid passing the virus on to others. VHA has made earlier diagnosis of HIV a priority for the agency and established certain requirements for VAMC providers that aim to achieve early diagnoses and rapid linkages to HIV care for veterans. HIV screening at VAMCs involves three stages, and related VHA policy sets forth providers requirements related to each of these stages. (See fig 1.) Stage one: providing HIV tests to consenting veterans. A provider in a primary care clinic, a specialty care setting (such as an infectious disease clinic), or other outpatient setting (such as a women s health clinic) offers a voluntary HIV test to an eligible veteran. In accordance with Centers for Disease Control and Prevention (CDC) recommendations, VHA policy requires providers to offer a one-time test to all veterans; annual tests to veterans with known higher risk factors for acquiring the virus, such as injection drug use; and tests every 3 months to veterans with known higher risk factors who are prescribed preventive medication known as pre-exposure prophylaxis (PrEP). Once a provider obtains consent from the veteran to be tested for HIV, the provider initiates an HIV test order with the laboratory. Although VHA policy previously required that providers document that they obtained veterans verbal consent to be tested for HIV, as of April 2019, providers must obtain, but no longer need to document, such consent. In addition, under VHA policy, providers must order the most current CDC- recommended HIV test (which detects HIV antigens and antibodies) when clinically indicated, and laboratories must follow the CDC-recommended HIV testing algorithm (see text box). A blood sample is collected from the veteran, and the laboratory processes the HIV test. <1.1. Information Technology Solutions and Contacting Veterans As Needed Are Among the Approaches That Selected VAMCs Use to Facilitate HIV Screening Officials from Selected VAMCs Reported Using Information Technology Solutions, Such as Clinical Reminders, to Facilitate the Provision of HIV Tests for the First Stage of Screening> Officials from the five selected VAMCs reported using information technology solutions and other strategies to facilitate each of the three stages of HIV screening: providing HIV tests to consenting veterans (stage one), communicating HIV test results to veterans (stage two), and linking HIV-positive veterans to care (stage three). Officials from multiple VAMCs in our review stated their providers use information technology solutions, such as clinical reminders, to fulfill their requirements related to the first stage of HIV screening: offering HIV tests to veterans, obtaining veterans verbal consent to be tested, and ordering the most current recommended HIV test. Offering HIV tests to veterans. Officials from three VAMCs in our review told us that providers often use clinical reminders that were developed and implemented by the VAMC or associated VISN to prompt them to offer HIV tests to veterans. (See fig. 2.) According to these officials, clinical reminders are used to prompt providers to offer a one-time HIV test to veterans who have not been tested. 
They can also be used to facilitate providers identification of veterans who are at higher risk for acquiring HIV and subsequently prompt them to offer these veterans an HIV test on an annual, rather than a one-time, basis. For example, officials at two of these three VAMCs indicated that the reminders include prompts for determining if veterans are at higher risk of acquiring HIV or fields to document identified risk factors. One of these officials told us that the recurrence of these clinical reminders can subsequently be increased or decreased to prompt providers to offer an HIV test to veterans who are at higher risk of acquiring HIV on a more or less frequent basis, depending on the risk factors identified over time. Obtaining veterans verbal consent to be tested. According to officials from the three VAMCs that discussed the use of clinical reminders, this technology prompts providers to obtain veterans verbal consent to be tested for HIV before ordering tests. Further, the reminders give providers a way to document that consent was obtained. For example, officials at one of the three VAMCs stated that providers can access the laboratory menu, which they use to order an HIV test, through the clinical reminder. The officials stated that providers must either (a) document that they obtained veterans verbal consent within the clinical reminder before accessing the menu, or (b) document that verbal consent was obtained once they have accessed the menu. Ordering recommended HIV tests. Officials from four of the VAMCs in our review reported that the facilities laboratory menus are designed to make it easier for providers to order the most current CDC-recommended HIV test. For example, officials from two VAMCs told us that the most current CDC-recommended HIV test is either the first result that appears when searching for an HIV test within the laboratory menu or the first HIV test that appears within a list of different types of HIV tests. According to officials from another VAMC, the facility s laboratory menu includes a prompt that explains that an HIV viral load test (a test that is primarily used to monitor an active HIV infection) is not recommended solely to be used for diagnostic purposes if a provider attempts to order such a test for this purpose. <1.2. Officials from Selected VAMCs Reported Contacting Veterans to Schedule Non-Routine Appointments to Communicate Positive HIV Test Results for the Second Stage of Screening> Officials at each of the five VAMCs in our review told us that staff contact veterans to schedule non-routine, in-person appointments within the 7 day time frame to inform them that they have tested positive for HIV. According to officials at four VAMCs, staff first place phone calls to veterans and request that the veterans schedule face-to-face visits with providers. Officials at two VAMCs explained that providers attempt to inform veterans of positive HIV test results in person given the sensitive nature of the diagnosis, as recommended by VHA policy. If staff cannot reach the veterans by phone, officials at these two VAMCs indicated that they send letters to the veterans asking them to contact their providers to obtain their test results. Further, officials at three VAMCs stated that staff send letters to veterans to inform them of negative HIV test results within the required 14 day time frame. 
Officials from all five VAMCs in our review also reported using various, additional approaches to communicating negative HIV test results to veterans, including notifying them by phone, informing them of test results during face-to-face visits, or uploading test results into veterans personal electronic health records (EHR). In addition, all five VAMCs in our review have developed protocols to prevent delays in the communication of positive HIV test results to veterans when the provider who ordered the test is unavailable. These protocols are generally outlined in facility-specific policies, which we reviewed, that require that a designee communicate positive HIV test results to veterans in lieu of the ordering provider. According to officials at three VAMCs, these protocols apply when the ordering provider is unavailable for a certain number of consecutive days (typically 3 days). Officials told us that if the designee is not available, their facility s protocol requires that VAMC leadership (such as the Chief of Medicine) communicate the results to the veteran. <1.3. Officials from Selected VAMCs Use Referrals to Community Providers and Telecommunications to Link Veterans Newly Diagnosed with HIV to Care for the Third Stage of Screening> Officials from all five VAMCs in our review indicated that providers may refer eligible HIV-positive veterans to care within the community to ensure that treatment occurs in a timely manner. According to officials at two of these VAMCs, these referrals are often made based upon veterans preferences or primary care providers comfort levels in providing HIV care to veterans who are also eligible for community care. An official at another of these VAMCs told us that eligible veterans who live further distances from the VAMC may ask to be referred to community care. According to officials from multiple VAMCs in our review, providers may also use telecommunications to provide HIV care to veterans. For example, officials from two VAMCs told us that their facilities offer telehealth consultations with an infectious disease provider to veterans who live outside the city in which the VAMC is located or who otherwise find it inconvenient to be seen in-person by an infectious disease provider at the facility. Telehealth allows infectious disease providers to care for veterans who would otherwise receive HIV care from primary care providers or in the community. Officials at another VAMC reported that infectious disease providers are available via cell phone or Skype (software that can be used to make one-to-one or group voice or video- based calls from a cell phone or computer) to assist primary care providers who assume responsibility for veterans HIV care. <2. VHA Facilitates Monitoring of the Provision of HIV Tests, but Has Not Completed All Steps to Enable Monitoring of Subsequent Stages of HIV Screening VHA Collects and Disseminates Data for VAMCs to Use to Monitor the Offering of HIV Tests to Veterans> VHA facilitates monitoring of the first stage of HIV screening by providing information to VAMCs that include data on the number of veterans who have been tested for the viral infection. While VHA does not collect data on the timeliness with which HIV test results are communicated to veterans, data resulting from VHA s monitoring of the communication of other test results may indicate whether veterans are informed of HIV test results within recommended time frames. However, HIV lead clinicians may not be aware that they have access to this information. 
VHA does not currently monitor whether veterans who test positive for HIV are linked to care within recommended time frames; however, VHA has taken steps to collect and disseminate data that can be used to monitor this stage of screening. According to HHRC officials, the office collects and disseminates annual and biannual data to each VAMC s HIV lead clinician on the offering of HIV tests to veterans. (See table 2 for information related to VHA s monitoring activities.) This includes data on (1) the number of veterans who are eligible to receive one-time HIV tests, as well as the number of eligible veterans who were tested, for each VAMC and VISN; and (2) the number of veterans who are prescribed PrEP who are tested for HIV every 3 months to document that they are still HIV negative as recommended by the CDC. HHRC officials told us that they share the one-time testing rate data with HIV lead clinicians on an annual basis, and that these clinicians can use the data to calculate their VAMCs one- time HIV test rates and, subsequently, compare their rates regionally or to VAMCs that offer the same complexity of services. According to HHRC officials, they upload these data to an internal data sharing website and notify HIV lead clinicians that the data are available via email and during regularly scheduled conference calls that facilitate the discussion of issues related to HIV screening. HHRC officials also told us that VHA uses the same method to share with HIV lead clinicians on a biannual basis data on the HIV test rate for veterans who are prescribed PrEP. VISNs and VAMCs have used VHA s data on the offering of HIV tests to veterans to support local efforts to improve HIV screening. For example, HHRC officials told us that VISNs have used data on the number of veterans who are eligible to receive one-time HIV tests, and who were tested, to support applications for VHA-sponsored grants intended to improve the offering of such tests to homeless veterans. Officials from four VAMCs in our review told us that they have used these data to identify the need to increase testing, which led to the implementation of new strategies, such as clinical reminders that prompt providers to offer one-time and risk-based HIV tests to veterans. While VHA recently monitored the documentation of verbal consent by collecting data that VAMCs used to make related improvements, such monitoring is no longer needed due to a change in VHA policy. Between fiscal years 2013 and 2016, NCEHC (the VHA office responsible for VHA s policy on informed consent) oversaw a system-wide review that led to improvements in the number of VAMC providers that documented in veterans medical records that they obtained veterans verbal consent to be tested for HIV. In 2019, VHA amended its policy and no longer requires providers to document that they obtained verbal consent. In addition, VHA recently monitored VAMC laboratory protocols for HIV testing, but HHRC noted that this monitoring is no longer needed, because the recommended testing technologies have been implemented. In 2018, VHA conducted a one-time review of VAMC laboratory protocols to ensure that CDC recommendations for the use of HIV tests were followed at each VAMC, such as recommendations related to the type of HIV test that providers should order for diagnostic purposes. VAMCs were required to submit verification to VHA showing that their laboratories had implemented the most current CDC-recommended testing technologies. 
According to HHRC officials, this provided assurance that providers were ordering the most current CDC-recommended HIV test and that laboratories were following the CDC-recommended HIV testing algorithm. VHA s Director of Pathology and Laboratory Medicine Service reviewed the verification submitted by each VAMC, and VAMCs were required to develop action plans to address any identified deficiencies. As of August 7, 2018, VHA found that all VAMCs were following CDC s recommendations related to the availability and use of HIV tests. According to HHRC officials, VHA does not need to continue its monitoring effort in this area, since the implementation of recommended testing technologies by VAMCs was a one-time effort. Further, officials from the five VAMCs in our review told us that the VAMCs were using the CDC-recommended HIV test, and nothing inconsistent came to our attention during our medical records review. <2.1. VHA Makes Data on the Timeliness of Communicating Test Results Available, but Has Not Ensured that VAMC Staff Are Aware They Have Access to It> OPC and RAPID (the VHA offices responsible for VHA s policy on the communication of test results and related performance measurement) make data available to VAMC staff that may indicate the timeliness with which HIV test results are communicated to veterans. OPC and RAPID publish a quarterly report on the timeliness with which results from the eight tests that are included in its review of veterans medical records are communicated to veterans at each VAMC. While HIV tests are not one of the eight tests included in the OPC and RAPID review, VAMC officials we interviewed told us that VAMC procedures for communicating results are generally the same for all tests. OPC officials stated that VAMC officials could use the data to identify needed performance improvement efforts related to the communication of test results. OPC officials added that while it is not the primary goal of the OPC and RAPID review, data on the eight tests included in the review may serve as a sample, providing some indication as to whether VAMC procedures promote the timely communication of results of any test to veterans. Although OPC and RAPID publish a quarterly report on the timeliness of communicating test results, HIV lead clinicians may not be aware they have access to this information. OPC and RAPID officials told us that VAMC staff responsible for serving as liaisons for OPC s medical records review are notified by RAPID via email of the report s availability. RAPID officials added that any VAMC staff may opt in to the email group that officials use to notify liaisons that the timeliness data have been published. HIV lead clinicians we interviewed reported that they did not know that they can opt in to this email group. According to RAPID officials, the main mechanism for making VAMC staff aware that they can join this email group is through their VAMC colleagues. VHA has not taken steps to more systematically communicate the availability of these timeliness data to all VAMC staff (including HIV lead clinicians). Standards for internal control in the federal government require that agencies communicate necessary information throughout all agency reporting lines to achieve the agencies objectives and respond to identified risk. VHA policy requires that HIV lead clinicians serve as VAMC points of contact on HIV testing, diagnosis, and care, which may include monitoring HIV care. 
An HIV lead clinician we interviewed also noted that these data could be used as an indicator as to whether HIV test results are being communicated to veterans in a timely manner. Further, having these data could help staff determine if delays in communicating test results pose risks to the timely completion of HIV screening, such as whether veterans who test positive for HIV are linked to care for their diagnosis as expeditiously as possible. If there are unnecessary delays in communicating positive HIV test results to veterans, providers may be at risk of delaying the start of needed HIV treatment. According to VHA policy, and confirmed by RAPID officials, the timely communication of test results to veterans is essential for high quality care, and the timely follow-up of positive test results may help veterans achieve favorable health outcomes. <2.2. VHA Does Not Collect or Disseminate Data to Monitor VAMCs' Timeliness in Linking Veterans Who Test Positive for HIV to Care, but Has Taken Steps to Do So> Linking Veterans to Preventive Care for Human Immunodeficiency Virus (HIV) In addition to linking veterans who test positive for HIV to care for their diagnosis, Department of Veterans Affairs (VA) medical centers link veterans who test negative for HIV to preventive care. The use of preventive medication, or pre-exposure prophylaxis (PrEP), reduces the risk of acquiring HIV in adults. Officials from VA's HIV, Hepatitis, and Related Conditions Programs (HHRC) told us that they implemented a PrEP quality improvement initiative in September 2016, which focuses on increasing the use of PrEP among veterans who live in areas of the country with a higher prevalence of HIV compared to the national average. HHRC officials told us that the initiative focuses on providing high quality care to veterans in accordance with current recommendations on the use of PrEP. For example, the Centers for Disease Control and Prevention (CDC) has recommended that providers prescribe PrEP medications to individuals who test negative for HIV within one week of documenting the test result. HHRC officials told us that they monitor the time frames in which veterans are prescribed PrEP medication by collecting data on a biannual basis on the date on which veterans' blood was drawn for the purposes of conducting an HIV test and the date on which veterans were prescribed the medication. HHRC officials told us that these data are disseminated to VA medical center staff responsible for improving HIV screening to improve the appropriate use of PrEP as needed. HHRC officials told us that a new data tool serves as the source of information to determine whether veterans are linked to care specifically for their HIV diagnosis within the recommended time frame. According to officials, the data tool was implemented in October 2018, and as of early November 2019, they were in the process of building the capacity to generate a report based on these data showing the time frames in which veterans are linked to HIV care. HHRC officials initially indicated that they expected to begin monitoring linkage to HIV care in August or September 2019, but they were not able to do so for various reasons. According to HHRC officials, the process of building the new data tool and the capacity to generate a report has been lengthy due to competing priorities related to VHA's ongoing development of a new EHR system. These officials added that they have been simultaneously focused on implementing required improvements in the diagnosis and treatment of veterans with Hepatitis C.
According to officials, the time frame to develop the new data tool and report has been extended due to these competing priorities. HHRC officials told us that once monitoring begins, they will report on the number of veterans who are linked to HIV care within the recommended 30-day time frame for each VAMC on an annual basis, retroactive to fiscal year 2018. According to HHRC officials, the data will be disseminated by publishing them on an internal data sharing website that each VAMC s HIV lead clinician can access. The officials explained that these clinicians will be notified when the data have been published via email and during regularly scheduled conference calls with HHRC. HHRC officials also told us that the data may be used to inform any needed improvements in the timeliness of linking newly diagnosed veterans to HIV care. Standards for internal control in the federal government require that agencies perform ongoing monitoring activities and evaluate results to remediate any identified deficiencies on a timely basis. VHA policy requires that HHRC develop data reports for monitoring the quality of HIV care that are to be disseminated to the VISNs or VAMCs, among other entities and individuals, and lead VHA efforts toward meeting the NHAS s recommendations. However, until HHRC disseminates data on the timeliness with which veterans are linked to HIV care, VAMCs are limited in their ability to identify any delays and take the necessary steps to ensure that this occurs within recommended time frames, now and in the future. In our nongeneralizable review of the 38 medical records for veterans who tested positive for HIV, we observed some instances of delay. Specifically, we found that six veterans were first seen by an infectious disease provider, who typically treats HIV, more than 30 days after being informed of their positive test results. We were unable to identify a documented explanation in the six medical records for why linkages to care exceeded 30 days. Delays in linking veterans to HIV care can increase the risk that veterans are not promptly beginning treatment to help achieve favorable health outcomes. According to the 2015 NHAS, evidence shows that earlier treatment reduces the risk that an individual with HIV will develop AIDS or transmit the virus to others. <3. Conclusions> Veterans who are voluntarily tested for HIV at VAMCs, informed of positive HIV test results in a timely manner, and expeditiously linked to care before their infections progress further have improved health outcomes, a longer life expectancy, and a reduced risk of transmitting the virus to, for example, a sexual partner. VHA has monitored the provision of HIV tests to veterans and reported related improvements resulting from these monitoring efforts, ensuring that, for example, veterans are receiving the most current CDC-recommended test. However, VHA s dissemination of data on the time frames in which test results are communicated to veterans and monitoring of the time frames in which HIV-positive veterans are linked to care specific to their diagnosis needs improvement. <4. Recommendations for Executive Action> We are making the following two recommendations to VA: The Under Secretary for Health should take steps to improve communication to VAMC staff (including HIV lead clinicians) about the availability of data on the time frames in which test results are communicated to veterans. 
(Recommendation 1) The Under Secretary for Health should disseminate data to HIV lead clinicians on the extent to which veterans who test positive for HIV are linked to care within recommended time frames. (Recommendation 2) <5. Agency Comments> We provided a draft of this report to VA for review and comment. In its written comments, which are reproduced in appendix I, VA concurred with our recommendations. VA stated that it will communicate to VAMC staff, including HIV lead clinicians, how providers may be notified when the data on the time frames in which test results are communicated to veterans have been published. Further, VA stated that HIV test results will be added to the OPC and RAPID quarterly review of such time frames beginning in the second quarter of fiscal year 2020. VA also indicated that as of December 2019, the agency began annual monitoring of whether veterans are linked to HIV care within recommended time frames and will notify HIV lead clinicians of the availability of the data during conference calls scheduled to take place in January and March 2020. We are sending copies of this report to the appropriate congressional committees and the Secretary of Veterans Affairs. In addition, this report is available at no charge on the GAO website at https://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at DraperD@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Comments from the Department of Veterans Affairs Appendix II: GAO Contact and Staff Acknowledgments <6. GAO Contact> <7. Staff Acknowledgments> In addition to the contact named above, Hernán Bozzolo (Assistant Director), Karen Belli (Analyst-in-Charge), Hannah Grow, Cathy Hamann, and Tatyana Walker made key contributions to this report. Also contributing were Jacquelyn Hamilton, Diona Martyn, and Vikki Porter.
Why GAO Did This Study
VHA is the largest single provider of medical care to HIV infected individuals in the nation. In 2018, VAMCs tested approximately 240,000 veterans for HIV and provided HIV care to over 31,000 veterans. Early diagnosis and timely treatment is important for achieving favorable health outcomes and reducing the risk of transmitting the virus to others.
The accompanying Joint Explanatory Statement for the Consolidated Appropriations Act, 2018 included a provision for GAO to examine how VAMCs have implemented VHA's HIV screening policy. This report examines (1) approaches that selected VAMCs use to facilitate HIV screening, and (2) the extent to which VHA monitors HIV screening. GAO analyzed VHA documents, including VHA directives and a nongeneralizable sample of 103 veterans' medical records, to understand how providers made decisions and documented actions related to HIV screening. GAO also interviewed VHA and VAMC officials, the latter from five facilities selected based on factors such as the range of HIV prevalence rates.
What GAO Found
Officials from five selected Department of Veterans Affairs (VA) medical centers (VAMC) reported using various approaches to facilitate human immunodeficiency virus (HIV) screening, which involves three stages. For example, for the first stage of HIV screening (providing HIV tests to consenting veterans), officials told GAO that VAMCs use information technology solutions, such as clinical reminders that prompt providers to offer HIV tests to veterans who have not been tested. These clinical reminders can also prompt providers to offer an HIV test on a repeated, rather than a one-time, basis to veterans with known higher risk factors for acquiring HIV.
The Veterans Health Administration (VHA) monitors the first stage of HIV screening by collecting and disseminating data that VAMCs can use to calculate and, if necessary, improve facility HIV testing rates. VHA also collects data on the time frames in which results for eight types of tests are communicated to veterans; these data could indicate how timely test results are being communicated generally (stage two of HIV screening). However, VHA has not effectively communicated the availability of these data to HIV lead clinicians. In addition, VHA does not currently monitor whether VAMCs link veterans who test positive for HIV to care in a timely manner (stage three of HIV screening). VHA officials indicated that they are in the process of building the capacity to collect and disseminate to HIV lead clinicians data on the number of veterans at each VAMC who are linked to HIV care within 30 days, as recommended. However, the time frames for completing these efforts have been extended due to competing priorities, such as implementing required improvements in the diagnosis and treatment of veterans with Hepatitis C. Until VHA improves VAMC staff's access to, or provides them with, these data, it increases its risk that HIV-positive veterans do not receive timely treatment. Such treatment can improve veterans' health outcomes and prevent the transmission of the virus to others.
What GAO Recommends
VA should (1) improve communication regarding the availability of data on the timeliness with which test results are communicated to veterans, and (2) disseminate data to HIV lead clinicians on the timeliness with which veterans are linked to HIV care. VA concurred with GAO's recommendations.
gao_GAO-20-428 | gao_GAO-20-428_0 | <1. Background Federal Agencies and Programs that Assist Homeless Veterans> VA, HUD, and DOL are the federal agencies that provide programs specifically aimed at assisting homeless veterans. They are among the 19 federal member agencies of USICH an independent establishment within the Executive Branch charged with coordinating the federal response to homelessness and creating a national partnership at every level of government and with the private sector to reduce and end homelessness nationally. USICH, VA, and HUD have jointly established criteria and benchmarks to guide communities that are taking action toward being certified as having ended veteran homelessness. USICH stated that an end to homelessness does not mean that no one will ever experience a housing crisis again, but that every community will have a systematic response in place to prevent homelessness whenever possible and ensure that homelessness is a rare, brief, and non-recurring experience. VA, HUD, and USICH coordinate their efforts towards this goal through a working group, Solving Veterans Homelessness as One, which was established in 2012. We identified 16 federal programs that target services specifically to veterans who are homeless or are at risk of becoming homeless. As shown in table 1, the programs provide permanent and transitional housing, health care, employment assistance, and supportive services, such as assistance with rent, utilities, and moving costs. Twelve of the programs are administered solely by VA, two are jointly administered by HUD and VA, and two are administered by DOL. VA s and HUD s homelessness programs follow the principles of Housing First, which is intended to provide housing without any preconditions and barriers to entry such as sobriety, treatment, or service participation requirements. The largest of these programs were HUD-VASH, GPD, and SSVF. In fiscal year 2019, HUD-VASH reported over $1 billion in obligations; GPD reported obligations of over $234 million; and SSVF reported $380 million in obligations. <2. Role of Local VAMCs, Service Providers, and Other Entities> Homeless veterans can access program services in several ways, including through: VA Medical Centers (VAMCs). Program services can be provided to homeless and at-risk veterans at their local VAMCs. Service providers. Veterans may also obtain services through local, state, or nonprofit organizations in the community, some of which receive grants from federal agencies to fund program services. Public housing agencies (PHAs). Housing vouchers are administered to homeless veterans by PHAs, which are HUD-funded city, county, and state agencies. VAMCs, service providers, and PHAs may coordinate through Continuums of Care (CoC), which are composed of stakeholders in a geographical area that, among other things, coordinate to provide homeless services, apply for grants, set local priorities, and collect homelessness data for all homeless populations. Each year, HUD awards CoC program funding competitively to nonprofit organizations, states, and other local recipients. The CoC is responsible for its operation and developing and implementing its plan and strategies to prevent and end homelessness. Additionally, the CoC must choose an entity to operate the local information system used to collect client-level data and data on the provision of housing and services to homeless individuals and families and those at risk of homelessness (referred to as the Homeless Management Information System). 
The CoC also designates an entity that prepares and submits the CoC program application for HUD funding (referred to as the collaborative applicant). HUD requires each CoC to establish and operate a centralized or coordinated assessment system (referred to as Coordinated Entry). This system may include implementing a no wrong door approach in which a homeless family or individual can show up at any homeless housing and service provider in a geographic area and get screened for services using the same assessment tool (see figure 1). The goal of Coordinated Entry is to ensure that people experiencing a housing crisis within a CoC are quickly and consistently assessed and referred for services. HUD officials stated that Coordinated Entry is a process that was first developed by some CoCs based on best practices. In 2017, HUD adopted and codified requirements for all CoCs to participate in Coordinated Entry. That same year, VA published requirements for VAMCs to participate in Coordinated Entry. VA and Selected Service Providers Reported Facing Challenges Related to Meeting Veterans Needs, Limited VA Staffing, and Other Factors Meeting Veterans Needs and Other Factors Create Challenges, According to VA and Service Providers According to VAMC staff and service providers we interviewed, they faced challenges serving homeless veterans and those at risk of becoming homeless due, in part, to the additional level of service and support that some veterans need. For example: Substance use and mental illness. Substance use disorders and mental health issues such as post-traumatic stress disorder (PTSD) and depression, are among the most complex issues many homeless veterans face, according to USICH. In 2018, USICH reported that 28 percent of homeless veterans that receive VA-provided health care have been diagnosed with depression. Thirteen percent have been diagnosed with PTSD. Further, 19 percent struggle with alcohol abuse and 20 percent with drug abuse. VAMC staff and service providers told us that addressing the complex nature of these conditions is often a challenge for them. For example, one SSVF provider told us that it is challenging to find housing for veterans with mental health or substance use disorders; further, HVCES staff at one VAMC told us that employment programs for the general population may not be suitable for clients with these disorders. HUD-VASH staff at one VAMC told us that there are not enough mental health providers in the VA system. Overall, staff from three VAMCs from the GPD and HVCES programs, five GPD service providers, and three SSVF service providers cited challenges related to substance use and mental illness. Aging homeless veterans. In 2018, USICH reported that the number of homeless veterans who were 62 and older increased by 54.3 percent between 2009 and 2016. VA officials told us that this trend is expected to continue and that this population has increased, in part, because the services that VA offers are not targeted to aging veterans. According to VA officials, there is a similar aging trend in the general veteran population. HUD-VASH staff at three VAMCs, HVCES staff at one VAMC, and GPD staff at two VAMCs told us that aging veterans require a higher level of care than what existing programs may be able to fully address. Some of these veterans may suffer from ambulatory and cognitive issues and have difficulties living alone but cannot afford an assisted living arrangement. 
HUD-VASH staff at one VAMC we visited told us the VAMC was able to hire five occupational therapists to assist aging clients with their specialized needs. Further, VA officials told us that HUD-VASH is collaborating with VA s Geriatrics and Extended Care programs office to explore how aging homeless veterans can be served through other programs, such as the Medical Foster Home and Community Residential Care programs. This collaboration would allow VA to provide funding for services while the HUD-VASH voucher would pay for housing costs. VA officials also told us that they are working to market HUD-VASH to developers and funders to increase the development of project-based HUD-VASH housing. This would give the program a dedicated housing stock and better serve subpopulations of veterans, such as veterans who are elderly or suffer from mental illness. Resistance to program participation. According to HCHV staff at two VAMCs, HVCES staff at three VAMCs, two PHAs that administer the HUD-VASH voucher program, and five service providers (two GPD, two HVRP, and one SSVF), a key challenge in addressing the needs of homeless veterans is their resistance to participating in a program, particularly if it places restrictions or requirements on them. This issue makes it challenging for outreach and treatment teams to deliver services. In addition to the challenges cited above, veterans must meet certain eligibility requirements to participate in homeless assistance programs which if not met can present challenges to VAMC staff and service providers when providing veterans with services. For example, veterans must meet certain criminal history requirements to be eligible for HUD- VASH. HVCES staff from one VAMC, DCHV staff from two VAMCs, HUD-VASH staff from another VAMC, and three service providers (one GPD and two SSVF) also told us that it is challenging to find housing and employment for homeless veterans with legal or criminal problems and landlords may be resistant to working with them. One PHA that works with HUD-VASH and one SSVF service provider told us that a veteran s ineligibility for VA health care services also presents a challenge to them. This is because a number of VA homeless programs require a veteran to be eligible for VA health care benefits as a condition to enrollment. Generally, veterans are eligible to receive VA health care benefits if they served in the active military, naval, or air service and were discharged under conditions other than dishonorable. Therefore, according to VA officials, veterans with dishonorable discharges cannot access VA homeless assistance programs and veterans with other-than- honorable discharges may have limited access to them. In addition to meeting the discharge status requirement, a person must also meet the definition of veteran to be eligible for VA health services. However, the definition of veteran depends on factors including length of service and if the individual served on active duty or was part of the National Guard or the Reserves. Therefore, even if an individual has the appropriate discharge status to be eligible, they may not meet other eligibility requirements for VA health benefits. One service provider told us that they have to work on alternative solutions to help veterans that do not meet eligibility requirements. 
In 2017, VA in partnership with HUD implemented a flexibility within the HUD-VASH program the HUD-VASH Continuum which according to VA officials, will permit PHAs to make up to 15 percent of their total HUD- VASH allocation available to veterans who are ineligible for VA health care services, with some exceptions. According to VA officials, this expands the availability of permanent supportive housing to service members who are not eligible for VA health care. In addition, the House and Senate are considering bills to expand HUD-VASH eligibility. DOL has also implemented statutory changes to HVRP eligibility requirements to provide veterans with better access to job training programs. The program s eligibility requirements have been broadened to include veterans participating in the HUD-VASH, Tribal HUD-VASH, and SSVF programs, and other veterans that were not previously eligible. <3. Broader Challenges, Such as Limited VA Staffing and Affordable Housing, Affected Assistance, According to VA and Service Providers> VAMC staff and services providers cited broader challenges not specific to veterans or the assistance programs themselves as impacting their ability to provide assistance to homeless veterans. Those challenges include VA staffing issues and external factors, such as the lack of affordable housing and limited transportation options. VA staffing shortages. VA officials, HVCES staff at three VAMCs, and HUD-VASH and DCHV staff at two VAMCs told us they faced difficulties with recruitment and retention, which have led to persistent understaffing. For example, staff at four VAMCs for the HUD-VASH, DCHV, and HVCES programs told us that the hiring and onboarding process can often take a long time, and by the time an offer is finalized, qualified applicants have moved on to other jobs. DCHV staff at two VAMCs and HUD-VASH staff at three VAMCs cited understaffed human resources offices and a taxing approval process as contributing factors. HUD-VASH staff from one VAMC told us that it is difficult to fill some positions because the outreach work requires extensive travel within large geographic areas. Further, they told us that in high-cost areas, the VA s local pay scale is not high enough to attract new recruits for case manager positions. HUD-VASH staff at one VAMC indicated that they have not been fully staffed for several years. Limited staffing may limit the number of veterans who can be served, according to VA officials. For example, DCHV program staff at one VAMC told us that they had to close 83 beds because there is not enough staff to keep them operational. One PHA working with the HUD-VASH program told us that VA s staffing challenges create a bottleneck of services to clients while staff at one VAMC working with SSVF told us that high turnover of program staff is disruptive for clients. Overall, staffing shortages were cited as a challenge by VAMC staff for several programs: HUD-VASH, DCHV, HCHV, GPD, and HVCES. VAMC staff we interviewed have taken some steps to limit the impact of staffing issues. For example, at one VAMC, staff from the HCHV and GPD programs have conducted cross-training so they can back each other up when staffing shortages occur. Two other VAMCs have brought in staff from other locations to help with the workload or have developed an action plan to address employee burnout. Our past work has highlighted VA s staffing challenges, including recruiting and retaining clinical staff. 
For example, we previously reported that difficulties in recruiting and retaining skilled health care providers and human resource staff at VAMCs make it difficult to meet the health care needs of more than 9 million veterans. We have also previously reported that, in addition to high attrition and increased workload, human capital shortfalls can lead to burnout among the staff whose job it is to implement these programs. Housing cost and availability. Limited and high cost housing exacerbate the other challenges VAMC staff and service providers identified. For example, HUD-VASH staff at one VAMC told us that even with subsidies, it is difficult for veterans to obtain housing because HUD-VASH vouchers may not be sufficient to cover rent. HUD-VASH staff from one VAMC and one PHA we interviewed told us that because housing costs are rising, and housing in metro areas remains limited, expensive, and competitive, veterans have fewer housing options available to them. Limited housing was cited as a challenge by VAMC staff and service providers for HUD-VASH and SSVF programs in all types of locations urban, suburban, and rural areas. Finding and recruiting landlords is a significant challenge in getting veterans housed, according to HUD officials, HCHV staff at one VAMC, HUD-VASH staff from two VAMCs, two PHAs, and five SSVF service providers. According to HUD-VASH staff from one VAMC and SSVF staff from another VAMC, the demand for housing exceeds supply and landlords have few incentives to accept homeless veterans. HUD-VASH staff from one VAMC, one PHA, and one SSVF service provider also told us that some landlords perceive veterans to be risky because some have criminal records or substance use disorders, and may be reticent to work with them out of fear of incurring damages to their property. Some service providers have taken steps to create incentives for landlords to participate in VA s programs. For example, HUD-VASH staff at one VAMC told us that local providers may partner to cover moving fees for veterans and encourage landlords to accept veterans housing vouchers. One PHA has put together landlord forums and is working to build relationships with landlords in their communities. Another PHA has held housing tours and fairs to bring landlords and clients together. VA has also implemented program changes to help address the lack of affordable housing. For example, the new Shallow Subsidies initiative that became effective in September 2019, allows SSVF service providers to provide very low-income veteran families a rental subsidy for a two-year period without requiring recertification. The two-year period ensures no reduction in subsidy even if a recipient s income situation improves within that time frame and the family is no longer considered very low income. This provides a strong incentive for employment gains because the assistance is not dependent on income during this two-year period. Launched in 2018, VA s Rapid Resolution initiative is another solution that is designed to prevent or resolve homelessness by providing immediate assistance when a veteran enters an emergency shelter system such as by offering landlord mediation and conflict resolution, or connecting the veterans to support networks in other places. According to VA officials, Rapid Resolution is being implemented through VA s SSVF program, in coordination with HUD and USICH. Limited resources (other than staffing). 
HUD-VASH staff at one VAMC told us they are short on equipment like laptops, office supplies, and office space. Additionally, HUD-VASH staff at another VAMC told us they do not have access to government cars for work-related travel. One HVRP service provider told us that case managers do not have enough vehicles to travel long distances or to remote locations to meet clients. To make up for shortages in resources, one SSVF service provider told us that it develops partnerships with local programs to meet the needs of the client. HUD-VASH staff at one VAMC indicated that they have communicated issues of insufficient resources to their leadership, but the issues have not been addressed to date. Resource limitations were cited as a challenge by HUD-VASH staff from two VAMCs, GPD staff from three VAMCs, DCHV staff at two VAMCs, HCHV staff at two VAMCs, and four service providers (two GPD and two HVRP). Transportation limitations. According to VAMC staff and service providers, the lack of transportation for veterans is a significant challenge for some programs. For example, according to DCHV and HVCES staff at one VAMC, HUD-VASH staff at another VAMC, and one GPD and two HVRP service providers, some veterans may not have vehicles or may live in areas with limited public transportation systems. This makes it difficult for the veterans to access resources, go to job interviews, or secure transportation for jobs. Some service providers told us they make alternative arrangements for their clients to help address these issues. For example, one HVRP service provider told us they might drive veterans to interviews or arrange for public transportation. DCHV staff at one VAMC and HVCES staff at another VAMC told us that they work with community partners to provide alternatives like shuttle services and bus passes. Additional challenges related to specific programs we reviewed are discussed in appendix II.
Homeless Assistance Programs for Veterans Overlap in Services, but Address Different Needs
Overlap Exists Among Some Program Services for Homeless Veterans
We reviewed the services provided, eligibility requirements, and population served by the 16 programs that exclusively target homeless veterans to identify duplication and overlap. We determined that there is no duplication among the programs, but identified overlap across some program services. Specifically, we identified 18 main services that are commonly offered across the 16 programs and found that at least six of those services overlap across two or more programs (see figure 2). However, we also found these programs differed in meaningful ways, for example in terms of the different types of homeless veterans served or specialized services or focus. As we have previously reported, fragmentation, overlap, and duplication exist across the government, which can present benefits and challenges. Duplication occurs when two or more programs provide the same services to the same beneficiaries. Overlap occurs when two or more programs offer similar services to similar beneficiaries. As shown in figure 2, 15 of the 16 programs overlap in two or more of the following services that they offer.
Eleven programs provide case management services: HUD-VASH, Tribal HUD-VASH, HCHV, HCRV, H-PACT, SSVF, GPD, DCHV, CWT-TR, HVRP and VJO. Supportive services might include providing meals, counseling, child care, housing assistance, transportation, and other services essential for achieving and maintaining independent living. Six programs provide supportive services: HUD-VASH, Tribal HUD-VASH, GPD, SSVF, HVRP, and Stand Down. Outreach involves directly contacting veterans in need of homeless services and connecting them with housing, health care, and supportive services. Six programs conduct outreach: HCHV, HVRP, SSVF, CRRCs, VJO, and HCRV. Referrals are the most common way for homeless veterans to find out about program services available to them. Referral services include conducting an assessment of the clients needs, connecting them to the appropriate programs, and following up with the clients as well as documenting all referral activities. Six programs provide referral services: HVRP, SSVF, CRRCs, NCCHV, HCRV, and Stand Down. Employment services include help with creating job opportunities for veterans, job searches, interviewing, and other employment assistance. Three programs provide employment-related services: HVCES, CWT-TR, and HVRP. Rental subsidies are offered to veterans through vouchers and grants, which help subsidize rental costs. Three programs offer rental subsidies: HUD-VASH, Tribal HUD-VASH, and SSVF. Although we identified overlap in these services across 15 of the 16 programs, the programs differ in meaningful ways. Specifically, some of these programs serve specific subpopulations of veterans and some provide a specialized service that other programs do not offer. For example, of the 11 programs that offer case management services, one program provides medical care (H-PACT), while others provide services in different areas such as transitional housing (GPD), housing subsidies (HUD-VASH), supportive services (SSVF), preparing veterans for employment (HVRP) and outreach (HCHV). Other programs that offer case management services serve unique subpopulations of homeless veterans such as those with mental health or substance use issues (CWT-TR and DCHV), American Indians and Alaskan Natives living in or near reservations or other Indian areas (Tribal HUD-VASH), or justice- involved veterans in local jails (VJO) and state and federal prisons (HCRV). According to a VA directive, more than one case manager may be involved in care planning and service delivery for veterans with complex needs. In addition, staff at the six VAMC locations we visited told us that clients may be co-enrolled in more than one program and can receive case management services from each of those programs. Figure 3 illustrates how case management may overlap across programs, but each program provides distinct services to the veteran. Similarly, of the three programs that provide employment services, HVCES focuses on establishing partnerships with employers to develop job opportunities for veterans and connect them with community services, while HVRP helps the veteran prepare to pursue and gain meaningful employment. The CWT-TR program, on the other hand, focuses on veterans with more complex issues such as substance use, mental health issues, and challenges in obtaining or sustaining employment that may accompany these conditions. 
Similar meaningful distinctions in subpopulations of beneficiaries and services exist across the programs that provide other types of services to homeless veterans that overlap supportive services, outreach, referrals, and rental subsidies. Additional information on program differences, including information on program beneficiaries and services, can be found in appendix II. <4. Overlap in Program Services Presents Potential Benefits and Challenges> As we previously reported, in some cases it may be appropriate or beneficial for multiple agencies or entities to be involved in the same programmatic or policy area due to the complex nature or magnitude of the effort. Overlapping programs may also facilitate access to services because persons experiencing homelessness are not steered toward one specific point of entry and, in contrast, can access services through several entry points. However, when multiple programs overlap, there is also a risk of program administrators making inefficient use of available resources if they do not coordinate their efforts. For example, according to VA officials, overlap may result in operational costs if the overlapping services are not coordinated well. Table 2 describes some of the potential benefits and challenges of overlap in services for homeless veterans, as identified by agency officials, VAMC staff, and others we interviewed. Effective collaboration among agencies and service providers can help address some of these potential challenges and may help avoid the potential inefficiency that overlapping services may create. VAMC staff and service providers told us that they have taken steps to limit duplication where appropriate. Additionally, they told us that they collaborate and communicate with each other to avoid or mitigate overlap. VA has also issued guidance directed at enhancing coordination between its homeless programs and eliminating or reducing duplication of services, including the following: Veterans Health Administration (VHA) Directive 1110.04, Integrated Case Management Standards of Practice. This guidance states that case management services should be coordinated, collaborative, and veteran-centered throughout the VHA. It also directs case management teams to develop procedures and processes to support cost effective, high quality case management across the VAMC to eliminate duplication of services where appropriate. VHA Handbook 1162.09, Health Care for Homeless Veterans Program. Under the HCHV program, program coordinators are responsible for ensuring coordination of HCHV services with other homeless programs at the VAMC such as GPD, HUD-VASH, DCHV, VJO, HCRV, SSVF, and CRRCs. GPD s Case Management Services Grant Program, Final Rule. This final rule stipulates that the case management grant may not be used for veterans receiving case management from certain other programs to ensure that there is no duplication of case management services. VHA Handbook 1162.01 (1), GPD. This guidance states that GPD liaisons are to ensure the coordination of care for homeless veterans in GPD-funded programs by following a plan that clearly delineates the roles of those responsible for the service provision to reduce duplication of services. VHA Handbook 1101.10 (1), Patient Aligned Care Team Handbook. This guidance directs staff to coordinate care in a manner that avoids unnecessary duplication. 
The following section of this report discusses how federal agencies collaborate more broadly on implementing federal homelessness assistance programs for veterans. Key Federal Efforts Incorporate Many, but Not All, Leading Practices on Collaboration We identified two key collaborative mechanisms that federal agencies use to help address veteran homelessness: (1) the Solving Veterans Homelessness as One (SVHO) working group, which coordinates VA, HUD, and USICH s efforts at the national level, and (2) VA s integration into Coordinated Entry, which seeks to ensure that homelessness services are coordinated at the local level. As shown in table 3 and as discussed in more detail below, both mechanisms follow leading practices for effective interagency collaboration we have identified in prior work, with some exceptions. <5. Solving Veterans Homelessness as One> According to USICH officials, in 2012, USICH convened the SVHO workgroup to coordinate with HUD and VA on key priorities and maximize efforts to end veteran homelessness. SVHO serves as an interagency decision-making body that plans and executes strategic actions through goal setting, policy gap identification, communication, and action. The SVHO working group fully followed all seven leading practices for effective interagency collaboration that we identified in prior work. A discussion of our assessment follows: Defining Outcomes and Monitoring Accountability. Ending veteran homelessness is one of the national goals listed in USICH s Federal Strategic Plan to Prevent and End Homelessness. SVHO s work is organized to support this goal. USICH reports SVHO s efforts in its annual Performance and Accountability Reports. For example, USICH reported that in fiscal year 2019, SVHO s efforts led to supplemental guidance and coaching to help sustain the efforts of communities that had been certified as having ended veteran homelessness. Bridging Organizational Cultures. To operate across agency boundaries, SVHO members hold regular meetings. During these meetings, SVHO members have updated one another on each agency s efforts, discussed strategic objectives, shared program data, and coordinated on technical assistance for service providers. SVHO also held a strategic planning retreat to discuss SVHO s priorities. Clarifying Leadership. SVHO has a Strategic Decision and Coordination Team that serves as the decision-making body and includes leadership from VA, HUD, and USICH. The team s decisions are made by consensus, and the role for facilitating the team rotates every four months among the three agencies. The Strategic Decision and Coordination Team s responsibilities, which include providing strategic guidance on cross-agency issues, providing joint oversight and decision-making, and facilitating the approval of decisions from the individual agencies are outlined in SVHO s charter. Clarifying Roles and Responsibilities. The SVHO charter outlines the roles and responsibilities of the Strategic Decision and Coordination Team and the Support Team, whose responsibilities include responding to priority projects and elevating issues requiring decision and coordination to the Strategic Decision and Coordination Team. Including Relevant Participants. SVHO members (USICH, VA, and HUD) are the relevant participants because they are the agencies centrally involved in implementing veteran homelessness programs. Identifying Resources. USICH, VA, and HUD contribute staff resources to the working group. 
Representatives from each of the agencies attend regular SVHO meetings to ensure continuity, provide the necessary subject matter expertise, and make decisions. SVHO has also developed resources to facilitate the group s meetings, such as agendas to guide discussions. Updating and Monitoring Written Guidance and Agreements. In March 2020, SVHO revised its charter to remove outdated information and to reflect the group s current structure and operations. The revised charter describes the purpose of establishing SVHO as a formal structure for coordination and decision-making (to enable member agencies to execute joint activities necessary for the goal of preventing and ending veteran homelessness), SVHO s structure (the group is comprised of a leadership team and support team with various responsibilities), and operating procedures (which involve holding regular meetings). USICH officials told us it was important to have an updated charter that solidified the commitments of the member agencies to the group. VA officials added that updating the charter would help serve as a reminder of the group s purpose. <6. VA s Integration into Coordinated Entry> Coordinated Entry is a process designed to help communities prioritize people who are most in need of assistance by standardizing the assessment process, defining community-wide prioritization policies, and coordinating referrals, among other things. HUD established minimal requirements for Coordinated Entry in a 2012 Continuum of Care Program Interim Rule. HUD officials said they established additional requirements in 2017 in coordination with other federal agencies, including VA. VA also issued a memo in 2017 stating that VAMCs must be actively engaged in their local Coordinated Entry. Efforts to integrate VAMCs into Coordinated Entry fully followed five of the seven leading practices on effective interagency collaboration and partially followed the other two (Bridging Organizational Cultures and Updating and Monitoring Written Guidance and Agreements). A discussion of our assessment follows: Defining Outcomes and Monitoring Accountability. VA established requirements for the VAMCs as they integrate into Coordinated Entry, which include active engagement with the CoC, involvement with case conferencing, and aligning standardized assessments. VA has a checklist that VAMCs use to assess their compliance with Coordinated Entry requirements. According to VA officials, they monitor VA integration into Coordinated Entry through self- assessment checklists that VAMCs are required to submit monthly through an internal VA system. VAMCs are also required to submit monthly operation plans to track their progress. Bridging Organizational Cultures. As we previously reported, collaborating agencies should establish ways to operate across agency boundaries and address their different cultures. VA requires VAMCs to actively engage with all coordinated entry systems within their catchment area. VA has provided some guidance to help VAMCs operate across organizational boundaries as they integrate into Coordinated Entry, but this guidance is broad in some areas. For example, it instructs VAMCs to collaborate with local CoC leadership to establish a clear process for making and receiving referrals and to share aggregate program data with each of their communities as needed. But the guidance does not describe steps that VAMCs can take to do so. 
In addition, two service providers and staff from two VAMCs told us that it can be challenging to work with multiple CoCs because each has their own processes. Additionally, staff from three VAMCs and one CoC entity told us that staff turnover creates challenges in their coordinated entry systems, including impeding relationship-building among partners. VA s guidance acknowledges that VAMCs may be working with multiple CoCs, but the guidance does not provide any best practices to help address this issue, nor does it expressly address relationship-building in light of staff turnover. Clarifying Leadership. As previously discussed, VA oversees the integration of the VAMCs into Coordinated Entry. Additionally, USICH and HUD officials told us there was an interagency working group on Coordinated Entry, where several agencies, including USICH, VA, and HUD, convened to discuss, among other things, what was happening in the field and barriers to Coordinated Entry implementation across all homeless programs, including those for veterans. HUD officials told us they also worked closely with VA to fully integrate VAMCs into Coordinated Entry. Clarifying Roles and Responsibilities. VA issued guidance that defined VAMCs roles in Coordinated Entry. For example, one or more representatives must be involved in the community planning process and in case conferencing, with sufficient knowledge and decision-making power to actively engage in each activity. Including Relevant Participants. All homeless assistance organizations should be involved in Coordinated Entry, according to HUD guidance. Coordinated Entry includes CoCs, VAMCs, service providers, and public housing agencies, among others. Staff from one VAMC, one service provider, and one CoC entity that we spoke with described their coordinated entry systems as being inclusive of all relevant stakeholders, including veteran homeless service providers. Identifying Resources. VA funded 86 Coordinated Entry Specialist positions through the HCHV program, of which 81 had been filled, as of January 2020, according to VA officials. Staff from two VAMCs and two CoCs told us that these new positions play an important role in VAMCs integration into Coordinated Entry because they serve as a liaison between the CoCs and the VAMCs. Additionally, VA requires that VAMCs dedicate a portion of VA resources (such as HUD-VASH vouchers or VA Homeless Program Residential Treatment beds) for their inclusion into the greater pool of homeless service resources that are accessed by veterans through Coordinated Entry. Updating and Monitoring Written Guidance and Agreements. We previously reported that agencies can strengthen their commitment to working collaboratively by formally documenting their agreements, and that those written agreements are most effective when regularly monitored and updated. As discussed earlier, VA has issued some guidance to help VAMCs integrate into Coordinated Entry. VA has also held webinar trainings and issued some program-specific documents, such as an SSVF Coordinated Entry fact sheet and a frequently asked questions document for HUD-VASH. VA has also provided technical assistance by request, according to agency officials. However, as noted earlier, VA s guidance is broad in some areas and neither provides best practices to help VAMCs working with multiple CoCs, nor expressly addresses relationship-building in light of staff turnover. 
VA officials told us they do not have plans to issue additional guidance on Coordinated Entry because they believe their current guidance provides sufficient direction. However, several interviewees (staff from three VAMCs, one service provider, and one PHA) told us they need additional guidance on Coordinated Entry, specifically about how to better collaborate among partners. For example, staff from one of the VAMCs said that while they understood that implementing Coordinated Entry required some flexibility, it would be beneficial if VA provided common parameters that communities could follow. Further, some VA guidance (such as the frequently asked questions document for HUD-VASH) may not be accessible by all service providers for VA's homeless programs because it is stored on the agency's internal system (the Homeless Programs Hub) or provided via technical assistance only upon request. Staff from two VAMCs stated that VA could better disseminate guidance. Additionally, one service provider and one PHA told us it would be helpful for VA to share best practices on collaboration used in other parts of the country. By providing additional information on how VAMC staff and service providers can collaborate with local partners, such as best practices, and making available guidance readily accessible, VA can help ensure that VAMCs and service providers are able to more effectively collaborate with other local providers to serve homeless veterans.
Selected Programs Reported Meeting Most Targets, but Some Aspects of Performance Measurement Could Be Strengthened
National Data Show Selected Programs Met Most Targets
According to VA officials, since 2011, VA has focused on three primary outcome measures for the homelessness assistance programs we selected for review: 1) placement into permanent housing, 2) employment, and 3) negative exits from programs. DOL developed four critical measures for HVRP, including the placement rate for total enrollment, which tracks the total number of program participants employed in one or more jobs. VA and DOL officials told us they review their performance measures annually and adjust them as needed. National-level performance data for fiscal years 2015 to 2019 show that five of the seven selected programs we reviewed have generally met their performance targets (see table 4). However, two programs, HUD-VASH and DCHV, have not met some of their targets. Specifically, in four of the last five years, HUD-VASH did not meet its targets for percent housed in HUD-VASH housing and percent housed within 90 days. In the last two years (2018 and 2019), HUD-VASH did not meet its targets for negative exits; however, VA had decreased the target for those years (making it more difficult to meet). DCHV did not meet its targets for exits to permanent housing for the last three fiscal years, and negative exits for two of the last five fiscal years. According to VA officials, factors that have affected VAMCs' abilities to meet HUD-VASH performance targets (some of which are challenges identified by local VAMC staff and providers that we have discussed previously) include an insufficient number of case management staff, which has led to fewer veteran admissions into HUD-VASH, and a lack of safe and affordable housing for veterans. VA officials told us that DCHV program outcomes have been affected by factors including discharges to other transitional housing programs (which would not be included under an exit to permanent housing) and limited affordable housing resources.
To help improve program outcomes for HUD-VASH, VA officials told us they are focusing on increasing HUD-VASH voucher utilization, such as by using vouchers for non-Veterans Health Administration-eligible homeless veterans through the HUD-VASH Continuum program and expanding project-based HUD-VASH efforts (discussed previously). To improve DCHV program outcomes, VA officials said they are holding in-depth discussions with DCHV staff to highlight lessons learned from those VAMCs that are meeting performance targets. <7. Performance Measurement Reflected Most Leading Practices, but Data Reliability and Communication Could Be Strengthened> The performance measures used for the selected programs we reviewed reflected most of the attributes of successful performance measures that we identified in prior work (see table 5). VA's measures fully reflected all six of these attributes. DOL's measures fully reflected five attributes and partially reflected one, the reliability attribute. Performance measures that include these attributes are effective in monitoring progress and determining how well programs are achieving their goals. A discussion of our assessment of VA's and DOL's performance measures follows: Clarity. VA's and DOL's policies clearly state the names and descriptions of the performance measures we reviewed. The names and descriptions are also consistent with the methodologies that were used to calculate them. Measurable Target. VA and DOL have established quantifiable, numerical targets for their performance measures, which allows them to compare expected and actual results. VA officials told us they developed the targets for their measures by first obtaining baseline data and then looking at historical and projected performance. HVRP service providers identify their own targets during the annual grant competition process, according to DOL officials. DOL officials told us they provide some parameters, such as the national targets, to help providers develop their individual targets. Objectivity. VA's and DOL's performance measurement policies describe what is expected to be measured (for example, the percent housed and percent employed). They also indicate which specific population is being measured (veterans) and under what timeframes (the relevant reporting period). Baseline and Trend Data. Nearly all the measures have baseline and trend data for the last five fiscal years. The exceptions are measures that have been recently discontinued. Having baseline and trend data allows VA and DOL to monitor changes in program performance. Linkage. DOL's performance measures for HVRP align with one of DOL's agency-wide strategic objectives: to provide veterans with resources and tools to gain and maintain employment. DOL officials told us that information about the measures is communicated to grantees through local officials, who review a data dashboard created by DOL officials at headquarters. VA's performance measures are aligned with VA's agency-wide goal to end veteran homelessness, as outlined in VA's most recent strategic plan. VA officials told us they communicate information about the performance measures to the VAMCs and service providers through scorecards. Reliability. Measures reflect this attribute when they produce the same result under similar conditions. Reliability is increased when verification and validation procedures exist, such as checking performance data for significant errors by formal evaluation or audit.
VA's performance measures fully reflected the reliability attribute; DOL's measures partially reflected it. VA officials told us they ensure data quality through the use of validation processes, error messages, and notifications that appear in real-time as data are entered. Additionally, there are dedicated program offices that work with the VAMCs and service providers to monitor and reconcile data. Finally, VA's policies describe steps that should be taken to review and verify the quality of the data. DOL officials told us they review HVRP performance data quality at different levels in the agency (regional and national) and use a data validation tool to identify potential errors. However, DOL officials acknowledge limitations with data quality, namely the lack of an electronic system to compile the data and the potential for human error when entering data into spreadsheets. Further, HVRP service providers may be unclear about the data quality steps to take because DOL's performance measurement policies provide limited information on data reliability procedures. DOL officials stated that they have conducted webinar training on the data validation tool, but acknowledge that no written policy exists for the data validation process. Without guidance from DOL on the quality control processes that should be applied to performance data, service providers may not understand how to improve data quality, and DOL may not have reasonable assurance that these performance data are the most accurate and reliable available. While VA's measures reflected all the selected attributes of successful performance measures, including communicating linkage, we identified other areas where communication about these measures is not clear. For example, staff from three of the VAMCs we interviewed and two service providers described communication issues related to performance measures for four programs (HUD-VASH, GPD, HVCES, and DCHV). These issues included concerns that VA does not understand the realities on the ground that prevent VAMC staff and service providers from meeting the measures (such as limited housing availability) and VAMC staff being unaware they could use performance scorecards to drill down and learn more about why their performance targets were not met. Additionally, some VAMC staff and service providers we interviewed do not fully understand the measures. For example, DCHV and HCHV staff we interviewed from four VAMCs and three GPD service providers told us they have felt penalized for transitioning veterans from a VA homeless assistance program to another program or to substance abuse or mental health treatment because VA's performance measures count these transitions as negative exits. According to VA officials, however, there are only three instances where participant program exits are counted as negative: 1) when participants are asked to leave for failure to follow rules; 2) when participants leave for failure to comply with program requirements; and 3) when participants leave without telling program staff. VA officials told us they have implemented processes to obtain quarterly feedback from VAMCs and service providers through operation or action plans about the measures, including feedback about not meeting performance targets.
However, HUD-VASH staff from one VAMC said that they have reported their concerns about not having information on how to improve performance to VA leadership and GPD staff from another VAMC and two GPD service providers said they have reported their concerns about how negative exits are measured, but the concerns have not been addressed. Additionally, staff from another VAMC were unaware that VA had a way for them to provide formal feedback about the performance measures, suggesting that VA s feedback process and avenues of communication may lack clarity. We previously reported that improving the communication of performance information among staff and stakeholders can enhance or facilitate the use of performance information by agency managers. Performance information can be used to identify gaps in performance, improve organizational processes, and improve performance. Clearer communication by VA s Homeless Programs Office about performance measurement what performance measures capture and how to obtain and provide feedback would help VAMCs and service providers better understand how their program data are used to measure performance and therefore how to improve performance, which could also help VA better assess program outcomes. <8. Agencies Have Conducted Some Program Evaluations> VA, HUD, and DOL published some annual reports during the last five fiscal years that monitored the performance of some of the selected homelessness assistance programs for veterans we reviewed. In addition, they conducted a limited number of evaluations to assess their overall effectiveness or impact and conducted other studies that examined other aspects of the programs, such as characteristics of program participants. Program evaluations are systematic studies that use research methods to address specific questions about program performance. We identified two program evaluations conducted by or on behalf of HUD and VA that assessed the impact of HUD-VASH. Published in 2016, the Family Options study examined how the effects of three types of programs permanent housing subsidies (such as HUD-VASH vouchers), community-based rapid rehousing, and project-based transitional housing compared with one another and with the usual care available to homeless families. Findings from the Family Options study indicated that giving people experiencing homelessness priority access to deep permanent housing subsidies, such as housing choice vouchers, benefitted program participants by improving housing stability. However, as discussed in the study, heads of households that received permanent housing subsidies experienced a reduction in employment in comparison to participants in other programs. The permanent housing subsidy also cost more than the other programs. The second study was the HUD-VASH Exit study. Published in 2017, the study was part of an effort to improve program effectiveness. It assessed how and why veterans exit the HUD-VASH program, identified obstacles to their obtaining and maintaining housing with a HUD-VASH voucher, described the value of services, and identified barriers to successful collaboration between VA and HUD in administration of the program. Among other things, the study found that the program was successful, as demonstrated by high rates of retention in housing, and that relationships with community partners and the ability to connect veterans to community resources contributed to successful outcomes. 
While the several other studies and reports we identified did not assess the impact of programs, some did analyze program performance or outcomes (for example, the agencies' annual performance plans and reports), and others assessed specific aspects of the programs (for example, factors associated with exiting homelessness programs and characteristics of program participants). VA officials noted that resource limitations constrain their ability to conduct impact evaluations. However, they stated that in the future, they plan to evaluate new programs and models, such as the SSVF's Rapid Resolution program (discussed previously). DOL officials told us they have commissioned an impact evaluation for HVRP, which is scheduled to be completed in 2022. The study is assessing the effectiveness of the HVRP program on improving homeless veterans' employment outcomes and will build knowledge about program models, including variations. We found that HUD and DOL have developed plans outlining the evaluations they plan to conduct and the steps they used to create their plans, but VA did not. VA's National Center on Homelessness Among Veterans, which conducts research and assesses the effectiveness of VA's homelessness programs, has an evaluation agenda listed on its website that describes the Center's planned studies, but not the steps taken to develop the agenda and prioritize what studies to conduct. HUD and DOL have also developed policies describing the steps the agencies take to ensure evaluation quality and rigor. VA's National Center on Homelessness Among Veterans, on the other hand, does not have written policies on evaluation quality. VA officials told us they ensure the quality and rigor of the Center's work by submitting study results for publication through a peer-reviewed standard scientific protocol, consistent with other VA research, but had not yet developed formal written policies, as their processes are well known in the Center. However, the Foundations for Evidence-Based Policymaking Act of 2018, enacted in January 2019, will now require VA and other agencies to, among other things, designate an evaluation officer who is to establish and implement an agency evaluation policy and assess the coverage, quality, methods, consistency, effectiveness, independence, and balance of the portfolio of evaluations, policy research, and ongoing evaluation activities of the agency. The Act requires agencies to develop written annual evaluation plans (which discuss steps taken to develop the plan, such as how studies were prioritized) to be submitted with their annual performance plan. In June and July 2019, the Office of Management and Budget released its initial guidance on implementing the Act, and additional guidance is forthcoming. The Act also includes provisions for GAO to conduct studies to review agency implementation efforts.
Conclusions
VA, HUD, and USICH have taken significant steps to ensure effective collaboration between the agencies and among local service providers when addressing veteran homelessness. However, VA can help local agency staff and service providers better collaborate by fully incorporating leading practices for interagency collaboration. More specific and accessible information on how to collaborate with partners through Coordinated Entry, including on key activities such as making referrals and sharing data, could position local VA staff and service providers to better aid homeless veterans with services at the local level.
Opportunities also exist for the agencies to improve some performance measurement procedures. Documenting its data quality processes can help give DOL reasonable assurance that these performance data are the most accurate and reliable available. Additionally, providing clearer communication about performance measurement (what the performance measures capture and how to obtain and provide feedback) can help VA ensure that VAMCs and service providers have a better understanding of how their program data are used in measuring performance (and how to improve performance), which may also help VA better assess program outcomes.
Recommendations for Executive Action
We are making a total of three recommendations, two to VA and one to DOL. Specifically: VA's Under Secretary for Health should provide additional information, such as best practices, about how VA medical centers and service providers participating in Coordinated Entry can collaborate with local partners on key activities (for example, making referrals and sharing data) and ensure that this information and other resources are accessible to VA medical center staff and service providers. (Recommendation 1) The Assistant Secretary for DOL's Veterans Employment and Training Service should document its data quality validation processes for performance data for the Homeless Veterans Reintegration Program and disseminate these processes to service providers. (Recommendation 2) VA's Under Secretary for Health should clearly communicate with local VA staff and service providers about how it measures performance and how to obtain and provide feedback about performance measures. (Recommendation 3)
Agency Comments and Our Evaluation
We provided a draft of this report to DOL, HUD, USICH, and VA for review and comment. DOL and VA provided written comments, which are reproduced in appendixes III and IV, respectively. HUD and VA provided technical comments, which we incorporated as appropriate. A USICH official stated that USICH did not have concerns with the proposed recommendations and had no additional comments on the draft. In its comments, DOL neither agreed nor disagreed with our recommendation that it document and disseminate its data quality validation processes for performance data for HVRP (Recommendation 2). DOL stated that it agreed with the importance of data quality validation processes and noted that it uses a data validation tool (discussed earlier in our report). In addition, DOL provided new information in its comments on the draft report, stating that the agency released a user manual and training video for field staff and grantees on the validation tool and provided a hyperlink to additional information, including the user manual. While the user manual outlines the steps for downloading the validation tool and how to run validation tests, it does not describe what validation tests are run or the data quality reviews that DOL officials told us occurred at the regional and national level, as discussed earlier in our report. Therefore, we maintain our recommendation that DOL document all of its data quality validation processes for HVRP performance data and disseminate them to service providers to give the agency reasonable assurance that its performance data are the most accurate and reliable available.
VA agreed with our recommendations in its written comments (Recommendations 1 and 3) and outlined actions it plans to take to address them, including: Providing additional information, such as successful strategies, about how VAMCs and service providers participating in Coordinated Entry can collaborate with local partners on key activities and enhancing communication through monthly calls on Coordinated Entry collaboration, including case conferencing, streamlined referral processes, and data sharing that will be recorded and accessible any time by staff. Clearly communicating with local VA staff and service providers about how it measures performance and how to obtain and provide feedback about performance measures. VA's target completion date for these actions is May 2021. In addition, the draft report we originally sent the agencies included recommendations to VA, HUD, and USICH to revise their SVHO working group charter. However, the agencies informed GAO that they had issued a revised charter in late March, and VA and HUD provided a copy of the final charter. Based on our review of the charter, we revised our discussion of the charter in the report and removed the recommendations. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Veterans Affairs, the Secretary of the Department of Housing and Urban Development, the Secretary of the Department of Labor, the Executive Director of the U.S. Interagency Council on Homelessness, and other interested parties. In addition, this report will be available at no charge on GAO's website at https://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.
Appendix I: Objectives, Scope, and Methodology
This report focuses on federal programs that provide services to veterans that are experiencing homelessness or are at risk of being homeless and their dependents. Our report (1) describes the challenges agencies and service providers reported experiencing in implementing selected programs that assist homeless veterans; (2) assesses the extent, if any, of overlap and duplication among programs; (3) evaluates how well federal agencies collaborate to address veteran homelessness; and (4) reviews what is known about the performance of selected programs. We identified a total of 16 programs that specifically target homeless veterans by reviewing agency reports, guidance, and other documentation and past GAO and Congressional Research Service reports. From these 16 programs, we selected 7 that we focused on for our first objective on program challenges and our fourth objective on program performance: Housing and Urban Development-Veterans Affairs Supportive Housing (HUD-VASH); Grant and Per Diem (GPD); Supportive Services for Veteran Families (SSVF); Health Care for Homeless Veterans (HCHV); Domiciliary Care for Homeless Veterans (DCHV); Homeless Veteran Community Employment Services (HVCES); and the Homeless Veterans Reintegration Program (HVRP). We selected these programs based on size (largest programs based on funding and the number of veterans served) and services offered (a mix of programs addressing a variety of needs). The results of our review of these programs are not generalizable.
For all objectives, we selected and interviewed representatives from the following national advocacy organizations for homeless veterans and other knowledgeable groups to obtain subject matter context: the National Alliance to End Homelessness; the National Coalition for the Homeless; the National Coalition for Homeless Veterans; and American Legion. We judgmentally selected these groups based on their knowledge about homeless veteran policy issues, their ability to share perspectives on a variety of homeless veterans subpopulations, and their knowledge about federal homelessness grants. We also interviewed officials from the Department of Veterans Affairs (VA), Department of Housing and Urban Development (HUD), Department of Labor (DOL), and the U.S. Interagency Council on Homelessness (USICH). Additionally, we conducted semi-structured interviews with staff from local VA medical centers (VAMCs) and service providers implementing the selected programs we reviewed; public housing agencies (PHAs) that administer HUD-VASH vouchers; and Continuum of Care (CoC) entities across different locations. Specifically, we interviewed staff from six VAMCs (staff for the HUD-VASH, GPD, SSVF, HCHV, HVCES, and DCHV programs); six CoC entities; six PHAs; and 23 service providers (eight GPD providers, seven SSVF providers, two HVRP providers, two providers that were HVRP, SSVF, and GPD grantees, two providers that were HVRP and GPD grantees, and two providers that were HVRP and SSVF grantees). The results of these interviews are not generalizable. The locations where we conducted these interviews were: Atlanta, Georgia; Kansas City, Missouri; Long Island, New York; Los Angeles, California; Helena, Bozeman, Fort Harrison, and Box Elder, Montana; and Seattle, Washington. We judgmentally selected this sample of sites based on several factors. To select those locations, we started with the 67 communities that were designated as Priority 1 communities by VA in 2015. We then judgmentally selected six of those communities based on the following factors: (1) to reflect a mix of communities with high concentrations of homeless veterans and communities certified as having ended veteran homelessness; (2) to reflect geographic diversity (a mix of urban, suburban, and rural locations); (3) proximity of CoCs and VAMCs (to ensure we could interview both local VAMC staff and service providers); and (4) the presence of our selected programs (to cover as many programs as possible). To identify challenges agencies and service providers reported experiencing in implementing selected programs, we interviewed agency officials, VAMCs, service providers, and PHAs. Specifically, we first asked them a general question about what challenges they face. We then analyzed their responses to develop a list of challenges. A second analyst then verified the steps taken to develop the list of challenges. We also reviewed agency reports, program documentation, and available information on trends on homeless veterans and the general homeless population. To determine the extent of duplication or overlap across programs, we reviewed agency guidance, program descriptions, and other documentation to obtain information on program services and beneficiaries for the 16 veteran homelessness programs we identified, using the process we described above. 
We then applied GAO guidance on duplication and overlap by comparing the programs using the following definitions: duplication occurs when two or more programs provide the same services to the same beneficiaries; overlap occurs when two or more programs offer similar services to similar beneficiaries. To identify potential benefits and challenges of overlap, we reviewed past GAO reports, and conducted interviews, as outlined above. To assess how federal agencies collaborate to address veteran homelessness, we first identified two collaborative mechanisms the Solving Veterans Homeless as One (SVHO) working group and VA s integration into Coordinated Entry by reviewing agency reports, guidance, and other documentation and interviewing agency officials. We then assessed the collaborative efforts against leading interagency collaboration practices identified in prior GAO work. Specifically, we assessed the extent to which the SVHO working group and VA integration into Coordinated Entry used each leading practice using three categories. Fully follows indicates that actions related to a practice reflected most or all of the issues to consider related to the practice; partially follows indicates that actions related to a practice reflect some, but not all, the issues to consider related to the practice; and does not follow indicates that there have been no actions taken related to the issues to consider for the practice. One analyst reviewed the reports, guidance, and other agency documentation related to the collaborative efforts and made the initial assessment. A second analyst then reviewed this information to make their own determination about the assessment and reach consensus with the first analyst. To determine what is known about the performance of the selected programs we reviewed, we analyzed national performance data for fiscal years 2015 to 2019 from VA and DOL. To assess the reliability of those data, we reviewed the data for obvious errors or inaccuracies by comparing the data to publicly available data from VA s and DOL s annual performance reports (to the extent the data were published). We also interviewed VA and DOL officials with knowledge of the systems and methods used to produce these data. We determined that the data we included in the report were sufficiently reliable for purposes of describing program performance for the selected programs we reviewed. To assess if the performance measures the agencies used are effective in monitoring progress, we reviewed VA s and DOL s performance measurement guidance. We then compared the measures against selected leading practices we identified in past GAO work. Specifically, our prior work identified ten key attributes for successful performance measures. Measures that include these attributes are effective in monitoring progress and determining how well programs are achieving their goals. We selected six attributes relevant to our analysis. We excluded the remaining four attributes because they are used to assess agency-wide performance and therefore were not applicable to our program-specific analysis. We assessed the performance measures as fully reflects if all the performance measures for the selected programs reflected most or all of the definition of the relevant key attribute; partially reflects if the measures reflected some, but not all, of the definition of the relevant key attribute; and does not reflect if the measures did not reflect the definition of the relevant key attribute. 
One analyst reviewed the performance measures and guidance and made the initial assessment. A second analyst then reviewed this information to make their own determination about the assessment and reach consensus with the first analyst. To determine the extent to which VA, HUD, and DOL had evaluated selected programs, we conducted a literature search for studies conducted during the last five fiscal years. We also obtained program evaluations from VA, HUD, and DOL. Additionally, we reviewed the agencies' evaluation policies and interviewed agency officials to obtain additional information about the agencies' program evaluation efforts. We conducted this performance audit from January 2019 to May 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Additional Program Information
We identified 16 federal programs that target their services specifically to veterans who are homeless or are at risk of becoming homeless. These programs are funded through the Departments of Veterans Affairs (VA), Housing and Urban Development (HUD), and Labor (DOL). As shown in table 6, the programs provide permanent and transitional housing, health care, rehabilitation, employment assistance, and supportive services, such as assistance with rent, utility, or moving costs. Eligibility requirements vary by program. VA's Grant and Per Diem (GPD) program awards grants to community-based agencies for transitional housing and case management for homeless veterans. In 2017, VA implemented changes to the program and, as seen in table 7, the program now has six housing models. Each model targets a different population of homeless veterans or focuses on different areas of service. Some VA medical centers (VAMCs), service providers, and public housing agencies (PHAs) we interviewed told us the homelessness programs for veterans we reviewed are working well. Others identified additional challenges that were specific to individual selected programs we reviewed, in particular the GPD program that underwent recent changes. For example, with respect to GPD's new models, four service providers and staff from three VAMCs told us that the housing models and program guidelines are too restrictive and complex, which hinders the delivery of services. Staff from another VAMC told us that the new housing models are based on best practices but the implementation is challenging. For example, one of these models, Bridge Housing, generally limits the length of stay to 90 days, which GPD staff from one VAMC and one provider told us is not enough time to meet the needs of some clients. However, VA officials said that veterans are not asked to leave Bridge Housing after 90 days if the housing plan has not been executed by this time. According to VA officials, GPD grantees can provide transitional housing and services to family members of a veteran; however, the program can only pay per diem for veterans, not their families. In addition, two GPD service providers told us that the bed reimbursement rate is inadequate to cover the cost of providing services to veterans, and GPD staff at one VAMC told us that the existing funding does not cover the full cost of the program.
Despite these cited challenges, our review of national performance data shows that VA is generally meeting the performance targets for these six models. Finally, GPD staff at one VAMC told us that there is a shortage of shelters and beds in some areas, and as a result, they cannot accommodate all the homeless veterans that are referred to them.
Appendix III: Comments from the Department of Labor
Appendix IV: Comments from the Department of Veterans Affairs
Appendix V: GAO Contact and Staff Acknowledgments
<9. GAO Contact>
<10. Staff Acknowledgments>
In addition to the contact named above, Allison Abrams (Assistant Director), Erika Navarro (Analyst in Charge), Kimberly Bohnet, Emily Bond, Evelyn Calderon, Lilia Chaidez, Jill Lacey, and Jessica Sandler made key contributions to this report. Also contributing to this report were Ryan Cirillo, Ben Licht, Marc Molino, Sarah Veale, James Whitcomb, and Michael Zose.
Why GAO Did This Study
Despite a large decline over the past decade, an estimated 37,000 veterans in the United States experienced homelessness in 2019. GAO was asked to review federal assistance programs for homeless veterans. Among other objectives, this report (1) discusses challenges agencies and service providers cited in implementing selected programs; (2) evaluates how USICH, VA, and HUD collaborate; and (3) reviews selected programs' performance.
GAO analyzed federal guidance and performance data; interviewed VA, DOL, HUD, and USICH officials; and met with local VA staff and service providers from selected programs at six sites. Programs were selected based on size (the largest based on funding and veterans served) and the kinds of services they offer; sites were selected for geographic diversity, among other factors. The results of these interviews are not generalizable.
What GAO Found
The Departments of Veterans Affairs (VA), Housing and Urban Development (HUD), and Labor (DOL) provide programs aimed at assisting homeless veterans. Local VA staff and service providers—who receive grants from federal agencies—provide services to homeless veterans within their communities. In interviews with GAO, they cited challenges in implementing selected programs:
Staffing shortages. Shortages of VA case managers may limit the number of veterans they are able to serve.
Housing cost and availability. High housing costs and limited stock make it difficult to find affordable housing for homeless veterans.
Transportation limitations. Service providers may cover large geographic areas, and limited public transportation strains their ability to provide services.
Steps that VA and other agencies are taking to address these challenges include contracting out for services to address limited staffing, offering rental subsidies for very low-income veterans, and working with community partners to assist with transportation.
Two key federal collaboration mechanisms to address veteran homelessness are a U.S. Interagency Council on Homelessness (USICH) working group to coordinate agencies at the national level and a HUD initiative that coordinates stakeholders at the local level. Both efforts incorporate many leading practices for effective interagency collaboration identified by GAO in prior work. However, local VA staff and service providers stated that they would like additional information—such as on best practices—from VA on how to collaborate more effectively at the local level. While VA has issued some broad guidance, more specific information on effective collaboration on issues such as making referrals and data sharing could better position local VA staff and service providers to aid homeless veterans.
VA and DOL have multiple measures in place to assess the performance of the programs GAO selected for review, and most of the measures met their national targets from 2015 to 2019. The measures incorporated most leading practices for performance measurement—such as having measurable targets. However, DOL does not have a written policy on its process for validating its performance data, and as a result may not have reasonable assurance that these are the most accurate and reliable performance data available. Further, some local VA staff and service providers misunderstood how program data were used in assessing performance, while others were unaware of VA's feedback processes on performance measures. Additional clarity and communication about VA's performance measures would help local VA staff and service providers better understand how program data are used to measure—and can be used to improve—performance.
What GAO Recommends
GAO is making three recommendations: VA should provide additional information on how local providers can collaborate; DOL should document data quality validation processes for its homeless veterans program; and VA should clearly communicate with local VA staff and providers about how it measures performance and how to obtain and provide feedback. VA agreed with the recommendations. DOL neither agreed nor disagreed. GAO maintains that DOL should document its data quality processes, as discussed in the report. |
With the exception of certain hazardous or sensitive items that must be transported via DTS, foreign partners have the option to arrange for their own transportation of FMS items they purchase, such as using a freight forwarder, for all or part of the transportation needed to reach the final destination. On the other hand, BPC programs use DTS to move all their FMS purchases. There are two ways DOD calculates the fees it charges FMS purchasers to use DTS that lead to collections into the FMS transportation accounts. Percentage of price. DOD most commonly calculates the FMS transportation fee using a percentage rate that is applied to the price of the item. The percentage rate varies depending on the extent of the U.S. government s responsibility for transporting the items purchased, such as whether the U.S. government will transport the items to their final destination or to an intermediate destination. As seen in table 1, since fiscal year 2007, DSCA changed the rates in fiscal years 2009 and 2018. Over the full period, the transportation fee has been as high as 22.25 percent of purchase price, or as low as 2.75 percent, depending on where purchasers want to take custody of their items. Price per item. DOD may instead charge the FMS purchaser an estimated transportation price per item for certain types of items, such as those containing sensitive or hazardous materials. <1.4. Structure and Use of the FMS Transportation Accounts> Eight transportation accounts within the FMS trust fund are used to hold transportation fees collected from FMS purchasers and to pay FMS transportation bills. In aggregate, we refer to these as the combined FMS transportation accounts: Main account. One main account holds transportation funds for all foreign partner purchasers and smaller BPC programs. BPC accounts. Seven segregated accounts hold transportation funds for certain larger BPC programs, such as the Afghan Security Forces Fund and the Iraq Security Forces Fund. DSCA created the first four BPC accounts in fiscal year 2012, one in fiscal year 2015, and two more in fiscal year 2018. Individual shipments trigger collections into and expenditures from the FMS transportation accounts. As shown in figure 2, after DOD ships an item and DFAS is notified of that shipment, DFAS moves the amount of the related transportation fee from the country account or BPC program account into the related transportation account and records the amount as a collection. Once DFAS collects funds into a FMS transportation account, funds are generally no longer segregated or tracked by their originating country or BPC program account. DFAS receives monthly bills from TRANSCOM that include the costs for FMS transportation, which DFAS pays out of the main transportation account, recording the amount paid as an expenditure. For FMS shipments associated with the seven larger BPC programs, the main account is then reimbursed from the appropriate BPC transportation account. <2. The FMS Transportation Account Balance Has Grown Substantially> Although aggregate FMS transportation fees are expected to approximate costs over time, we found that the combined FMS transportation account balance grew by over 1,300 percent from fiscal years 2007 to 2018. The ending balance for fiscal year 2018 was $680 million. 
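To make the fee and account mechanics described above concrete, the following minimal sketch shows how a percentage-of-price fee is computed and how the collection and later expenditure for a single shipment change an account balance. The 3.5 percent rate, item price, starting balance, and billed cost are hypothetical values chosen for illustration; they are not drawn from an actual FMS case, and actual percentage rates have ranged from 2.75 to 22.25 percent depending on the extent of U.S. government responsibility for the shipment.

    # Illustrative sketch only; the rate and dollar amounts below are hypothetical.
    def transportation_fee(item_price, rate_percent):
        """Fee charged to the purchaser as a percentage of the item's price."""
        return item_price * rate_percent / 100.0

    balance = 46_000_000.0                        # starting account balance (hypothetical)
    fee = transportation_fee(item_price=2_000_000.0, rate_percent=3.5)

    # When DFAS is notified of the shipment, the fee is collected into the account...
    balance += fee
    # ...and when the monthly transportation bill arrives, the billed cost is paid out.
    billed_cost = 55_000.0                        # hypothetical cost on the monthly bill
    balance -= billed_cost

    print(f"fee collected: ${fee:,.0f}; cost paid: ${billed_cost:,.0f}; "
          f"net change to the account: ${fee - billed_cost:,.0f}")

Because the fee and the billed cost are calculated from different factors, the net change for any one shipment can be positive or negative, and the account carries any difference forward into the next fiscal year.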
Collections and expenditures for the account fluctuated from year to year, but collections have outpaced expenditures since 2014, particularly for the main transportation account, which has grown more quickly than the combined seven BPC accounts. <2.1. The Combined FMS Transportation Account Balance Grew More Than 1,300 Percent from Fiscal Years 2007 to 2018> The combined balance of the eight FMS transportation accounts grew substantially from the beginning of fiscal year 2007 through the end of fiscal year 2018, from $46 million to $680 million, or by 1,378 percent. As shown in figure 3, much of that growth occurred from the end of fiscal year 2011 through fiscal year 2018, during which time the account grew by approximately $630 million. This substantial recent balance growth was in contrast to balance activity from fiscal years 2007 to 2011, when the collections into the account more closely approximated the expenditures from the account. In fact, the FMS transportation account was at risk of insolvency starting in fiscal year 2009. In response, DSCA redistributed $80 million in fiscal year 2009 and $50 million in fiscal year 2011 from the FMS administrative fee account to the main FMS transportation account to ensure it contained sufficient funding to pay transportation bills. If not for the redistributions between accounts, the transportation account might have been unable to disburse payments for at least some parts of fiscal years 2009, 2010, and 2011.
Collections and expenditures both fluctuated from year to year, as shown in figure 4. Year-to-year changes in collections ranged from decreases of 54 percent to increases of 121 percent, while year-to-year changes in expenditures ranged from decreases of 52 percent to increases of 133 percent. According to DSCA officials, demand for transportation of FMS purchases through DTS is unpredictable, and the accounts balances may experience volatile swings due to inconsistencies involved in billing the accounts. For example, delays in billing or reporting a particular shipment can result in DOD collecting the fee into the transportation accounts and reimbursing the transportation cost from the accounts at different times. Further, the fees collected and the costs expended for an individual shipment may differ because DOD uses different factors to calculate the transportation fee to charge the purchaser (e.g., the item s value) than it uses to calculate the cost to bill the FMS transportation accounts (e.g., the shipment s origin, destination, and weight, among other factors).
Despite this volatility over time, from fiscal years 2014 to 2018, collections consistently exceeded expenditures, which drove the substantial balance growth. In figure 4, we show this relationship in a collections-to-expenditures ratio, for which a value of 1.0 would indicate collections equaled expenditures for the fiscal year. A ratio greater than 1.0 indicates an increasing account balance that fiscal year. The average collections-to-expenditures ratio for fiscal years 2007 to 2018 was 1.26; from fiscal years 2014 to 2018, this ratio ranged from 1.46 to 4.97. At the end of each fiscal year, any collections that exceed expenditures remain in the account and are carried over to the next fiscal year s beginning balance, which contributes to balance growth from year to year. <2.2.
The Main Account Balance Has Grown More Quickly than the Balances of the Combined BPC Accounts> Much of the recent combined balance growth has been driven by growth in the main account s balance, as shown in figure 5. The main account grew more quickly than the combined balance of the BPC accounts from fiscal year 2013 (the first full year of operation for the BPC accounts) to fiscal year 2018. The main account grew by 316 percent, from $140 million at the beginning of fiscal year 2013 to $582 million at the end of fiscal year 2018, while the combined BPC accounts grew by 88 percent, from $52 million to $98 million, during the same time period. As seen in figure 6, our analysis shows that, for fiscal years 2013 to 2018, collections exceeded expenditures more frequently and to a greater extent in the main account than in the BPC accounts, which has driven balance growth. On average during this period, collections exceeded expenditures for the main account by $74 million per year, as compared to $7 million per year for the BPC accounts. DSCA officials speculated that BPC programs may use more air transportation for shipments to areas without regular TRANSCOM shipment routes, which may result in higher expenditures. DSCA officials could not provide any further explanation for why the main account s balance has grown more quickly than the balances of the BPC accounts. <3. DSCA s Limited Management Oversight Guidance Contributed to Substantial Growth in the FMS Transportation Account Balances> DSCA has limited management oversight guidance for the FMS transportation accounts, which has contributed to their substantial balance growth. DSCA has established internal guidance for its two main management oversight processes to monitor for significant changes in the FMS transportation account balance (a daily review and an annual review), but this guidance is unclear and lacks key details. As a result, DSCA s implementation of these processes lacks rigor, and DSCA s reporting to its management has not included complete information about the causes for recent balance growth. In addition, DSCA has no internal guidance to ensure that funds remaining in BPC-specific transportation accounts after the related programs close are transferred to the miscellaneous receipts of the Treasury, which risks these funds not being transferred as DOD officials told us DOD intends to do. <3.1. DSCA Established Management Oversight Procedures> In fiscal year 2016, DSCA established a Managers Internal Control Program (MICP) for overseeing the FMS transportation accounts, according to DSCA officials. These procedures formalized two management oversight processes for the FMS transportation accounts that DSCA officials had performed previously: daily and annual reviews. These reviews both serve the purpose of ensuring the accounts have sufficient funds to pay expenses. MICP documentation to help guide these processes includes flow charts that explain certain steps that should be included in each of these reviews, a risk assessment that explains how each of the MICP processes mitigates risks for the FMS transportation accounts, and test procedures that lay out expectations for how each MICP process should be conducted so that DSCA can periodically test to ensure the processes were carried out as intended. Daily review.
MICP procedures indicate that DSCA staff should review a report from DFAS daily that includes the previous day s balances for each of the transportation accounts to ensure that the FMS transportation accounts do not drop below a healthy level. If DSCA staff identify a large decrease or significant level of change in the accounts, the procedures direct them to ask DFAS to explain what caused the change and to take corrective action, such as requesting billing corrections, if necessary. According to the MICP risk assessment, the FMS transportation accounts experience volatile swings due to inconsistencies involved in billing the account, and reviewing the account balances on a daily basis helps to address this risk. The MICP procedures state that, if DSCA allows the FMS transportation accounts to drop below this healthy level, the accounts could become insolvent and be delinquent in disbursing transportation expenses. Annual review. MICP procedures indicate that DSCA should annually assess the financial health of the transportation accounts, which DSCA staff have stated they implement by preparing an annual report for DSCA leadership. To test whether the annual review has occurred, certain DSCA staff are to examine the annual report to confirm that DSCA assessed the FMS transportation account with the purpose of ensuring that the overall financial health of the accounts is strong and collections are sufficient to pay expenditures. <3.2. DSCA Inconsistently Implemented Daily Reviews Due to Unclear Internal Guidance> DSCA has inconsistently implemented its daily reviews due to unclear internal guidance on these reviews. Specifically, the guidance does not specify the level of change that warrants further examination or what DSCA staff should consider as a healthy level, or target range, for the accounts. <3.2.1. DSCA s Internal Guidance Does Not Define What Changes in Daily Transportation Account Balances Warrant Examination> DSCA s daily review procedures are meant to monitor for significant changes in the FMS transportation accounts so that such changes can be further examined and, if needed, corrected; however, MICP internal guidance does not establish criteria for determining what constitutes a significant change in these accounts balances. According to federal internal control standards, management should define the acceptable level of variation in performance, or risk tolerance, in specific and measurable terms. However, the MICP procedures use different and undefined terms when referencing the types of balance changes DSCA should look for in the daily review procedure. These terms include "change," "significant change," and "significant reduction." Although some of these terms could be interpreted as DSCA needing to monitor for any significant changes in the accounts, whether increases or decreases, DSCA staff have chosen to focus these reviews on decreases. As a result of DSCA s unclear internal guidance, DSCA staff have inconsistently determined which changes warrant examination and should trigger them to contact DFAS to examine the reasons for the change. This makes it less likely that DSCA will be alerted to and take corrective action to address significant changes in the account balances. For fiscal year 2018, DFAS was able to provide one documented instance of DSCA staff contacting DFAS as a result of the daily review; that instance is described following the sketch below.
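Federal internal control standards call for risk tolerance to be defined in specific and measurable terms, and a measurable definition of a significant daily change can be stated in a few lines. The sketch below is a hypothetical illustration of such a rule, not a procedure DSCA uses; the 11 percent threshold is an assumption chosen only because it matches the size of the change discussed immediately after the sketch.

    # Hypothetical illustration of a measurable daily-review rule; not DSCA's actual procedure.
    def flag_significant_changes(daily_balances, threshold_pct=11.0):
        """Return (day index, percent change) for each day-over-day balance change,
        increase or decrease, whose magnitude meets or exceeds the threshold."""
        flags = []
        for i in range(1, len(daily_balances)):
            prev, curr = daily_balances[i - 1], daily_balances[i]
            if prev == 0:
                continue  # skip days that start from an empty account
            change_pct = (curr - prev) / prev * 100.0
            if abs(change_pct) >= threshold_pct:
                flags.append((i, round(change_pct, 1)))
        return flags

    # Hypothetical daily balances (in dollars) for one transportation account.
    balances = [54_000_000, 54_200_000, 48_100_000, 48_150_000, 55_900_000]
    print(flag_significant_changes(balances))   # flags one large decrease and one large increase

A rule of this kind would flag large increases as well as large decreases, which is the gap in the current reviews discussed above.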
The contact was regarding an 11 percent balance decrease of approximately $6 million in the Afghan Security Forces Fund s transportation account that occurred on July 5, 2018. However, we identified a total of 30 instances of balance changes greater than 11 percent (12 decreases and 18 increases) in fiscal year 2018 across the eight FMS transportation accounts. For example, figure 7 shows the fiscal year 2018 daily balance changes for the Afghan Security Forces Fund. This figure includes the July 2018 balance decrease that resulted in DSCA contacting DFAS to examine the change, as well as nine other instances of balance changes greater than 11 percent that did not result in any documented contact between DSCA and DFAS. By inconsistently conducting daily reviews, DSCA weakens the effectiveness of this oversight mechanism to identify potential errors, which risks allowing either insufficient or excessive funds in the accounts. In particular, in recent years, the lack of clarity on what these reviews should monitor for has weakened DSCA s oversight and contributed to the substantial balance growth. <3.2.2. DSCA s Internal Guidance Does Not Establish a Target Range for the Transportation Account Balances> DSCA has not defined what it considers an acceptable target range for these accounts despite the unpredictability of transportation account balances and the MICP daily review procedures requiring DSCA officials to monitor account balances to ensure they remain at or above a healthy level. According to DSCA officials, DSCA has not determined an acceptable target range for the transportation accounts because future collections and expenditures are difficult to predict, making it difficult to know how much money DSCA needs in the accounts. However, this unpredictability makes it all the more important for DSCA officials to establish a target range for what is healthy account activity to enhance their oversight of the accounts. As we previously reported, to ensure the accountability of fee-funded programs and the ability to manage a program with sufficient reserves, federal agencies are advised to use a risk-based strategy to establish desired upper and lower bounds for account balances. DSCA has already established upper and lower bounds for two other FMS overhead fee accounts, the FMS administrative fee and contract administration services fee accounts. DSCA calculates these bounds based on the amounts of planned expenses from the accounts, which automatically adjusts the bounds over time to reflect the size and needs of the FMS program. DSCA s internal guidance states that setting upper and lower bounds of acceptable levels provides the agency with a control box to alert it to a dramatic change in the FMS operating environment that may require an agency response such as a fee rate review. Similarly, establishing a target range, with an upper and lower bound, for the FMS transportation account balances could strengthen DSCA s ability to use its daily reviews to manage the accounts volatility by identifying when the account balances are growing excessively high or falling excessively low. Such an upper bound could better inform DSCA leadership and help prevent excessive growth in the transportation accounts while a lower bound could help to ensure that the accounts have sufficient funds to pay for transportation bills. <3.3. 
DSCA Prepared Annual Reports Missing Key Details Due to Lack of Internal Guidance> DSCA has no internal guidance for its staff to follow when preparing annual reports on the health of the FMS transportation accounts, which has led the reports DSCA produced for fiscal years 2015 to 2018 to contain incomplete information on the underlying causes for the trends in the accounts and for the reports to lack key details about the source of some of the funds in the main FMS transportation account. <3.3.1. Lack of Internal Guidance Regarding Annual Reports Has Contributed to Incomplete Reporting> For fiscal years 2015 to 2018, DSCA produced annual reports assessing the financial health of FMS transportation accounts that contained incomplete information because DSCA did not use rigorous methods to determine the underlying causes for trends in the accounts. As a result, DSCA had a limited ability to make informed decisions about the accounts at a time when the balances were experiencing substantial growth. According to the DSCA staff who produce the annual reports, they distribute the reports within DSCA up to the agency s Director to provide information about the health of the FMS transportation accounts. DSCA s annual reports on the FMS transportation accounts for fiscal years 2015 to 2018 followed a consistent format. These reports contained information on the net change in balances for each of the transportation accounts during the fiscal year. The reports also included a summary of any major activity in each of the accounts. For example, the fiscal year 2018 assessment stated that the main FMS transportation account grew by $77.8 million during that fiscal year due to several large collections significantly greater than billings. All of the reports end with a conclusion regarding the health of the accounts, which for fiscal years 2015 to 2018, was that the accounts were healthy and should remain financially solvent. All of these annual reports also include statements regarding the underlying causes of account trends, which we found to be incomplete and unsupported by rigorous data analysis. When discussing reasons for year-to-year account balance increases, DSCA s reports stated they were mainly due to a decline in oil prices and a legal change that DOD implemented in July 2014 that allowed TRANSCOM to charge lower DOD rates for FMS air shipments, both of which could likely affect expenditures from the account. However, DSCA officials said that they conducted no specific analysis to support the extent to which these two factors affected the account balance increases. As seen in figure 8, our analysis shows that these reasons could not fully explain the account balance increases in each of the annual reports from fiscal year 2015 to 2018. In particular, while FMS transportation expenditures began to decrease in fiscal year 2012, the price of oil did not begin to significantly decline and the legal change did not come into effect until 2014. Further, the annual reports did not discuss underlying reasons for trends in collection activity, which also affect the account balance. DSCA s analysis for its annual reports is limited by the lack of internal guidance for completing these reports. Specifically, the MICP guidance for the annual review process does not specify how to prepare the annual report. 
Without such guidance, according to DSCA officials, DSCA s analysis for the annual reports has involved re-reviewing the documentation related to the daily reviews as well as monthly reviews that DSCA performs for financial oversight purposes. DSCA officials completed no additional analysis to inform the annual reports, such as any quantitative analysis to understand annual changes or trends over time. Federal internal control standards state that effective internal guidance communicates the who, what, when, where, and why of what needs to be accomplished, and that management should obtain relevant data from reliable sources and process that data into quality information to aid decision making. Without clear internal guidance, the annual account reviews lack the rigor necessary to ensure DSCA management is provided reliable information for decision making. <3.3.2. DSCA Has Not Reported Redistributions between Accounts or Assessed Whether to Return Funds Due to Lack of Internal Guidance> According to DSCA officials, DSCA s annual review process should also involve an assessment of whether funds should be redistributed between the FMS overhead fee accounts; however, DSCA does not have specific internal guidance on when and how to perform such assessments or on what to include about this portion of the annual review in its resulting annual reports. This lack of guidance has led DSCA to produce annual reports without information related to redistributed funds and to not conduct assessments related to redistributed funds. According to DOD s financial management regulations, DSCA and DFAS should periodically review activity in the FMS overhead fee accounts to serve as a basis for decisions by DSCA management to, among other purposes, redistribute account balances between these accounts. According to DSCA officials, if they were to perform these periodic assessments, they would perform them as part of their annual account reviews. However, the MICP guidance for the annual reviews does not describe how to assess whether or how much to redistribute funds between the fee accounts, or how or when to assess returning previously redistributed funds. The annual FMS transportation account and administrative account assessments for fiscal years 2015 to 2018 do not report that $130 million in the main FMS transportation account came from funds redistributed from the FMS administrative account between fiscal years 2009 and 2011 that have not been returned. According to DSCA officials, they only report redistributions in the year that they occur. In addition to not including this information in its annual reports, DSCA has not assessed the need for other redistributions of funds between the FMS fee accounts since it last redistributed funds from the FMS administrative account to the main FMS transportation account in fiscal year 2011. DSCA officials indicated they intend to return the funds to the administrative account but have not done so because they have no urgency, given that the FMS administrative account balance has been consistently above its lower bound in recent years. As of the end of fiscal year 2018, the FMS administrative account balance was approximately $4.7 billion, which was approximately $3.1 billion more than the account s lower bound that DSCA determined was necessary to support FMS operations. 
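Using the figures cited above, simple arithmetic, sketched below for illustration, shows the scale involved: a balance of about $4.7 billion that is about $3.1 billion above the lower bound implies a lower bound of roughly $1.6 billion, and the unreturned $130 million is a small fraction of that cushion. This arithmetic is ours and is not an assessment DSCA has performed.

    # Back-of-the-envelope arithmetic using figures cited in this report (end of fiscal year 2018).
    admin_balance = 4.7e9     # FMS administrative account balance
    cushion       = 3.1e9     # amount by which that balance exceeds the account's lower bound
    redistributed = 130e6     # total moved to the main transportation account in FY 2009 and FY 2011

    implied_lower_bound = admin_balance - cushion     # roughly $1.6 billion
    share_of_cushion = redistributed / cushion        # size of the unreturned funds relative to the cushion

    print(f"implied lower bound: ${implied_lower_bound / 1e9:.1f} billion")
    print(f"unreturned funds equal about {share_of_cushion:.0%} of the current cushion")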
The lack of specific internal guidance on how to assess and report redistributions has resulted in incomplete reports to DSCA management, which inhibits DSCA management s ability to make informed decisions in overseeing the FMS fees. In particular, without reports that clearly state the amount of redistributed funds and their source(s), and assess their continued need, DSCA management is less informed when determining whether and when to redistribute funds, including whether to return previously redistributed funds. According to our User Fee Design Guide, assigning costs to identifiable users can promote equity and more informed rate-setting; however, redistributing fees from the FMS administrative account to the main FMS transportation account has intermingled funds that have different sources. DOD charges the FMS administrative fee to all FMS purchasers while DOD charges the FMS transportation fee to only certain purchasers for the portion of the transportation of their FMS items that uses DTS. Distributing funds from the FMS administrative account to the main FMS transportation account intermingled these fees, which has two main effects. First, not returning redistributed funds if the transportation account no longer needs them raises concerns regarding the fees equity in ensuring only the beneficiaries of a service pay for the cost of providing it. Second, the appropriateness of DSCA management s rate-setting decisions for both fees is limited by incomplete information about the full expected balance of the fee accounts from which future expenditures could be paid. <3.4. DSCA Has No Internal Guidance to Ensure Proper Disposition of Unused Funds in BPC Transportation Accounts> DSCA has no internal guidance to ensure proper disposition of any funds remaining in the BPC-specific transportation accounts after the related programs close and those remaining funds are no longer needed. In fiscal year 2020, DSCA expects the first BPC-specific transportation account to close, which had a balance of approximately $42 million at the end of fiscal year 2018. DSCA officials have said that funds remaining in the BPC-specific transportation accounts after the related programs close should be transferred to the miscellaneous receipts of the U.S. Treasury. According to DSCA officials, this process was agreed to with DOD s Office of the Under Secretary of Defense (Comptroller) in November 2011 when DSCA met with that office to discuss how DSCA would handle creating the BPC-specific transportation accounts. DSCA officials also said that following this process would be in line with a requirement in DOD s financial management regulations for any collections that are authorized or required to be credited to an account after that account s closure to be deposited in the Treasury as miscellaneous receipts. However, DOD officials could not provide a documented agreement from the November 2011 meeting, and we do not consider the referenced regulation specific enough to this circumstance to alone serve as internal guidance that would ensure the funds are transferred. In particular, this regulation applies broadly to DOD collections received after an account s closure, and does not specifically address the disposition of funds that had already been collected into an account upon the closure of that account . Officials from relevant DOD components have different understandings of how this process should occur, which could risk the process not being completed as intended without related specific internal guidance. 
According to DSCA officials, DFAS will be responsible for moving any remaining funds in these transportation accounts to the miscellaneous receipts of the Treasury, but the pertinent DFAS officials have stated they are unaware of what should be done in such circumstances. According to DSCA officials, they intend to write a memo to DFAS related to each instance of a BPC-specific transportation account closure instead of providing DFAS written guidance to follow in any such instance because DSCA officials prefer providing specific directions to DFAS regarding moving such funds. DSCA officials said they do not need specific internal guidance to ensure they direct DFAS to complete such fund transfers because DOD s Office of the Under Secretary of Defense (Comptroller) would ensure that DSCA does so when that office reviews all DOD accounts. However, Comptroller s Office officials stated that, as part of DSCA s program oversight responsibilities for FMS, DSCA is responsible for ensuring any funds are identified and transferred to the miscellaneous receipts of the Treasury. Without clear internal guidance, DOD may not have accurate information on or sufficient oversight of its budgetary resources and account balances, and funds that could be put to other uses may remain in the BPC transportation accounts. Federal internal control standards state that effective internal guidance communicates the who, what, when, where, and why of what needs to be accomplished, thereby providing a means to retain organizational knowledge and mitigate the risk of having that knowledge limited to a few personnel. According to DSCA officials, the first BPC-specific transportation account likely to close is dedicated to the Iraq Security Forces Fund, which had a balance of approximately $42 million at the end of fiscal year 2018. According to DSCA records, this program s appropriations were canceled at the end of fiscal year 2017 and, according to DSCA officials, by sometime in fiscal year 2020, the program s FMS cases should go through their final reconciliation process. Through this process, DOD may pay outstanding bills or correct accounting errors and the related cases will close. According to DSCA officials, the BPC-specific transportation account would then be ready for closure. <4. DSCA s Processes for Setting Transportation Fees Have Not Ensured Fees Approximate Costs over Time and Contributed to Account Balance Growth> DSCA s processes for setting the FMS transportation fee do not ensure that aggregate fees DOD collects approximate aggregate transportation costs over time, thus contributing to recent growth in the FMS transportation account balances. DSCA s ability to set appropriate transportation fee rates is undermined by DSCA s unclear guidance to the military departments on what data they should provide DSCA to analyze in its transportation fee rate reviews, leading DSCA to review data that is not timely or systematically sampled. Further, the lack of clarity in its internal guidance for these reviews has led DSCA to complete these reviews infrequently, perform limited analysis, and burden the military departments with compiling data DSCA did not use. In addition, our analysis raises concerns about negative effects of the current transportation fee rate structure, including that the structure makes it more difficult for DSCA to determine appropriate transportation fee rates. 
Finally, DSCA s internal guidance to the military departments for estimating transportation prices (instead of rates) for certain items lacks key details. As a result, the military departments follow varying procedures for estimating these prices and are unsure of the prices accuracy. <4.1. DSCA s Transportation Fee Rate Reviews Used Unsuitable Data, Were Completed Infrequently, and Involved Limited Analysis> DSCA s ability to set appropriate transportation fee rates is undermined by unclear guidance for its reviews of these rates. The lack of clear guidance has led the military departments to provide DSCA data that are not suitable for rate-setting decisions because, while the individual data points DSCA analyzed were accurate, they may not accurately predict future rates because they were not timely or systematically sampled. Unclear guidance also led DSCA to perform infrequent and limited analysis of these data. <4.1.1. DSCA s Rate Reviews Do Not Use Timely and Systematically Sampled Data> DSCA s ability to determine the appropriate FMS transportation fee rates is limited because the data analyzed in its rate reviews are not timely or systematically sampled. According to its MICP documentation, DSCA is to review its FMS transportation fee rates to ensure the resulting transportation fees collected from FMS purchasers in aggregate cover the amount needed to pay for transportation expenses. DSCA requests that the military departments provide historical data on transportation fees charged and transportation costs paid so that DSCA can analyze these data to determine appropriate fee rates. However, DSCA s data requests to the military departments are unclear in multiple key respects, which leads the military departments to provide data that, though accurate at the level of individual cost and fee records, are unsuitable for DSCA s resulting rate-setting decisions because they are not timely or systematically sampled. The combined effects of these deficiencies could skew DSCA s rate review process. When DSCA requests data from the military departments for the rate reviews, DSCA does not specify key elements about which data to provide or which information sources to use to obtain each data element. As a result, the departments have followed different processes and provided data that were not timely. As shown in table 2, the data submitted by the military departments varied significantly. Because DSCA s data requests did not specify where the data should be sourced, the military departments have had difficulty responding to these requests and the amount of data they have produced has been limited. Military department officials explained difficulties finding the necessary data in other DOD agencies systems, understanding those data s reliability, and accurately matching the data across multiple systems. In particular, transportation cost data is stored in multiple TRANSCOM billing systems, which military department officials responsible for responding to DSCA s data requests said they do not regularly access. In addition, DFAS has copies of transportation cost data in the monthly bills that it pays from the FMS transportation accounts. The bills include the individual costs of each shipment made during that month, but are stored in individual documents and are not accessible to the military departments.
Transportation fee data is available in a DFAS system used to process the FMS transportation fee, but, according to DFAS documents and officials, this system is not built to easily extract such data and therefore neither DFAS nor the military departments can reliably pull fee data from this system specific to particular shipments or cases. According to DSCA officials, ultimately DSCA used only Army data for setting rates in 2018 because Navy and Air Force provided relatively small samples. Navy. Navy officials reported having particular difficulty finding data on transportation costs for the most recent rate review. After unsuccessfully requesting more specific guidance or assistance from DSCA and DFAS, according to Navy officials, Navy found a spreadsheet DSCA had provided Navy for an unrelated purpose that contained the costs for Navy FMS shipments moved by TRANSCOM s Air Mobility Command. According to Navy officials, because researching the individual transportation fees for each FMS shipment was time-consuming and they lacked clear guidance about how much data DSCA needed for its rate review, they decided to provide related fee data on 103, or 3 percent, of the 3,536 air shipments for which Navy had cost data. Air Force. The U.S. Air Force Security Assistance and Cooperation Directorate has developed a detailed process, described in a 280-page internal guidance document, to respond to DSCA s requests, but following this process does not yield much data. For the most recent review, Air Force provided DSCA with data for 639, or 2 percent, of 28,886 shipment orders for which it reviewed data because of the difficulty of finding relevant matching cost and fee data across the different systems used, as shown in table 3. Not only were the data DSCA reviewed not indicative of all FMS shipments, since they included no Navy or Air Force data, but they were also not indicative of Army s shipments and included older data because the DSCA data requests were unclear. In particular, DSCA s data requests stated that each military department should provide at least 20 cost and fee comparisons for each fee rate for each of the FMS transportation accounts, and requested that these data include as many different foreign partners or FMS cases as possible. As a result, according to Army officials, the data Army provided to DSCA included a mix of different partners and cases of different dollar values; however, no systematic sampling methods were used that would have ensured that the resulting data were indicative of overall Army shipments during the time period covered. Also, DSCA s request did not specify a time period the data should cover. Army provided data for cases that likely were at least 5 to 7 years old. According to DSCA officials, if the rate review is to analyze case-level data, such as the data Army provided, it is necessary to analyze data on cases for which the FMS agreements were signed multiple years prior, because shipments may not take place until multiple years into cases. However, the Army officials we spoke to about the data Army provided were unaware of how long ago the shipments occurred for the related cases, and stated that some may have occurred years before. TRANSCOM pricing changes annually, so cost information that is multiple years old and not adjusted to reflect such changes would be unlikely to predict future costs.
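For context on what systematically sampled data could look like in practice, the sketch below draws a random sample of shipment records stratified by fee rate category, so that each rate is represented in proportion to its shipment activity. This is a generic illustration of common sampling practice with hypothetical record fields; it is not a method DSCA or the military departments use.

    # Generic illustration of stratified random sampling of shipment records;
    # not a procedure DSCA or the military departments actually use.
    import random

    def stratified_sample(shipments, stratum_key, fraction=0.05, seed=1):
        """Randomly sample a fixed fraction of records within each stratum
        (for example, each transportation fee rate category)."""
        rng = random.Random(seed)
        strata = {}
        for record in shipments:
            strata.setdefault(record[stratum_key], []).append(record)
        sample = []
        for records in strata.values():
            k = max(1, round(len(records) * fraction))
            sample.extend(rng.sample(records, k))
        return sample

    # Hypothetical records: a fee-rate category, the fee charged, and the cost billed.
    shipments = [{"rate": rate, "fee": 1_000.0 + i, "cost": 900.0 + i}
                 for rate in ("A", "B", "C") for i in range(200)]
    print(len(stratified_sample(shipments, "rate")))   # 30 records: 10 from each rate category

A sample drawn this way from recent shipments would address the representativeness concern described above, though the timeliness of the underlying records would still matter.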
As a result of these data limitations, DSCA set rates to cover future costs based on a sample of cases that was not systematically sampled and may have included shipments over the past 5 or more years. DSCA officials stated that their data requests are not more specific because they thought the military departments had direct access to these data and that more specificity would hinder the military departments ability to respond to the requests. However, related data are available in TRANSCOM and DFAS systems, not in systems belonging to the military departments. Further, the current processes produce data that are not timely or systematically sampled, making them unsuitable for determining future costs and rates. In setting user fees, agencies should analyze timely and reliable data, consistent with applicable accounting standards, to avoid the risk of making skewed fee-setting decisions. DSCA s use of data that are not timely or systematically sampled for its rate reviews could skew its rate-setting decisions, ultimately affecting transportation account balances. <4.1.2. DSCA s Unclear Internal Guidance Has Contributed to Rate Reviews Completed Infrequently and with Limited Analysis> DSCA s internal guidance for its rate reviews is unclear regarding the timing of the reviews and lacks key details, which has limited DSCA s ability to use the rate review to set appropriate rates. Timing. DSCA s internal guidance for overseeing the FMS transportation accounts is unclear. In one part, the guidance indicates that DSCA should conduct a rate review every 5 years, which is in line with the expectations explained by DSCA officials who oversee these accounts. However, other parts of DSCA s internal guidance indicate that DSCA should conduct such a review annually. How reviews should be conducted. DSCA s internal guidance states that the rate reviews should allow DSCA to determine whether current transportation fee rates are sufficient, based on predetermined criteria, to cover the related costs. However, this internal guidance does not specify how these criteria should be determined or contain any procedures regarding how DSCA should analyze the data collected for its rate review. DSCA has not completed its transportation fee rate reviews in a timely manner, which allowed the FMS transportation account balances to grow over recent years as collections consistently exceeded expenditures but fee rates remained constant. Since fiscal year 2007, DSCA has completed two reviews more than 9 years apart: in March 2009 and May 2018. For these reviews, DSCA officials did not predetermine criteria for the level of alignment between cost and fee that each review should achieve, and DSCA s analysis considered few factors and involved a limited analysis of Army data only, which hindered DSCA s ability to set appropriate fee rates. In particular: Fiscal year 2009: For this review, DSCA compared the transportation cost to the transportation fee charged across seven transportation fee rates for 144 of the thousands of Army s FMS cases. In this sample, the transportation costs exceeded the fees paid by 19 percent overall. When briefing DSCA management on the review, DSCA officials reported a concentration of undercharges in two of the rates. As a result, DSCA decided to increase these two rates such that, if the new rates had applied to the full sample DSCA analyzed, fees on the cases in the full sample would have exceeded costs by 14 percent.
Our analysis of the sample showed that while these two rates had the largest difference in value between the costs and fees, other rates also had large differences within this sample. Specifically, one other rate had a larger percentage of undercharges and three of the other rates had percentages of overcharges exceeding 1,000 percent. However, DSCA made no changes to these other rates. Fiscal year 2018: For this review, DSCA compared the transportation cost to the transportation fee charged across the seven transportation fee rates for a sample that contained data on 993 Army cases. For this sample, on average, transportation fees charged to purchasers exceeded transportation costs by 158 percent, with all rates except one overcharging on average. However, when briefing DSCA management on the review, DSCA officials reported incorrect data to serve as the basis for decision making. In particular, according to the DSCA official responsible for the analysis, likely due to an oversight, DSCA included data on only 878 of these cases in the briefing to DSCA management. Total fees for this portion of the sample were 90 percent higher than the related total costs. Based on these limited data, DSCA decided to decrease all of its transportation fee rates such that, if the new rates had applied to the full sample DSCA analyzed, fees would still have exceeded costs by 77 percent, with five of the seven fee rates still exceeding the cost by more than 100 percent for that sample. DSCA officials stated that their intent in this rate review was to lower the rates modestly to see their effect on the account balances; however, the ability to meet this goal accurately is reduced because the goal was not specific and the analysis DSCA performed was limited. Given that the data DSCA analyzed for both these reviews were not generalizable to all shipments, the above percentages do not indicate that the rates overall would have affected fees in these exact ways. Instead, DSCA s decision making may have been further skewed by its method of analysis. In addition to completing these two reviews, DSCA also initiated rate reviews by sending requests to the military departments three additional times for data DSCA did not use, thereby placing an unnecessary burden on the military departments. Specifically, DSCA requested data from the military departments in November 2011, September 2013, and November 2014. After obtaining the data from the military departments, DSCA officials said that management decided DSCA would not analyze the data due to competing priorities, and DSCA did not use these data for any other purpose. Air Force officials said that the months of work put into responding to each of DSCA s rate review requests seemed like a waste of resources because their data has consistently shown that the transportation fees collected were drastically higher than the related costs and yet the fee remained unchanged for years. To respond to DSCA s request for data for the fiscal year 2018 rate review, each military department spent between 2 and 4 months of staff time to collect and prepare the data, according to military department officials. Asking for and then not using such data put an unnecessary burden on the military departments and wasted DOD staff resources. Without clearer internal guidance for its rate reviews regarding their timing and the analysis needed, it will be difficult for DSCA management to make appropriate fee-setting decisions based on future rate reviews.
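The overcharge and undercharge percentages cited in these reviews compare total fees with total costs; for example, fees exceeding costs by 158 percent means total fees were roughly 2.6 times total costs. The sketch below shows that arithmetic on hypothetical totals; it is our illustration of the calculation, not the analysis DSCA performed.

    # Illustration of the overcharge/undercharge arithmetic; the totals below are hypothetical.
    def overcharge_percent(total_fees, total_costs):
        """Positive values mean fees exceeded costs; negative values mean an undercharge."""
        return (total_fees - total_costs) / total_costs * 100.0

    total_fees, total_costs = 25.8e6, 10.0e6            # hypothetical sample totals
    print(f"overcharge: {overcharge_percent(total_fees, total_costs):.0f} percent")  # 158 percent

Clearer guidance would need to specify both how such comparisons are computed and what level of alignment between fees and costs a review should aim for.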
Federal internal control standards state that effective internal guidance communicates the who, what, when, where, and why of what needs to be accomplished. According to DSCA officials, DSCA is considering conducting its next transportation fee rate review in fiscal year 2020, with a goal of lowering the FMS transportation account balances. DSCA officials ability to meet this goal could be hindered without more clarity about the timing of the reviews and more rigorous analysis that involves explicit goals, such as for the level of alignment between cost and fee or of the account balances. <4.2. DSCA s Rate Structure Hinders Its Ability to Set Appropriate Transportation Fee Rates> The structure of the FMS transportation fee rate further hinders DSCA s ability to set appropriate rates. According to DSCA officials, the current rate structure was developed to use data that are easily available, which limits DOD s administrative burden in calculating the fee. However, our analysis raises concerns about the extent to which the current rate structure may have negative implications for the transportation fee s equity, efficiency, and revenue adequacy. We have previously reported that fee design should balance ways to encourage greater efficiency, equity, and revenue adequacy while reducing administrative burden on the agency and payers of the fees, as shown in Table 4. These factors interact and often conflict with each other so that tradeoffs among these factors should be considered when designing a fee s structure. The current transportation fee rate structure limits DSCA s administrative burden because it relies on only a few factors, which involve easily accessible data, but these factors vary considerably from those TRANSCOM uses to price its transportation. The FMS transportation fee amount charged to purchasers is generally based on three factors, which should be identified in FMS agreements: (1) the price of the item; (2) the foreign destination rate area; and (3) the extent of U.S. government responsibility for transporting the item (e.g., to an inland destination in the continental United States or to a foreign inland or port destination). At the time of the FMS agreement, DSCA and the military departments lack information about other factors that would make it easier for DOD to set fee rates such that fees would approximate the actual cost of the transportation. For example: Mode. DOD may not know how it will move the items at the time of the FMS agreement, and costs vary depending on the mode of transportation, such as by air or a surface vessel. Route. Although DOD should be aware of the final destination for items, DOD may be unaware of where the shipment will originate or the specific route the items will take, and transportation costs can vary depending on the specific route. For example, to transport goods in a 20-foot container on a surface vessel door-to-door from a location on the East Coast of the United States to Afghanistan in fiscal year 2018, TRANSCOM rates ranged from $548.85 to $1,077.03 per measurement ton shipped, depending on the specific route, whereas DSCA s fee rates would be constant and applied to the price of the items. Also, even if DOD knew the exact mode and route, approximating the exact cost for each shipment would be difficult because TRANSCOM updates its rates annually, and shipments often occur years after signing the FMS agreement. 
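The sketch below contrasts the two calculations for a single hypothetical ocean shipment: the fee depends on the price of the items and the percentage rate in the agreement, while the billed cost depends on the shipment weight and the route-specific TRANSCOM rate. The item price, weight, and 3.75 percent rate are assumptions for illustration; the per-measurement-ton rates are the fiscal year 2018 range cited above.

    # Hypothetical single ocean shipment, for illustration only. The fee is driven by item
    # price; the billed cost is driven by weight and the route-specific TRANSCOM rate.
    item_price = 1_500_000.0                    # assumed price of the items shipped
    fee_rate_percent = 3.75                     # assumed percentage rate from the FMS agreement
    fee = item_price * fee_rate_percent / 100.0

    measurement_tons = 40.0                     # assumed shipment size
    low_rate, high_rate = 548.85, 1_077.03      # FY 2018 per-measurement-ton range cited above

    for route_rate in (low_rate, high_rate):
        cost = measurement_tons * route_rate
        difference_pct = (fee - cost) / cost * 100.0
        print(f"fee ${fee:,.0f} vs. cost ${cost:,.0f}: difference of {difference_pct:+.0f} percent")

The same agreement-level fee can therefore be off from the billed cost by very different margins depending on the route taken, which is the kind of mismatch discussed next and shown in figure 9.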
The distinct factors used to determine the fee and cost for FMS transportation make it difficult for the cost and fee to align, which has potential implications for the fee s equity. Although the data DSCA obtained from the military departments for its fiscal year 2018 rate review were unsuitable for that purpose because they were not timely or systematically sampled, we performed extensive data reliability procedures to determine that the individual cost and fee data points are reliable and, as a result, analyzed these data to obtain insights into the extent to which the cost and fee were aligned within that sample. As shown in figure 9, we found extreme differences between the transportation cost billed to the FMS transportation accounts and the fee the purchaser paid. Within this nongeneralizable sample, costs and fees were within 10 percent of each other for only 30 of the 1,152 cases or shipments (3 percent), whereas the difference was more than 1,000 percent higher or lower for 492 of the cases or shipments (43 percent). In addition, we identified five instances of the difference between the cost and the fee exceeding 1,000,000 percent. Although these data were not systematically sampled to ensure they would be indicative of the full population of shipments, the high incidence of such large differences is concerning. Within this sample, we also found that certain countries were either always overcharged or always undercharged. Since the rate review data are not generalizable, this pattern may or may not be consistent across FMS shipments. However, such a pattern could plausibly occur due to the differences between TRANSCOM s and DSCA s rate areas. Potential concerns about the fee structure s efficiency and revenue adequacy also stem from the difficulty in aligning the current fee structure with related costs. Efficiency. The large disparities between cost and fee in the current FMS transportation fee rate structure may be leading some FMS purchasers to choose not to use DTS. According to Army officials, some FMS purchasers choose to use their own freight forwarders instead of DTS because of a perception that the FMS transportation fee is too high. These decisions could have broader effects on DTS. According to TRANSCOM, the additional demand from FMS purchasers allows TRANSCOM to better leverage DTS, such as by filling excess capacity with paying cargo and supporting training needs to maintain combat readiness. Revenue adequacy and stability. The potentially large differences between the transportation cost and fee resulting from the current FMS transportation fee rate structure have led to large fluctuations in collections and expenditures over time. For example, in fiscal years 2009 and 2011, DSCA had to redistribute a combined $130 million into the main FMS transportation account from the FMS administrative fee account to cover costs and avoid insolvency. Around the time of the fiscal year 2009 rate review, DSCA began reviewing the fee rate s structure as part of an overall attempt to address issues related to the transportation account nearing insolvency. As part of that review, DSCA worked with the military departments and TRANSCOM to assess factors such as administrative burden, data availability, and ability to more accurately charge transportation costs to FMS purchasers, which would have enhanced the fee s equity and efficiency.
Specifically, they considered the benefits and costs of six alternative rate structures: Three of the six options would have involved replacing the rate-based fee for some or all shipments, by charging actual transportation costs or estimating likely actual costs per type of item. According to documentation from this review, the DOD agencies said these three options would have placed high administrative burdens on the military departments and required changes to military department or TRANSCOM information systems. The other three options the DOD agencies considered would have modified the structure of the current rate-based fee to take into account additional factors, such as transportation method (e.g., air) and item weight, or creating additional rate areas to target specific locations where costs of transportation were higher. The agencies determined that some of these options would have a lower administrative burden than the first three options. However, DSCA decided to maintain its current fee rate structure and address the potential insolvency through other approaches such as by redistributing funds from the FMS administrative fee account to the transportation account. According to DSCA officials, DSCA made this decision because it could not obtain agreement with the military departments and TRANSCOM on any of the other options. DSCA has not since reviewed the rate structure. <4.3. DSCA Internal Guidance to the Military Departments Does Not Specify Key Details on How to Estimate Transportation Prices for Certain Items> DSCA provides internal guidance to the military departments on how to estimate the transportation prices to be charged for certain items, but the internal guidance does not specify key details about how to calculate the estimates. As a result, the military departments follow different procedures for estimating these prices, and individual military department procedures may differ over time depending on staff turnover. Federal internal control standards state that management should use quality financial information that is complete and reasonably free from error, and that effective internal guidance informs users of the who, what, when, where, and why of what needs to be accomplished, thereby helping to retain organizational knowledge. Estimated Transportation Prices for Certain Items For certain items that need to be shipped via the Defense Transportation System, such as goods with sensitive or hazardous materials, and for which charging the transportation fee rate would significantly differ from transportation costs, DOD may instead charge a set transportation price per item. The fees collected from these estimated prices and the costs to transport these items are paid in and out of the FMS transportation accounts. These prices are not location-specific because DOD charges each purchaser of this item the same estimated price. According to DOD officials, such items are often low- weight, high-cost items, such as missiles, for which the usual transportation fee rate could greatly overcharge the FMS purchaser. DSCA s internal guidance for how to estimate these transportation prices includes limited information and does not take into account key information for accurately estimating transportation costs. Specifically, the guidance lists certain types of transportation cost elements to include and not to include in these price estimates. For example, estimated port handling costs should be included while security costs should be charged to the FMS purchaser separately. 
The guidance also indicates the estimates should be on a per-item basis with two potential prices to transport each item, one for any transportation within the United States and one for transportation to any foreign destination. Other key factors in transportation costs, such as the transportation mode or specific origin or destination, are not considered. Also, DOD charges these prices per item, although economies of scale can be gained by transporting batches of the same item together. The lack of specificity in DSCA s internal guidance has led the military departments to adopt inconsistent estimation processes that may not lead prices to approximate actual costs. These inconsistent processes could lead DOD to charge FMS purchasers more or less than DSCA intends and ultimately affect account balances. For example: Origin and destination. The three military departments take different approaches to compensate for having to estimate the cost of transporting an item without knowing its specific origin and destination. Although all military departments follow the same general process of estimating potential transportation costs for commonly used origin and destination ports and averaging these to attempt to estimate these prices, they all use different locations to create their estimates, which leads to different pricing. For example, one command within Army uses a central location within the United States as the origin for its estimates to simulate an average of potential costs for transportation from any continental United States location. However, according to Army officials, another command within Army attempts to ensure that the transportation price estimated will cover costs by simulating a worst case scenario by basing its estimates on locations distant from each other. Batch shipments. The military departments also vary in terms of how they estimate per item costs for items that could often be transported in batches. Air Force and Navy calculate how many of an item can fit in a container, and then divide the average price estimated to transport such a container by this batch size to determine final pricing, but Army does not. When Air Force and Navy estimate prices this way, they do not require shipments to be transported in a container of this size or for purchasers to buy or receive these items only in batches of this size, which could lead the price charged to vary greatly from the actual costs. For example, for one type of missile, Air Force determined that 20 of them could fit in a container and therefore divided the average price it had estimated to transport a container by 20 before submitting the price to DSCA. Therefore, if only one of the item were purchased, instead of the 20 built into the estimate, the transportation cost could be about 20 times the fee. The lack of specificity of DSCA s guidance has also led to large changes in one of the military departments estimated prices after staff turnover. According to the Air Force official who prepared Air Force s 2018 updates to these prices, that was the first year that official estimated these prices after another Air Force official had done so through 2015. The new Air Force official said that Air Force had not updated its prices during the previous 3 years because it lacked rates to estimate the costs of transporting explosive materials by ocean vessel. After receiving guidance from DSCA to exclude these rates from their estimates, the new Air Force official updated the prices for 2018. 
When doing so, this Air Force official found that some of the updated price estimates were much higher than the prior prices due to increased port handling rates, whereas the prices to transport items to foreign destinations were at times lower due to lower air rates used in the estimates. For example, the price to transport a certain item within the continental U.S. had been set at $278.00 per item for 2015 through 2017, and the 2018 price estimate was $8,447.00. DSCA initially accepted the updated prices, but Air Force later rescinded them after foreign partner countries voiced concerns about the increased prices affecting existing contracts and Air Force was unable to prove that the new estimates better approximated actual costs without the ability to compare actual bills with the price estimates. According to the responsible Air Force official, the calculation process from 2015 was used to recalculate the 2018 prices and was again used for 2019, albeit with current fiscal year rate information, due to continued uncertainty regarding this process. Since late 2016, the military departments have voiced concerns to DSCA regarding the difficulty of following DSCA's internal guidance to estimate these transportation prices. In particular, in late 2016, Army officials developed a white paper for DSCA that described challenges in developing these estimated prices posed by updates to how TRANSCOM calculates its transportation pricing. In September 2018, Air Force officials also raised various concerns regarding the accuracy of the prices, such as concerns about how the batch size of a shipment affects per-item costs and the lack of key details affecting transportation costs. Military department officials said they would prefer more specific guidance from DSCA that could help them to more uniformly calculate these prices. In January 2019, DSCA officials stated they were at an early stage of exploring possible changes to the information required to calculate these types of transportation prices. In May 2019, DSCA officials stated that they were still working to define the problem and how it could be addressed. Further research into the military departments' difficulties in establishing these price estimates and the costs and benefits of the methodologies they use would better inform DSCA on what pricing process could most accurately reflect costs moving forward. <5. Conclusions> FMS is one of the primary ways the U.S. government engages in security cooperation with its foreign partners, by annually selling them billions of dollars in defense items and services. When transporting FMS items on their behalf, DOD charges purchasers a transportation fee that, according to DOD, should involve "no profit, no loss": foreign partners should not be charged excessive fees, and fee revenue should cover the program's operating costs. However, from fiscal year 2007 to 2018, the FMS transportation accounts experienced substantial balance growth of over 1,300 percent. To address risks such as the historical unpredictability of collections and expenditures prior to recent dramatic account growth, DSCA implemented processes to conduct daily and annual management oversight of the accounts. However, the effectiveness of these processes is limited by a lack of specific internal guidance.
In particular, although the daily reviews are meant to keep DSCA aware of significant changes in the accounts and ensure that they maintain healthy balances, DSCA has not specified what should be considered as significant changes or how to calculate healthy target levels for the accounts. Lack of rigorous annual review processes has also led the annual reports provided to DSCA management to be missing key details. In particular, they have contained incomplete information on the causes for account trends and have omitted information on the source of $130 million that had been redistributed into this fee account from the FMS administrative fee account in fiscal years 2009 to 2011 to address a danger of insolvency that the FMS transportation accounts no longer face. The resulting reports inhibit DSCA management s ability to oversee the accounts at a time when they have grown so quickly. In addition, a lack of clear internal guidance explaining how to assess when redistributions are needed and when to return unused BPC-specific transportation funds may lead to a surplus of funds in the FMS transportation accounts that could be used for other purposes. Similarly, DSCA has established a process to review FMS transportation fee rates but this process has several weaknesses that may skew DSCA s rate setting decisions. DSCA s rate review process involves analysis of historical cost and fee data provided by the military departments, but due to unclear requests to the military departments, the process is burdensome and leads to data that are untimely and unsystematically sampled. Although DSCA requested such data from the military departments five times between fiscal years 2007 to 2018, DSCA only conducted rate reviews using these data twice because DSCA did not prioritize use of its resources for the other reviews. In addition, for the two reviews it did conduct, DSCA never used Air Force or Navy data because unclear guidance from DSCA and difficulties finding sufficient data across disparate DOD information systems limited the data Air Force and Navy could provide. Further, DSCA based their reviews on minimal internal guidance and used limited analysis and unclear criteria upon which to set new rates. The current rate review process and the overall fee rate structure reduce DSCA s administrative burden, but raise various concerns regarding the fee s equity, efficiency, and revenue stability. DSCA also has similarly unclear internal guidance for the military departments for situations when the FMS purchaser is charged a set transportation price per item instead of a transportation fee rate. By strengthening these rate setting processes, DSCA would enhance its ability to manage account balances and to make timely decisions to ensure the FMS transportation fee rate is set to cover related transportation costs but not overcharge FMS purchasers. <6. Recommendations for Executive Action> We are making the following 10 recommendations to DOD: The Secretary of Defense should ensure that the Director of DSCA clarify internal guidance for daily account reviews by specifying criteria for the level (such as percentage or dollar amount) of change in transportation account balances that would require DSCA to contact DFAS for further examination. 
(Recommendation 1) The Secretary of Defense should ensure that the Director of DSCA establish a methodology to calculate a target range, with desired upper and lower bounds, for FMS transportation account balances that could be used to better inform DSCA s account reviews. (Recommendation 2) The Secretary of Defense should ensure that the Director of DSCA modify the internal guidance for the annual review process to include the specific steps DSCA officials should take in preparing the annual report, including ensuring that they incorporate rigorous analysis into the annual reports. (Recommendation 3) The Secretary of Defense should ensure that the Director of DSCA develop internal guidance related to the redistribution of funds between the FMS trust fund fee accounts. Such internal guidance could include criteria for when to consider redistributing funds between accounts and for when to return those funds, how to analyze the amount of any redistributions needed, and how to clearly report any redistributions to DSCA management. (Recommendation 4) The Secretary of Defense should ensure that the Director of DSCA assess whether funds redistributed from the administrative account to the transportation account should be moved back to the FMS administrative account and document this decision. If the Director of DSCA determines that the funds should be moved back to the FMS administrative account, the Director should ensure the movement of funds in accordance with this decision. (Recommendation 5) The Secretary of Defense should ensure that the Director of DSCA develop internal guidance for the steps that DSCA, in combination with DFAS, should undertake when a BPC-specific transportation account closes to help ensure that any remaining unused funds are transferred to the miscellaneous receipts of the U.S. Treasury in accordance with DOD officials stated intention to do so. (Recommendation 6) The Secretary of Defense should ensure that the Director of DSCA create specific internal guidance for how and from where data should be obtained to be used for its transportation fee rate reviews and the timeframes the data should cover to ensure DSCA has a systematic sample upon which to base its rate setting decisions. This updated internal guidance should be based on consultations with the military departments, DFAS, and TRANSCOM on which sources of transportation cost and fee data are the most reliable and comparable for use in its FMS transportation fee rate reviews. (Recommendation 7) The Secretary of Defense should ensure that the Director of DSCA develop specific internal guidance to follow when performing transportation fee rate reviews. Such internal guidance could specify when these reviews should occur; a process to obtain management commitment to complete a review before DSCA requests that the military departments compile data for it; and a process for performing the reviews that includes developing clear, documented goals and an appropriate level of analysis to best ensure that DSCA s analysis meets those goals. (Recommendation 8) The Secretary of Defense should ensure that the Director of DSCA conduct a review of the current structure of the FMS transportation fee rate, in consultation with other relevant DOD agencies, to determine if other rate structures could better balance considerations related to administrative burden, equity, efficiency, and revenue adequacy. 
(Recommendation 9) The Secretary of Defense should ensure that the Director of DSCA clarify internal guidance for the military departments on how to calculate the estimated actual transportation prices to charge FMS purchasers for certain items, such as by specifying a calculation methodology. This updated internal guidance should be based on consultations with the military departments, TRANSCOM, and any other relevant DOD components on which sources of data and which calculation methodologies would be most accurate. (Recommendation 10) <7. Agency Comments> We provided a draft of this report to DOD and State for review and comment. DSCA provided written comments on behalf of DOD, which are reprinted in appendix II. DSCA concurred with all of our recommendations, and identified actions it plans to take to address them and initial steps it has begun to take toward addressing some of them. We also received technical comments from DOD, which we incorporated in our report as appropriate. State did not provide any written or technical comments. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of State, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6881 or BairJ@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Objectives, Scope & Methodology This report examines (1) the balances maintained in the Foreign Military Sales (FMS) transportation accounts for fiscal years 2007 through 2018, (2) the extent to which the Defense Security Cooperation Agency (DSCA) established and implemented policies and procedures to help ensure management oversight of the transportation accounts, and (3) the extent to which DSCA processes for setting transportation fee rates ensure that these rates are set appropriately. To examine the balances of the FMS transportation accounts, we analyzed fiscal year 2007 to 2018 overall collections, expenditures, and balance data for each of the individual FMS transportation accounts maintained by the Defense Finance and Accounting Service (DFAS) in the Defense Integrated Financial System (DIFS). We chose to review data from these fiscal years based on data availability. To determine the reliability of these data, we reviewed the data for internal consistency by reviewing for duplicate entries, gaps, and obvious errors, and we compared the data to similar data obtained for a prior review of two other FMS fees. We also reviewed relevant documentation, including annual account assessments conducted by DSCA and the internal control procedures for conducting such reviews. Lastly, we interviewed DFAS and DSCA officials to clarify questions about how to interpret the data. We did not conduct any independent testing of the data obtained from DFAS to determine whether the amounts reflected correct payments made toward accurate billings. As such, when presenting collections and expenditures, we note that they reflect the amount of funds in the aggregate moved into and out of the FMS transportation accounts. 
We determined the collections, expenditures, and balance data to be reliable for the purpose of showing the movement of funds in and out of the FMS transportation accounts and the accounts balances over time. To analyze trends in collections into and expenditures from the FMS transportation accounts, such as in figures 4 and 6, we adjusted the data to remove the effects of two redistributions from the FMS administrative fee account that took place in fiscal years 2009 and 2011, as well as amounts that were moved into certain new Building Partner Capacity (BPC) transportation accounts to initially fund them in fiscal years 2012 and 2015. We reviewed documentation related to the two redistributions of funds from the FMS administrative fee account to the transportation account and the initial funding amounts allocated to new BPC transportation accounts, and interviewed DFAS and DSCA officials to understand how they accounted for these fund movements. To assess the extent to which DSCA established and implemented policies and procedures to help ensure management oversight of the FMS transportation accounts, we reviewed DSCA internal guidance included in DSCA s Managers Internal Control Program (MICP) procedures for daily and annual FMS transportation account reviews, federal internal control standards, our prior report on federal user fees, and documentation showing how DSCA officials implemented those procedures. We also interviewed DSCA officials responsible for these reviews. Daily reviews. We reviewed a DSCA spreadsheet in which DSCA officials documented the daily reviews they conducted in fiscal year 2018. We chose to review this one fiscal year of data because it was the most recent complete fiscal year and would thereby be most relevant to current implementation. We also analyzed these data against the related MICP procedures, interviewed relevant DSCA and DFAS officials, and requested documentation of related correspondence to determine the extent to which DSCA consistently took any actions in response to these reviews. Because the data in these daily reviews is sourced from the same balance data in DIFS as we analyzed for our first objective, we compared the data between the two sources to ensure its consistency, and interviewed DFAS and DSCA officials about how these data were pulled for the daily reports. Based on these steps, we determined these data to be sufficiently reliable for assessing DSCA s implementation of the daily review process. Annual reviews. We reviewed the annual reports DSCA created for fiscal years 2015 to 2018 all of the years for which DSCA created such reports and interviewed DSCA officials about their process for creating these reports and other aspects of the MICP procedures for the annual review. To determine the extent to which the annual reports accurately convey information about the causes of trends in the accounts, we compared account expenditures data to oil price data for fiscal years 2007 to 2018. We performed this analysis because DSCA s annual reports cite declining oil prices as a factor contributing to the increasing account balances in the FMS transportation accounts. For data on oil prices, we analyzed data from the U.S. Energy Information Agency on Cushing, Oklahoma, West Texas Intermediate oil prices by month, which is an established source for these data that is used commonly as a global benchmark for oil prices. As such, we determined these data to be reliable to use for this purpose. 
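A minimal sketch of the kind of comparison described above, assuming the two monthly series had already been exported to flat files, is shown below; the file names and column labels are hypothetical placeholders, not the actual data extracts or code used for the review.

# Illustrative sketch of the comparison described above: pairing monthly FMS
# transportation account expenditures with monthly West Texas Intermediate (WTI)
# spot prices for fiscal years 2007 through 2018 and checking how strongly the
# two series move together. File names and column labels are hypothetical.
import pandas as pd

expenditures = pd.read_csv("difs_monthly_expenditures.csv", parse_dates=["month"])
wti_prices = pd.read_csv("eia_wti_monthly_spot.csv", parse_dates=["month"])

merged = expenditures.merge(wti_prices, on="month", how="inner")
merged = merged[(merged["month"] >= "2006-10-01") & (merged["month"] <= "2018-09-30")]

# A correlation near zero would undercut the claim that falling oil prices
# explain falling expenditures from the accounts.
correlation = merged["expenditures"].corr(merged["wti_price"])
print(f"Months compared: {len(merged)}")
print(f"Correlation of monthly expenditures with WTI spot price: {correlation:.2f}")

Joining the two series on month keeps them aligned before any correlation or trend comparison is computed.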
We also reviewed legislation that changed the rates the Department of Defense (DOD) can charge for FMS air shipments, and interviewed DSCA and U.S. Transportation Command officials about the effect and timing of this legislative change. We also reviewed the fiscal year 2015 to 2018 annual reports for the FMS transportation and administrative accounts to determine whether the redistributions that had been made from the FMS administrative account to the FMS transportation accounts were clearly reported, and reviewed related internal guidance in DOD s Financial Management Regulations. For BPC-specific transportation accounts, we reviewed DOD s Financial Management Regulations and related DSCA documentation against federal internal control standards regarding the clarity of internal control guidance. We also interviewed DSCA officials and received written responses to questions from DOD s Office of the Under Secretary of Defense (Comptroller) regarding the process DSCA should follow when any of the BPC-specific transportation accounts close. To review the extent to which DSCA processes ensure that transportation fee rates are set appropriately, we reviewed DSCA guidance and interviewed DSCA and military department officials about the different processes DSCA uses to set transportation fees. For the transportation fee rate review, we reviewed DSCA s MICP procedures and the requests DSCA sent to the military departments for data to analyze in its rate reviews against the Statement of Federal Financial Accounting Standards No. 4, our prior report on federal user fees, and federal internal control standards. To understand the reliability of the data the military departments submitted to DSCA and what these data showed in terms of the alignment between transportation costs and fees, we reviewed the data, including by performing internal consistency checks on the data, such as by reviewing it for duplicate entries, gaps, or obvious errors. We also reviewed any military department procedures for compiling these data and interviewed or received written responses from military department officials responsible for compiling the data. Based on these steps, we determined that these data were reliable for our purposes of making some comparisons between costs and fees for the sample provided. However, as noted earlier in this report, the departments could only provide partial data, which they did not select using systematic sampling techniques to ensure the data were indicative of the full population of shipments. Therefore, we determined that these data were unsuitable for DSCA s purpose of making fee-setting decisions. We also reviewed DSCA documentation of the analysis it performed for its 2009 and 2018 transportation fee rate reviews, including analysis spreadsheets and briefings to DSCA management on the reviews results. Regarding instances when DOD charges FMS purchasers estimated transportation prices instead of a transportation fee rate, we reviewed DSCA guidance on this process in the Security Assistance Management Manual against related federal internal control standards. We also reviewed any internal guidance the military departments have developed to further guide these estimation processes, examples of the military department estimation processes, and other documents that showed concerns regarding these processes that the military departments had previously raised to DSCA. 
We interviewed and sent questions for written responses to DSCA and military department officials regarding these processes and the military departments concerns. We conducted this performance audit from May 2018 to September 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of Defense Appendix III: GAO Contact and Staff Acknowledgments <8. GAO Contact> <9. Staff Acknowledgments> In addition to the contact named above, Cheryl Goodman (Assistant Director), Heather Latta (analyst in charge), Adam Peterson, Benjamin L. Sponholtz, John (Ryan) Bolt, Ming Chen, John Hussey, and Brandon Voss made key contributions to this report. Martin de Alteriis, Christopher Keblitis, Grace Lui, Susan E. Murphy, Laurel Plume, Heather Rasmussen, and Chanetta Reed also contributed to this report. | Why GAO Did This Study
The FMS program is one of the primary ways the U.S. government supports its foreign partners, by annually selling them billions of dollars of items and services. According to DOD, the FMS program is intended to operate on a “no profit, no loss” basis, with purchasers not charged excessive fees and fee revenue covering operating costs. Foreign partners can arrange for their own transportation of FMS items or pay DOD a transportation fee to cover the costs of DOD transporting them. The fees are collected into transportation accounts in the FMS Trust Fund.
House Report 114-537 and Senate Report 114-255 included provisions that GAO review DSCA's management of FMS fees. This report examines (1) the balances of the FMS transportation accounts for fiscal years 2007 through 2018, (2) DSCA's management oversight of the accounts, and (3) DSCA's processes for setting transportation fees. GAO analyzed DOD data and documents, and interviewed DOD officials.
What GAO Found
Fees charged by the Department of Defense (DOD) for the transportation of defense items sold through the Foreign Military Sales (FMS) program are intended to approximate DOD's transportation costs over time. However, GAO found that the FMS transportation accounts accrued a combined balance of $680 million by the end of fiscal year 2018. Much of the growth occurred from the end of fiscal year 2011 through fiscal year 2018, when the account grew by approximately $630 million.
The Defense Security Cooperation Agency (DSCA) has developed limited management oversight guidance for the FMS transportation accounts, which has contributed to the substantial balance growth. DSCA internal guidance requires daily and annual reviews of the accounts to monitor for significant changes in account balances and to ensure the accounts maintain a “healthy” level. However, internal guidance does not define a significant change or “healthy” level, such as a target range for the account balances. This has led to inconsistent reviews and limited oversight of the recent balance growth. DSCA also has no internal guidance on how to perform certain aspects of its annual reviews or what information to include in the resulting reports. As a result, DSCA officials have produced reports with incomplete information, such as on the causes for trends in the account balances, undermining DSCA management's ability to make informed decisions about the accounts.
DSCA's processes for setting the FMS transportation fee do not ensure that aggregate fees approximate aggregate costs. For its transportation fee rate reviews, DSCA sends requests to the military departments for historical cost and fee data that lack specificity, such as on timeframes, sampling methodology, and data sources. As a result, DSCA has analyzed data that are not timely or systematically sampled. In addition, military department officials reported difficulty providing the requested data in part because DSCA's guidance did not specify data sources. Consequently, for the most recent review, Air Force and Navy were unable to find sufficient matching cost and fee data for DSCA to consider them usable. Further, DSCA has established no goals for rate reviews and has no written procedures to follow in performing them. These factors together contributed to recent growth in the FMS transportation account balances and will continue to hinder DSCA's ability to make appropriate rate-setting decisions moving forward.
What GAO Recommends
GAO is making 10 recommendations to DOD, including six recommendations to strengthen DSCA's oversight of the transportation accounts—such as by clarifying internal guidance—and four recommendations to improve its transportation fee setting processes. DOD concurred with all of the recommendations and identified actions it plans to take to address them. |
gao_GAO-19-255T | gao_GAO-19-255T_0 | <1. The Coast Guard Did Not Establish a Sound Business Case for the Polar Icebreaker Program> In September 2018, we found the Coast Guard did not have a sound business case when it established the acquisition baselines for its polar icebreaker program in March 2018 due to risks in four main areas: design, technology, cost, and schedule. Our prior work has found that successful acquisition programs start with solid, executable business cases before setting program baselines and committing resources. A sound business case requires balance between the concept selected to satisfy operator requirements and the resources (design knowledge, technologies, funding, and time) needed to transform the concept into a product, which in this case is a ship with polar icebreaking capabilities. Without a sound business case, acquisition programs are at risk of breaching the cost, schedule, and performance baselines set when the program was initiated; in other words, experiencing cost growth, schedule delays, and reduced capabilities. At the heart of a business case is a knowledge-based approach. We have found that successful shipbuilding programs build on attaining critical levels of knowledge at key points in the shipbuilding process before significant investments are made (see figure 1). We provide additional information below on each of the four main risks that affect the soundness of the polar icebreaker program's business case. <1.1. The Coast Guard Plans to Have a Stable Design before Starting Construction but Did Not Assess Design Maturity Prior to Setting Program Baselines> The Coast Guard expressed a commitment to having a stable design for the polar icebreaker program prior to the start of lead ship construction, but it set the program's baselines before conducting a preliminary design review, a systems engineering event that is intended to verify that the contractor's design meets the requirements of the ship specifications and is producible. Shipbuilding best practices we identified in 2009 found that design stability on a ship is achieved upon completion of the basic and functional designs. The basic design includes fixing the ship steel structure; routing all major distributive systems, including electricity, water, and other utilities; and ensuring the ship will meet the performance specifications. The functional design includes further iteration of the basic design, such as providing information on the exact position of piping and other outfitting in each block, and completing a 3D product model. At this point of design stability, the shipbuilder has a clear understanding of the ship structure as well as how every system is set up and routed throughout the ship. Consistent with our best practices, prior to the start of construction on the lead ship, the Coast Guard plans to require the shipbuilder to complete basic and functional designs, develop a 3D model output, and provide at least 6 months of production information to support the start of construction. Although the Coast Guard plans to have a stable design prior to ship construction, it set the program's acquisition program baselines prior to gaining knowledge on the feasibility of the selected shipbuilder's design. Program baselines inform DHS's and the Coast Guard's decisions to commit resources.
Our best practices for knowledge-based acquisitions state that before program baselines are set, programs should hold key systems engineering events, such as a preliminary design review, to help ensure that requirements are defined and feasible and that the proposed design can be met within cost, schedule, and other system constraints. The Coast Guard has yet to conduct a preliminary design review for the program because DHS s current acquisition policy does not require programs to do so until after setting program baselines. However, in April 2017, we found that DHS s sequencing of the preliminary design review is not consistent with our acquisition best practices, which state that programs should pursue a knowledge-based acquisition approach that ensures program needs are matched with available resources such as technical and engineering knowledge, time, and funding prior to setting baselines. As a result, we recommended that DHS update its acquisition policy to require key technical reviews, including the preliminary design review, to be conducted prior to approving programs baselines. DHS concurred with this recommendation and stated that it planned to initiate a study to assess how to better align its processes for technical reviews and acquisition decisions. Upon completion of the study, DHS plans to update its acquisition policies, as appropriate. As of June 2018, DHS indicated that it had completed its study and was in the process of updating its acquisition policies. GAO will review the policies once complete to determine if the updates meet the intent of this recommendation. By setting the polar icebreaker program s baselines prior to gaining knowledge on the shipbuilder s design, the Coast Guard has established cost, schedule, and performance baselines without a stable or mature design. Although completing the preliminary design review after setting program baselines is consistent with DHS policy, this puts the Coast Guard at risk of breaching its established baselines and having to revise them later in the acquisition process, after a contract has been signed and significant resources have been committed to the program. At that point, the program will be well underway and it will be too late for decision makers to make appropriate tradeoff decisions between requirements and resources without causing disruptions to the program. <1.2. Coast Guard Intends to Use Proven Technologies for the Polar Icebreaker Program but Has Not Assessed Their Maturity> The Coast Guard intends to use what it refers to as state-of-the-market or proven technologies for the polar icebreaker program, but it has not yet conducted a technology readiness assessment to determine the maturity of key technologies prior to setting program baselines. This approach is inconsistent with our best practices for technology readiness. A technology readiness assessment is a systematic, evidence-based process that evaluates the maturity of critical technologies hardware and software technologies critical to the fulfillment of the key objectives of an acquisition program. According to our best practices, a technology readiness assessment should be conducted prior to program initiation. At the time of our earlier review, Coast Guard officials told us the polar icebreaker program does not have any critical technologies and thus, does not need to conduct a technology readiness assessment. 
From design studies and industry engagement, Coast Guard officials determined that the key technologies required for the polar icebreakers, such as the integrated power plant and azimuthing propulsors, are available commercially and do not need to be developed. Figure 2 provides additional information on the risks for these key technologies, as well as design risks for an icebreaker s hull form. Coast Guard officials stated that the integrated power plant is the standard power plant used on domestic and foreign icebreakers. Coast Guard officials told us that similarly, market survey data on azimuthing propulsors show that ice-qualified azimuthing propulsors in the power range required have been used on foreign icebreakers. However, according to our best practices, critical technologies are not just technologies that are new or novel. Technologies used on prior systems can also become critical if they are being used in a different form, fit, or function. Based on our analysis of available Coast Guard information, we believe the polar icebreaker program s planned integrated power plant and azimuthing propulsors should be considered critical technologies given their criticality in meeting key performance parameters, how the technologies are being reapplied to a different operational environment from prior uses of the technologies, and the extent to which they pose major cost risks. By not conducting a technology readiness assessment and identifying, assessing, and maturing its critical technologies prior to setting the program s program baselines, the Coast Guard is potentially underrepresenting technical risk and understating its cost, schedule, and performance risks. <1.3. Polar Icebreaker Program s Cost Estimate Substantially Met Best Practices but Is Not Fully Reliable> We found that the Navy s lifecycle cost estimate used to inform the polar icebreaker program s $9.827 billion cost baseline substantially adheres to most of our cost estimating best practices; however, the estimate is not fully reliable. The cost estimate is not fully reliable because it only partially met best practices for being credible. Highlights from our assessment of the polar icebreaker program s lifecycle cost estimate are detailed below: Comprehensive: substantially met. The estimate includes government and contractor costs over the full lifecycle of all three ships and documents detailed ground rules and assumptions, such as the learning curve used to capture expected labor efficiencies for follow-on ships. However, the costs for disposal of the three ships were not at a level of detail to ensure that all costs were considered and not all assumptions, particularly regarding operating and support costs, were varied to reflect the impact on cost should these assumptions change. Well-documented: substantially met. The estimate s documentation mostly captured the source data used as well as the primary methods, calculations, results, rationales, and assumptions used to generate each cost element. However, the documentation alone did not provide enough information for someone unfamiliar with the cost estimate to replicate what was done and arrive at the same results. Accurate: substantially met. The estimate was properly adjusted for inflation, and we did not find any mathematical errors in the estimate calculations we inspected. Officials stated that labor and material cost data from recent, analogous programs were used in the estimate. 
While the documentation does not discuss the reliability, age, or relevance of the cost data, Navy officials provided us with additional information regarding those data characteristics. Credible: partially met. The Navy only modeled cost variation in the detail design and construction portion of the program and excluded from its analyses any risk impacts related to the remainder of the acquisition, operating and support, and disposal phases, which altogether comprise about 75 percent of the lifecycle cost. Without performing a sensitivity analysis on the entire life cycle cost of the three ships, it is not possible for the Navy to identify key elements affecting the overall cost estimate. Further, without performing a risk and uncertainty analysis on the entire life cycle cost of the three ships, it is not possible for the Navy to determine a level of confidence associated with the overall cost estimate. By not quantifying important risks, the Navy may have underestimated the range of possible costs for about three-quarters of the entire program. The estimate provides an overly optimistic assessment of the program s vulnerability to cost growth should risks be realized or current assumptions change. This, in turn, may underestimate the lifecycle cost of the program. <1.4. Polar Icebreaker Program s Optimistic Schedule Is Driven by Capability Gap and Does Not Reflect Robust Analysis> The Coast Guard s planned delivery dates of 2023, 2025, and 2026 for the three ships were not informed by a realistic assessment of shipbuilding activities, but rather were primarily driven by the potential gap in icebreaking capabilities once the Polar Star reaches the end of its service life (see figure 3). The Polar Star s service life is estimated to end between fiscal years 2020 and 2023. This creates a potential heavy polar icebreaker capability gap of about 3 years, if the Polar Star s service life were to end in 2020 and the lead polar icebreaker were to be delivered by the end of fiscal year 2023 as planned. If the lead ship is delivered later than planned in this scenario, the potential gap could be more than 3 years. The Coast Guard is planning to recapitalize the Polar Star s key systems starting in 2020 to extend the service life of the ship until the planned delivery of the second polar icebreaker (see figure 4). Further, we compared the program s planned construction schedule to the construction schedules of delivered lead ships for major Coast Guard and Navy shipbuilding programs active in the last 10 years as well as the Healy, the Coast Guard s only medium polar icebreaker. We found that the polar icebreaker s lead ship construction cycle time of 2.5 to 3 years is optimistic, as only 3 of the 10 ships in our analysis were constructed in 3 years or less. Further, as another point of comparison, the Healy was constructed in just under 4.5 years. An unrealistic schedule puts the Coast Guard at risk of not delivering the icebreakers when promised and the potential gap in icebreaking capabilities could widen. Just as importantly, our prior work on shipbuilding programs has shown that establishing optimistic program schedules based on insufficient knowledge can create pressure for programs to make sacrifices elsewhere, which can lead to work being performed concurrently, costly rework, and further delays. 
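To illustrate the kind of risk and uncertainty analysis discussed above for the cost estimate, the minimal sketch below treats each lifecycle phase as a cost range rather than a point estimate and reads a confidence level off the simulated totals. The phase values and ranges are hypothetical assumptions for the example, not figures from the Navy's estimate.

# Minimal sketch of a risk and uncertainty analysis of the kind discussed above:
# each lifecycle phase is modeled as a cost range rather than a point estimate,
# and the simulated totals yield a confidence level for the overall estimate.
# All phase values and ranges below are hypothetical, not the Navy's figures.
import random

random.seed(1)

# (low, most likely, high) lifecycle cost per phase, in billions of dollars.
PHASES = {
    "acquisition":           (2.5, 3.0, 4.0),
    "operating_and_support": (5.0, 6.5, 9.0),
    "disposal":              (0.1, 0.3, 0.6),
}


def simulate_total():
    # Draw one possible lifecycle cost by sampling each phase independently.
    return sum(random.triangular(low, high, likely)
               for low, likely, high in PHASES.values())


totals = sorted(simulate_total() for _ in range(20_000))
point_estimate = sum(likely for _, likely, _ in PHASES.values())

# Confidence level: the share of simulated outcomes at or below the point estimate.
confidence = sum(t <= point_estimate for t in totals) / len(totals)
median, p80 = totals[len(totals) // 2], totals[int(len(totals) * 0.8)]
print(f"Point estimate ${point_estimate:.1f} billion is at about the "
      f"{confidence:.0%} confidence level")
print(f"50th percentile: ${median:.1f} billion; 80th percentile: ${p80:.1f} billion")

Running this kind of simulation across all phases, rather than only detail design and construction, is what would support a stated confidence level for the full lifecycle cost estimate.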
To address the risks we identified and establish a sound business case, we made a number of recommendations in our September 2018 report to DHS, Coast Guard, and the Navy, including: Conducting a technology readiness assessment in accordance with best practices, identifying critical technologies, and developing a plan to mature any technologies not designated to be mature before detail design of the lead ship begins; Updating the program s cost estimate in accordance with best practices before the contract option for construction of the lead ship is awarded; Developing a program schedule in accordance with best practices to set realistic schedule goals for all three ships before the contract option for construction of the lead ship is awarded; and Updating the program s acquisition program baselines prior to authorizing lead ship construction, after completion of the preliminary design review, and after it has gained the requisite knowledge on its technologies, cost, and schedule. DHS concurred with all of our recommendations and identified actions it planned to take to address them. For example, earlier this month, the Coast Guard indicated that it has identified a preliminary list of potential critical technologies and is in the process of developing a technology readiness assessment plan. The Coast Guard also plans to update the program s cost estimate within 8 months of the contract award and update the program schedule within 3 months of the contract award. <2. How the Polar Icebreaker Program Will Be Funded Moving Forward is Unclear> Of the $9.827 billion estimated for the lifecycle costs of the polar icebreaker program, about $3 billion is for acquisition costs. From 2013 through 2018, the polar icebreaker program has received $360 million in funding $60 million in Coast Guard appropriations and $300 million in Navy appropriations. In addition, according to Coast Guard officials, in fiscal year 2017, Coast Guard reprogrammed $30 million in fiscal year 2016 appropriations for the polar icebreaker program from another program (see figure 5). According to Coast Guard and Navy officials, the Navy plans to use the $300 million in Navy appropriations in fiscal year 2019 to fund the advanced planning, design, engineering, and long lead time materials for the first polar icebreaker. As part of the polar icebreaker program s acquisition strategy and reflected in the March 2018 request for proposals, the Navy plans to establish options for the subsequent detail design and construction of each of the three ships. The request for proposals specified that the options will be priced as fixed-price incentive type (see table 1). The Navy did not request any funding in fiscal year 2019 for the polar icebreaker program, while Coast Guard requested $30 million. Subsequently, after discretionary budget caps were relaxed by Congress, the administration s fiscal year 2019 budget addendum requested an additional $720 million in fiscal year 2019 Coast Guard appropriations for the program. As the program prepares to award a contract in fiscal year 2019 worth billions of dollars if all the options are exercised, it is unclear to what extent the program will be funded using Coast Guard or Navy appropriations or how much total funding will be provided. In conclusion, as the Coast Guard embarks on the acquisition of its new polar icebreakers to address capability gaps in the Arctic and Antarctic regions, it faces a number of key acquisition and funding risks. 
DHS, the Coast Guard, and the Navy must gain key acquisition knowledge before committing significant resources to the program while Congress faces key funding and tradeoff considerations. To put the polar icebreaker program in a position to succeed, Congress and the agencies must remain committed to establishing and executing a sound business case for the program. Chairman Mast, Ranking Member Garamendi, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions. <3. GAO Contact and Staff Acknowledgments> If you or your staff have any questions about this statement, please contact Marie A. Mak, (202) 512-4841 or makm@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include Rick Cederholm, Assistant Director; Peter Anderson; Kurt Gurka; Claire Li; and Roxanna Sun. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Why GAO Did This Study
To maintain heavy polar icebreaking capability, the Coast Guard, in collaboration with the Navy, plans to acquire up to three new heavy polar icebreakers. The Navy plans to award a contract in 2019 for the polar icebreaker program. GAO has found that before committing resources, successful acquisition programs begin with sound business cases, which include plans for a stable design, mature technologies, a reliable cost estimate, and a realistic schedule.
This statement addresses, among other things, the key acquisition risks facing the polar icebreaker program. This statement is primarily based on GAO's April 2018 and September 2018 reports examining the Coast Guard's polar icebreaker acquisition, and also draws from GAO's extensive body of published work examining the Coast Guard's and the Navy's shipbuilding efforts. In its prior work, GAO analyzed Coast Guard and Navy guidance, data, and documentation, and interviewed Coast Guard and Navy officials.
What GAO Found
The Coast Guard—a component of the Department of Homeland Security (DHS)—did not have a sound business case in March 2018, when it established the cost, schedule, and performance baselines for its heavy polar icebreaker acquisition program, because of risks in four key areas:
Design. The Coast Guard set program baselines before conducting a preliminary design review, which puts the program at risk of having an unstable design, thereby increasing the program's cost and schedule risks. While setting baselines without a preliminary design review is consistent with DHS's current acquisition policy, it is inconsistent with acquisition best practices. Based on a prior GAO recommendation, DHS is currently evaluating its policy to better align technical reviews and acquisition decisions.
Technology. The Coast Guard intends to use proven technologies for the program, but did not conduct a technology readiness assessment to determine the maturity of key technologies prior to setting baselines. Coast Guard officials indicated such an assessment was not necessary because the technologies the program plans to employ have been proven on other icebreaker ships. However, according to best practices, such technologies can still pose risks when applied to a different program or operational environment, as in this case. Without such an assessment, the program's technical risk is underrepresented.
Cost. The lifecycle cost estimate that informed the program's $9.8 billion cost baseline was not fully reliable because it only partially met GAO's best practices for being credible. It did not quantify the range of possible costs over the entire life of the program. As a result, the cost estimate may underestimate the total funding needed for the program. However, the estimate substantially met GAO's best practices for being comprehensive, well-documented, and accurate.
Schedule. The Coast Guard's planned delivery dates were not informed by a realistic assessment of shipbuilding activities, but rather driven by the potential gap in icebreaking capabilities once the Coast Guard's only operating heavy polar icebreaker—the Polar Star—reaches the end of its service life (see figure).
GAO's analysis of selected lead ships for other shipbuilding programs found the icebreaker program's estimated construction time of 3 years is optimistic. As a result, the Coast Guard is at risk of not delivering the icebreakers when promised and the potential gap in icebreaking capabilities could widen.
What GAO Recommends
In September 2018, GAO recommended, among other things, that the polar icebreaker program update program baselines following a preliminary design review, conduct a technology readiness assessment, re-evaluate its cost estimate, and develop a schedule according to best practices. DHS concurred with all of GAO's recommendations and identified actions it plans to take to address them. |
gao_GAO-20-32 | gao_GAO-20-32_0 | <1. Background> In 2017, three sequential hurricanes Harvey, Irma, and Maria created an unprecedented demand for federal disaster response and recovery resources. According to the Federal Emergency Management Agency (FEMA), these hurricanes ranked among the top five costliest on record, costing $125 billion (Harvey); $90 billion (Maria); and $50 billion (Irma). As a result of these storms, Florida, Texas, and Puerto Rico faced hardships, including devastation to infrastructure, such as highways and bridges. The island of Puerto Rico in particular was severely affected, which created multiple challenges for federal response efforts. Specifically, within a 2-week period Puerto Rico was hit by both hurricanes Irma and Maria, resulting in power outages that lasted up to 11 months and the need for commodities, such as food and water, and requiring one of the largest recovery efforts in history. The federal response was complicated by several factors, including the remoteness of the island, limited local preparedness, outdated infrastructure, and workforce capacity constraints. The Emergency Relief Program provides assistance to repair or reconstruct highways and bridges on federal-aid highways and roads and bridges on federally owned public lands that have sustained serious damage from natural disasters or catastrophic failures. FEMA is responsible for providing funds to repair and replace roadways damaged as a result of disasters that are not eligible for federal-aid highway funding. For natural disasters or other events to be eligible for emergency relief funding, the President must declare the event to be an emergency or a major disaster under the Robert T. Stafford Disaster Relief and Emergency Assistance Act, or the governor must declare an emergency with the concurrence of the Secretary of Transportation. Damage to highways must be severe, occur over a wide area, and result in unusually high expenses to the highway agency. Congress has provided funds for highway emergency relief since at least 1938 and, since 1972, has authorized $100 million annually in contract authority for the Emergency Relief Program to be paid from the Highway Trust Fund. Accordingly, FHWA may obligate up to $100 million of funds from the Highway Trust Fund in any one fiscal year for the program. Congress also regularly provides funds to the Emergency Relief Program from general revenues through supplemental appropriations. Most recently, Congress passed the Bipartisan Budget Act of 2018 in February 2018, and the Additional Supplemental Appropriations for Disaster Relief Act, 2019 in June 2019, which included more than $3 billion for the FHWA Emergency Relief Program to repair damages caused by a number of natural disasters. According to FHWA officials, these funds will be used to address damage related to the 2017 hurricanes. FHWA's Emergency Relief Program regulations further define policies for the program and the eligibility requirements for selecting projects. These regulations state that emergency relief funds are not intended to correct preexisting deficiencies or duplicate assistance available under another federal program or compensation from insurance or other sources. Emergency relief projects are to be promptly constructed, and construction funds must be obligated within two years (i.e., by the end of the second fiscal year following the disaster) unless suitable justification is provided to FHWA.
Emergency relief regulations specify the activities that emergency relief funds may be used for as well as those activities they may not be used for, such as reconstruction of facilities affected by long-term, predictable developing situations or deficient bridges scheduled for replacement with other funds. Because the statute and its regulations are, by necessity, fairly broad, FHWA publishes guidance to further assist the agency in administering the Emergency Relief Program. The Emergency Relief Manual, updated in 2013, is a guide for FHWA and state and local agency personnel for requesting, obtaining, and administering emergency relief funds. The manual provides additional information and examples of the types of activities and projects that are both eligible and ineligible for funding, the process for states to apply for emergency relief funding, and the documents and reports that are required to be prepared. FHWA's Emergency Relief Order, issued in 2016, further defines the application and review process and the roles and responsibilities of FHWA and state personnel.

As with other federal-aid highway programs, the Emergency Relief Program is a partnership in which states plan and execute projects to complete necessary repairs, and FHWA provides assistance to states in applying for funds and conducts oversight to determine eligibility and ensure that federal requirements are met. States and territories are required to conduct damage inspections, submit documentation to their respective FHWA division office to determine if repairs are eligible for federal funds, enter into project agreements, and complete final project inspections. The FHWA division office is responsible for reviewing damage inspections to determine whether proposed projects are eligible for emergency relief funds. FHWA headquarters officials use the information collected from these inspections to allocate funds to each state or territory for particular events; division offices obligate those funds and ultimately reimburse the states for allowable expenses.

The Emergency Relief Program's authorizing statute and FHWA's regulations and guidance distinguish between the federal share payable for emergency and permanent repairs. Specifically, according to FHWA regulations, emergency repairs are undertaken during or immediately after a disaster to restore essential traffic, minimize the extent of damage, or protect the remaining facilities. Emergency repairs are eligible to receive 100 percent federal reimbursement if they are accomplished within 180 days of the disaster. By statute, this deadline may be extended taking into consideration any delay in the ability of the state to access damaged facilities to evaluate damage and the cost of repair. FHWA and federal regulations also state that emergency repairs can be completed by state and local maintenance workforces, and qualify for categorical exclusions from the National Environmental Policy Act's (NEPA) requirements. FHWA's Emergency Relief Manual further characterizes emergency repairs as repairs that can be completed relatively quickly, may be temporary in nature, and typically require little preliminary engineering or design effort (e.g., erecting barricades and detour signs). States and local transportation agencies may begin emergency repairs without prior FHWA authorization. Permanent repairs are undertaken after the occurrence of a disaster to restore the highway to its pre-disaster conditions.
Permanent repairs receive a federal share, between 80 and 90 percent, depending on the type of roadway being repaired. However, in response to the level of devastation in Puerto Rico, Congress provided a 100 percent federal share for all emergency relief projects, including permanent repairs, necessary to address damage caused by hurricanes Irma and Maria in Puerto Rico. FHWA s regulations state that permanent repairs are to be done through a competitively bid contract, unless the state demonstrates that another method is more cost effective (e.g., the use of abbreviated plans or a shortened advertisement period). In addition, many, but not all, permanent repairs meet the criteria for categorical exclusions from NEPA s requirements. FHWA s Emergency Relief Manual indicates that typically permanent repairs (1) should have obligated funds for construction within 2 years, (2) require the development of plans, specifications, and estimates, and (3) must receive prior FHWA authorization. Our prior work has raised concerns about FHWA s management and oversight of the Emergency Relief Program. In 2007 we reported on the expanding scope of eligible activities funded by the Emergency Relief Program over time, resulting in projects that went beyond the original intent of the program. We recommended to FHWA and suggested that Congress consider tightening the program s eligibility standards, but this recommendation has not been implemented and FHWA does not plan to do so. In 2012, we raised concerns about FHWA s partnership relationship with the states, particularly its oversight of the Emergency Relief Program, which we first reported in November 2011. For example, we were unable to determine the basis of FHWA s eligibility decisions on 81 emergency relief projects representing $193 million in federal funds because of missing or incomplete documentation. In addition, we identified cases where FHWA showed a lack of independence in decisions, placing its partners interests above federal interests. For example, FHWA allowed two states to retain unused Emergency Relief Program allocations to fund new emergencies, despite FHWA s policy that these funds are made available to other states with potentially higher- priority emergencies. We concluded that while FHWA s partnership relationship with the states yields benefits such as proactively identifying issues before they become problems, it also poses risks. Thus we recommended that FHWA develop a strategy to mitigate these risks. In March 2014, FHWA announced it had established an enhanced risk- based oversight approach that, while not targeting the specific risks we identified related to state partnerships, addressed the intent of our recommendation to increase transparency and consistency. <2. To Date, States and Puerto Rico Have Identified $1 Billion in Highway and Bridge Damages Caused by the 2017 Hurricanes> Following hurricanes Harvey, Irma, and Maria, state and local officials prepared damage assessments that identified more than 2,500 projects eligible for emergency relief funds costing approximately $1 billion. Projects range in size and cost from replacing signage and traffic signals to multi-million dollar bridge and highway repairs (see fig. 1). Following a number of natural disasters in 2017 including hurricanes Harvey, Irma, and Maria Congress appropriated more than $1 billion to the Emergency Relief Program in February 2018 to help states repair and rebuild federal-aid highways. 
As of September 2019, FHWA has allocated $634 million to repair hurricane-related damage in Florida, Texas, and Puerto Rico. Specifically, immediately following the hurricanes in August, September, and November 2017 FHWA allocated $122.5 million in quick release funding to Florida, Texas, and Puerto Rico. In April 2018, FHWA allocated an additional $242 million to Florida, Puerto Rico, and Texas. Further, on February 6, 2019 FHWA allocated $130 million more to Puerto Rico for damages caused by hurricanes Irma and Maria (see fig. 2). FHWA subsequently de-allocated $69 million from Florida on February 27, 2019, because state officials determined the funds were no longer necessary for hurricane-related repairs. Most recently, in September 2019, FHWA allocated an additional $208 million to Puerto Rico. While the estimated repair costs exceed the amount of funds allocated by FHWA, officials stated that additional emergency relief funds are allocated and reimbursed approximately every 6 months and states and territories will be reimbursed for all eligible expenses related to hurricanes Harvey, Irma, and Maria as they are completed. These funding decisions are to be made as FHWA continues to review and approve projects and Congress appropriates additional funds. As we have noted in prior work, the $100 million in annual authorized funding has not been enough to meet the needs of the program. Therefore, states have relied on supplemental appropriations to fund repairs caused by natural disasters and catastrophic events. <3. FHWA Did Not Justify Key Decisions and May Have Inappropriately Classified Emergency Relief Projects> We identified a number of cases in which FHWA did not document decisions to classify emergency relief projects as emergency repairs (those necessary to restore essential traffic, undertaken during or immediately after a disaster and generally accomplished within 180 days) as opposed to permanent repairs (those undertaken to restore a facility to pre-disaster conditions). Specifically, 22 out of 25 emergency repair projects we reviewed which account for approximately $50 million in emergency relief funds did not include a documented justification for classifying repairs as an emergency repair instead of a permanent repair. In addition, out of approximately 1,200 eligible projects in Puerto Rico, FHWA officials reported undertaking 34 more than 180 days after the hurricanes and continuing to classify them as emergency repairs without documenting the basis for doing so. Without documentation it is not possible to definitively determine the justification for why projects were classified as emergency repairs and we identified at least three projects that may have been inappropriately classified because they (1) may not have been necessary to restore essential traffic, or (2) were not undertaken during or immediately after the disaster. For example: The Lynchburg Ferry ($10.7 million project in Texas). This project rebuilt the ferry docks and landings, which are used to transport up to 10 passenger vehicles at a time across the Houston Ship Channel (1,100 feet). FHWA classified the project as an emergency repair to restore essential traffic but did not document the basis for this decision. When asked, FHWA officials from the Texas Division stated that engineers used their professional judgment to determine that the ferry route provided essential traffic. 
It is not clear, however, that the ferry was necessary to restore traffic as several alternative routes were available immediately following the disaster on existing highways that service the same locations and typically result in faster travel times than the ferry (see fig. 3). According to officials, engineers did not assess these alternative routes and there is no requirement for them to do so. This project was a significant commitment of emergency relief funds, representing approximately 11 percent of the emergency relief funding Texas received in the aftermath of Hurricane Harvey. Because the project was classified as an emergency repair, Texas was permitted to use a non-competitive bidding process to solicit and hire contractors to complete the work, instead of a competitive bidding process designed to achieve the best possible price and quality of work. The project was completed within the required 180 day time frame required to receive 100 percent federal reimbursement. FHWA s oversight of this project raises issues we cited in past work concerning its partnership with the states, namely putting the partner s interest above federal interests. Had FHWA classified this project as a permanent repair instead of an emergency repair, state and local agencies would have been responsible for paying approximately $2.1 million in matching funds on the $10.7 million project. Moreover, prior to the hurricane, the ferry docks and landings were in poor condition and local officials were in the initial stages of planning a project to replace it, including hiring a consultant to identify potential sources of federal funds. Because substantive planning and design work had not yet been completed, this project was eligible for emergency relief funds, which, according to officials, resulted in a new, state-of-the-art facility. Ciales Bridge ($4.9 million project in Puerto Rico). This project will install a temporary 80 meter long bridge over the Rio Grande de Manati River. FHWA classified this project an emergency repair to restore essential traffic and extended the project beyond 180 days but did not document the basis for either decision, as described below. FHWA officials said that they were not aware of another route to carry essential traffic at the time they approved the emergency repair. However, we identified an alternative route on a nearby roadway that uses another bridge less than a mile away. When we asked officials about this nearby route, they said that it is not sufficient for essential traffic, because it is too narrow to safely accommodate two-way traffic, has load limitations, and lacks lighting and pavement markings. Officials stated that the temporary bridge was necessary to quickly restore essential traffic until a new permanent bridge could be built. However, construction on the temporary bridge is not planned for completion until October 2019 more than 2 years after Hurricane Maria hit, raising questions about whether an emergency situation exists and the project is needed to quickly restore essential traffic. FHWA also continued to classify this project as an emergency repair even though the contract for the project was not signed within 180 days after the emergency occurred and FHWA did not document the rationale for doing so. By statute, emergency repair projects must be accomplished within 180 days to receive a 100 percent federal share, but may be extended taking into consideration any delay in the ability of the state to access damaged facilities. 
According to FHWA officials in Puerto Rico, while division offices should document decisions regarding emergency repair projects, the statutory provision that projects can only be extended beyond 180 days if the damaged facilities are inaccessible does not apply to Puerto Rico because it is funded at a 100 percent federal share, and therefore, such a determination and documentation was not necessary. There are, however, statutory and regulatory provisions other than the percentage of costs covered by the federal government that apply to emergency projects, including contracting and environmental requirements. Because this project was classified as an emergency repair, officials used a bidding technique called short-list bid that limited the number of firms which were permitted to submit proposals. This project also received a categorical exclusion for emergencies and was not subject to further environmental review under NEPA. However, although these projects went forward, FHWA s policy regarding time limits on the use of expedited contracting and environmental procedures is not clear. After we raised this and similar issues on other projects with FHWA, officials stated that the administration s position was that emergency repair projects using expedited contracting and environmental procedures are only permitted within the first 180 days of a disaster. According to these officials, as a matter of policy, 180 days after the disaster is a pencils down moment when projects should be subject to permanent repair requirements, including environmental and contracting requirements. Officials acknowledged this policy is not well documented, and stated they planned to address this gap in future updates to program guidance. These updates initially planned for 2019 have taken more time than anticipated and are currently planned for 2020, but officials were unable to provide a specific timeline. The classification of the project as an emergency repair raises questions about whether the project was an efficient use of federal funds. The $4.9 million temporary bridge involves considerable construction such as building footings with 5-million pounds of concrete and reinforced steel (see fig. 4) and, as stated previously, is not planned for completion until October 2019. FHWA officials stated this structure will be torn down within a couple of years and replaced by a $6.4 million permanent structure. PR-14 Bridge ($1.4 million project in Puerto Rico). This project will construct a temporary bridge across one of a few main routes on the south-central side of the island that is located in one of Puerto Rico s mountainous municipalities that is rural and relatively sparsely populated. FHWA officials classified the temporary bridge as an emergency repair to restore essential traffic, including the transportation of people and commercial goods but did not document the basis for this decision. According to officials, this bridge was necessary to restore essential traffic because damage caused by the hurricane led to a reduction in the vehicle load limit from 5 tons to 3 tons. However, the basis for this determination is not clear since the bridge was never closed to traffic and a reduced load limit from 5 to 3 tons would not significantly affect the type of vehicle traffic able to safely cross the bridge. 
For example, the pre-existing 5-ton limit would have already prevented most types of ambulances and commercial trucks from using the bridge, and the 3-ton limit still permits most passenger vehicles and some types of light-duty trucks. In addition, according to officials, one of the reasons for installing a temporary bridge instead of waiting on the planned installation of a permanent bridge was to quickly restore traffic. However, the temporary bridge will not be completed until February 2020 almost 2 and a half years after the hurricanes, which raises questions about whether or not the project was necessary to quickly restore essential traffic. As with the Ciales Bridge, FHWA did not document the basis for classifying this project as an emergency repair even though it was undertaken more than 180 days after the emergency occurred. The project was contracted using a pre-existing contract and not competitively bid and received a categorical exclusion from NEPA requirements. Similar to the Ciales Bridge, this $1.4 million temporary bridge will be torn down within a couple of years and replaced by a $4.2 million permanent structure. While officials did not document decisions to classify emergency relief projects as emergency repairs, FHWA did improve the documentation of emergency relief projects in some areas since the last time we examined the program in 2011. Specifically, we found more consistent documentation of the onsite damage inspections, cost estimates, and FHWA oversight of eligibility determinations. For example, 39 out of 39 emergency relief projects we reviewed included photographs of the damage and a repair cost estimate; whereas, only 24 out of 83 projects we examined in 2011 included this information. According to Federal Internal Control Standards, to achieve objectives and identify and respond to risks, management should clearly document all transactions and significant events, and define objectives clearly, including specific terms so that they can easily be understood. FHWA did not clearly document transactions and significant events because: (1) in the case of classifying projects as emergency repairs, there is no requirement to do so, and (2) in the case of extending emergency repair projects in Puerto Rico, existing requirements did not apply. FHWA officials stated that these decisions were made as part of an ongoing dialogue between FHWA, the states, and Puerto Rico that is done through emails and in-person and telephone meetings. However, by not documenting emergency repair decisions, such as whether alternative strategies or repairs were considered and the rationale for classifying projects as emergency repairs after the emergency has passed, FHWA lacks definitive explanations for its decisions. This, in turn, raises questions as to whether those decisions were appropriate. When questioned about individual projects, including the examples in Texas and Puerto Rico previously discussed, officials often could not provide concrete rationales for these decisions. In addition, because guidance in the Emergency Relief Manual is intentionally flexible and written to apply to a wide range of circumstances, key terms are not clearly defined and easily understood and applied. This is particularly true for the term essential traffic, which is being broadly applied to provide support for repairs necessary to restore any type of traffic without fully considering potential alternatives. 
While FHWA's manual generally describes projects to restore essential traffic (e.g., detours that relieve excess traffic directly attributable to the disaster), it does not discuss how to determine whether a project will relieve excess traffic or require officials to evaluate alternative routes. Moreover, FHWA's guidance and policy are not clear on the time frames for when emergency repair projects must adhere to contracting and environmental requirements. This lack of clearly defined and easily understood terms in emergency relief guidance could result in FHWA inappropriately classifying projects as emergency repairs, which affects (1) the federal fiscal exposure in a disaster; (2) the level of FHWA oversight, because projects may begin without prior authorization; (3) the extent to which projects must be competitively bid; and (4) potentially the level of environmental review accorded a project. Unclear guidance also increases the chances that the program could be applied inconsistently, potentially giving access to emergency relief funds to one state and not another. We identified several instances in which officials in one division office made emergency repair decisions that differed from those of another division office. For example, FHWA officials in Florida did not include highway finishes, such as pavement markings, as part of emergency repair projects, while officials in the Puerto Rico Division did. FHWA officials in Puerto Rico also reported that FHWA officials from different division offices who came to assist in the aftermath of the 2017 hurricanes had substantively different interpretations of emergency relief guidance, including how to define emergency repairs and what was and was not essential traffic.

<4. Conclusions>

For many years, FHWA's Emergency Relief Program has provided crucial funding to states and territories to rebuild transportation infrastructure, including in the aftermath of hurricanes Harvey, Irma, and Maria. The consecutive timing and scale of these disasters overwhelmed local, state, and territorial governments, and Puerto Rico was hit particularly hard. Given the level of devastation, it was imperative for the federal response to be quick and effective, and that essential services be quickly restored to help people rebuild and recover. However, it is not clear that emergency relief funds are always being used for the purposes intended or put to the highest use. In the absence of well-documented rationales for classifying projects, more clearly defined terms and circumstances for making these decisions, and time frames for accomplishing them, FHWA may have inappropriately classified projects as emergency repairs. While these projects represent a small percentage of the projects undertaken in response to the 2017 hurricanes, FHWA's actions may have resulted in the federal government forgoing millions of dollars in state contributions, thus increasing the federal fiscal exposure in disasters. Moreover, permitting projects to proceed under expedited contracting requirements many months after the disaster deprived the federal government of a valuable tool intended to ensure the best price for the services it receives. Finally, in an environment where needs outweigh funding, multi-million dollar bridge projects are being constructed that will be torn down in a couple of years to make way for other multi-million dollar bridge projects. FHWA's decision-making invites questions we have raised before about the partnership relationship between FHWA and the states.
In high stress and politically sensitive situations like natural disasters in particular, the relationship could lead FHWA to put states interests before federal ones or give the appearance of having done so. If FHWA s decisions are, in fact, appropriate, documentation and clearer guidance could reduce unnecessary skepticism, enhance transparency, and result in more effective use of limited resources. <5. Recommendations for Executive Action> We are making the following two recommendations to FHWA: The Administrator of FHWA should require FHWA division offices to document the rationale for classifying projects as emergency repairs, such as a description of why an emergency repair is necessary and which alternative strategies or repairs were considered, and to more clearly define the circumstances under which projects are classified as emergency repairs, including what constitutes restoration of essential traffic. (Recommendation 1) The Administrator of FHWA should identify a specific timeline for clarifying the policy on the acceptable time frames for accomplishing emergency repair projects undertaken under expedited contracting and environmental requirements, and require FHWA division offices to document the rationale for decisions to extend projects beyond these time frames. (Recommendation 2) <6. Agency Comments> We provided a draft of this report to DOT for review and comment. In comments, reproduced in appendix II, DOT concurred with our recommendations. DOT also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretary of the Department of Transportation. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Results of GAO s File Review of the Emergency Relief Project Documentation Available in Three FHWA Division Offices In 2011, we reported on how Federal Highway Administration (FHWA) officials applied Emergency Relief Program guidance to a selected group of projects that received funding. In that review, we selected a nongeneralizable sample of eligible Emergency Relief Program projects in three states New York, Texas, and Washington that matched criteria such as receiving more than $1 million in obligated federal funds and approval by FHWA between fiscal years 2007 through 2010. We reviewed those projects files to determine whether they included required or recommended documentation cited in federal statute, regulations, and FHWA program guidance. In our 2011 report, we found many instances of missing or incomplete documentation, such as required repair cost estimates, because FHWA lacked clear requirements for how states submitted and FHWA approved key project documentation, leading to FHWA division offices applying eligibility criteria differently. We recommended that FHWA standardize their procedures for reviewing emergency relief documentation and making eligibility decisions, including retaining damage inspection reports with detailed repair cost estimates. 
In response, FHWA issued an Order in February 2016 that included procedures to ensure that FHWA makes eligibility determinations consistently and transparently that we determined addressed our recommendation. To evaluate how FHWA officials applied Emergency Relief Program guidance to selected projects in recent emergency events and whether documentation had improved since our 2011 report, we conducted a file review of 39 nongeneralizable emergency relief projects 25 of which included emergency repairs in Texas, Florida, and Puerto Rico. These projects, which FHWA determined were eligible for Emergency Relief Program funding, were necessary to repair damage caused by three 2017 hurricanes: Harvey, Irma, and Maria. The purpose of this review was to determine whether each project file included information showing the project met eligibility requirements or information required or recommended in federal statute, regulations, and FHWA program guidance. To select these 39 project files (13 projects each from Texas, Florida, and Puerto Rico), we used the following criteria: We reviewed those with the highest estimated cost to ensure the inclusion of projects likely to receive the most federal funds. The 39 project files we selected represented over 38 percent of Emergency Relief funds allocated to those three states for the 2017 hurricanes, as of February 2019. We selected a mix of road and bridge projects to ensure we reviewed a selection of projects that could include different types or amounts of documentation. States typically have more data and oversight processes in place for bridges than other roads, as most bridges are required to be inspected at least every 2 years. We selected a mix of a state and local agency projects to ensure we reviewed a selection of projects that may have been prepared with different levels of detail. Though state agencies ultimately submit all Emergency Relief Program requests to FHWA, local agencies prepare some of the paperwork for projects within their jurisdictions and could provide a different level of detail in their project files than state agencies. For each of the 39 projects in our review, the FHWA division offices in Texas, Florida, and Puerto Rico provided associated project files. Through discussions with state officials, we determined that FHWA s Mobile Solution for Assessment and Reporting (MSAR) was sufficiently reliable for our purposes of obtaining documentation for file reviews for projects located in Texas. For Puerto Rico, because state officials acknowledged some files were not included in MSAR, we asked for state officials to directly send us additional documentation as needed. As Florida does not use MSAR to record project information or documentation, we asked for state officials to send us relevant project documentation directly. Project files from these locations included information on project type and estimated costs as well as other relevant documents, such as engineering reports, bridge inspection reports, or photographs of the damage. Two analysts reviewed those files for information that is required or recommended by statute or FHWA guidance. This information included much of the same information we had previously evaluated in our 2011 review. To conduct the review, one analyst reviewed the documentation provided by FHWA s division offices and completed a data collection instrument, then a second analyst reviewed the same documentation to verify the results of that review. 
Afterwards, the two analysts discussed and resolved any discrepancies and questions. The analysts then analyzed and summarized the results for the 39 eligible projects of this review to determine whether each file included documentation for damage and cost information, emergency repair requirements, and eligibility determination, as detailed below: Damage and cost information: We reviewed whether the project file included a complete detailed damage inspection report (DDIR), which documents an on-site inspection of the damage. FHWA s Emergency Relief Manual states that a complete DDIR should include a number of details including: the type of federal-aid highway, such as an interstate, freeway, or expressway; the average daily traffic or the typical traffic volume in a location over a 24-hour period; the nature or type of damage, such as a bridge collapse or landslide, and extent or amount of damage, such as fully or partially collapsed; a field site sketch or drawing that shows details of the damage site such as the width of the road or bridge; a total estimated cost for repair; and documentation related to an environmental review recommendation, which would include the potential effects of repairs on nearby species or waterways. For the 39 projects we included in our file review, we found that DDIR documentation generally improved compared to the 2011 review. For instance, each of the 39 projects included a DDIR, photographs of the damage, and the repair s cost estimate; only 24 of the 83 eligible projects we reviewed in 2011 included each of those pieces of information. However, we found other recommended DDIR documentation to be lacking. For example, of the 39 projects in our review, 36 did not include Average Daily Traffic and 22 did not include the type of federal-aid highway. Figure 3 represents the results of our review of damage and cost information. Emergency repair requirements: We reviewed whether eligible emergency repair projects included a documented rationale or justification for classifying the project as an emergency repair instead of a permanent repair. As discussed in the body of this report, by statute, emergency repairs are repairs undertaken during or immediately after a disaster specifically to restore essential traffic, to minimize the extent of damage, or to protect the remaining facilities. As discussed in the body of this report, classifying a project as an emergency repair affects the percentage of costs covered by federal funds, level of FHWA oversight, and the extent to which environmental and contracting requirements apply. We found that of the 25 project files that included an emergency repair (out of the 39 in our review), 22 did not include a documented rationale or justification for classifying the project as an emergency repair instead of a permanent repair. Eligibility determination: We reviewed whether a representative of FHWA signed and recommended eligibility for Emergency Relief funding and whether the applicant or state representative signed and agreed with FHWA s recommendation. The Emergency Relief Manual states that documentation should include an eligibility recommendation by an FHWA representative and acknowledgement of that recommendation by the applicant. For the 39 projects we included in our file review, we found that documentations of FHWA and applicant signatures generally improved compared to the 2011 review. 
In our current review, we found that the FHWA and applicant or state representatives signed each of the 39 eligible project files; in our 2011 review, only 36 of the 83 eligible projects included a signature from an FHWA representative and 47 of the 83 eligible projects included a signature from the applicant or state representative.

Appendix II: Comments from the Department of Transportation

Appendix III: GAO Contact and Staff Acknowledgments

<7. GAO Contact> <8. Staff Acknowledgments> In addition to the contact named above, Steve Cohen (Assistant Director), Matthew Cook (Analyst in Charge), Pedro Almoguera, Aditi Archer, Danielle Ellingston, Lauren Friedman, Kathryn Godfrey, Hannah Laufe, Leslie Locke, Cheryl Peterson, Malika Rice, Amy Rosewarne, and Elizabeth Wood made key contributions to this report.

Why GAO Did This Study
In 2017, hurricanes in Texas, Florida, and Puerto Rico caused an estimated $1 billion in damage to federal-aid highways and bridges. FHWA's Emergency Relief Program provides funding for states to repair or reconstruct federal-aid highways damaged or destroyed by natural disasters, including funding for emergency and permanent repairs. As of September 2019, FHWA has allocated $634 million in federal funds to the two states and Puerto Rico. By statute, emergency repairs are undertaken during or immediately following a disaster to quickly restore essential traffic and minimize further damage. These repairs receive 100 percent federal reimbursement if accomplished within 180 days and may proceed under expedited contracting and environmental procedures.
GAO was asked to evaluate the federal response to the 2017 disasters. This report assesses how FHWA applied program guidance to classify selected emergency relief projects, among other objectives. GAO visited 33 out of approximately 2,500 projects in Texas, Florida, and Puerto Rico; analyzed 25 emergency repair project files; and interviewed FHWA, state, and local government officials.
What GAO Found
GAO found that the Federal Highway Administration (FHWA) did not document the bases for decisions to classify projects as emergency repairs in 22 of the 25 project files reviewed. Without such documentation, it is not possible to definitively determine the justification for these decisions; GAO identified at least three projects that may have been inappropriately classified. For example, FHWA classified a $10.7 million ferry project in Lynchburg, Texas as an emergency repair to restore essential traffic. Several highways, however, were available immediately following the disaster that service the same locations and result in faster travel times than the ferry. FHWA guidance does not require officials to document decisions to classify projects as emergency repairs or clearly define what constitutes restoration of essential traffic. Designating projects as emergency repairs can increase the federal fiscal exposure in disasters. Had FHWA classified the ferry project as a permanent repair—instead of an emergency repair—the state would have been responsible for paying approximately $2.1 million in matching funds.
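For illustration only, the short sketch below (in Python; it is not drawn from FHWA's or GAO's methodology) shows how the repair classification drives the non-federal share described above, assuming an 80 percent federal share for permanent repairs (the report states a range of 80 to 90 percent). Applied to the $10.7 million ferry project, it reproduces the roughly $2.1 million state match cited above.

def non_federal_share(project_cost, classification, permanent_federal_share=0.80):
    # Emergency repairs accomplished within 180 days are reimbursed at 100 percent,
    # so the state/local share is zero; permanent repairs carry an 80 to 90 percent
    # federal share (80 percent assumed here for illustration).
    if classification == "emergency":
        return 0.0
    if classification == "permanent":
        return project_cost * (1 - permanent_federal_share)
    raise ValueError("classification must be 'emergency' or 'permanent'")

ferry_cost = 10_700_000  # Lynchburg Ferry project cost cited in this report
print(non_federal_share(ferry_cost, "emergency"))   # prints 0.0
print(non_federal_share(ferry_cost, "permanent"))   # prints approximately 2.14 million, the roughly $2.1 million state match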
GAO also identified two temporary bridge projects in Puerto Rico classified as emergency repairs even though (1) work did not start within 180 days of a disaster, as generally required; (2) the bridges are not to be completed until late 2019 and early 2020; and (3) both are to be replaced by permanent bridges within a couple of years. Out of approximately 1,200 eligible projects in Puerto Rico, FHWA officials reported undertaking 34, including the two bridges GAO identified, after 180 days. Officials also stated they did not document the basis for continuing to classify these projects as emergency repairs. FHWA officials in Puerto Rico stated they were not required to complete repairs within the 180-day limit established in law because Congress exempted Puerto Rico from federal matching share requirements. Further, emergency repair projects are allowed to use expedited contracting and environmental procedures. After GAO raised this issue with FHWA, the agency stated that emergency repair projects are only permitted to use these expedited procedures within the first 180 days. While officials stated they plan to update guidance to include this policy, there is no specific timeline for doing so.
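As a rough check on the time frames described above, the following sketch (in Python) computes how far beyond the 180-day window the two temporary bridges fall. The landfall date and the specific completion days within the months cited are assumptions added for illustration, not figures from the report.

from datetime import date

# Hurricane Maria made landfall in Puerto Rico on September 20, 2017 (a date not
# stated in this report); the exact days within the planned completion months are
# assumed for illustration.
landfall = date(2017, 9, 20)
deadline = 180  # days generally allowed for emergency repairs

for name, planned_completion in [("Ciales Bridge temporary span", date(2019, 10, 1)),
                                 ("PR-14 temporary bridge", date(2020, 2, 1))]:
    elapsed = (planned_completion - landfall).days
    print(f"{name}: {elapsed} days after landfall, "
          f"{elapsed - deadline} days beyond the 180-day window")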
What GAO Recommends
FHWA should (1) document decisions to classify projects as emergency repairs and more clearly define what constitutes restoration of essential traffic, and (2) identify a specific timeline for clarifying the policy on when expedited contracting and environmental procedures are permitted. DOT concurred with GAO's recommendations and provided technical comments that GAO incorporated as appropriate. |
gao_GAO-20-98 | gao_GAO-20-98_0

<1. Background>

<1.1. Factors Affecting Water Scarcity in the United States>

Water scarcity occurs when the demand for water in a given area approaches or exceeds available water supplies. In April 2016, we reported that drinkable water has traditionally been assumed to be reliable, cheap, and abundant. However, with parts of the United States, especially the Southwest, facing recurring drought and persistent water scarcity, that view has been challenged. Water is also not always available when and where it is needed, in the amount or quality desired, or in a cost-effective manner. In times of water scarcity, there are often competing demands for water, such as irrigation, power production, municipal water supplies, and supporting aquatic life.

As we reported in May 2014, state water managers expect freshwater shortages to continue into the future. According to the United States Global Change Research Program's Fourth National Climate Assessment, significant changes in water availability are evident across the country and are expected to persist in the future due to changes in precipitation and rising temperatures. For example, droughts occurring from deficits in precipitation, soil moisture, and snow runoff will likely occur more frequently. Further, since a warmer atmosphere holds more water, when rain does fall, high-intensity events can occur more frequently. These sudden downpours will increase the mobility of pollutants, such as sediments and nutrients, and of algae, which can reduce the quality and quantity of available drinking water. The assessment noted that in some regions of the United States, the supplies of water are already stressed by increasing consumption, and continued warming will add to this stress, adversely affecting the availability of water in parts of the United States and increasing the risk of water scarcity.

<1.2. DOD's Reliance on Water for Mission-Critical and Support Activities>

The military departments rely on water at installations to conduct and support their missions. For example, according to military department officials, water is necessary to operate missions such as rocket launches for cooling and for noise and fire suppression (see sidebar), to maintain temperatures to properly store equipment such as parachutes, and for firefighting training (see fig. 1).

Sidebar: Rocket Launch at Vandenberg Air Force Base, California. According to Vandenberg Air Force Base officials, water is used in multiple ways during rocket launch activities. For example, water is necessary for noise and vibration suppression, heat reduction, and fire suppression as needed. The officials stated that between 60,000 and 100,000 gallons of water are needed for each launch. In 2018, there were nine launches. With an anticipated increase in launches in the future, they expect the demand for water to increase as well.

OSD officially reorganized its acquisition organization on January 31, 2018, in response to Section 901 of the National Defense Authorization Act for Fiscal Year 2017. Under the reorganization, responsibilities of the former Under Secretary of Defense for Acquisition, Technology and Logistics were divided between two new offices: the Under Secretary of Defense for Research and Engineering and the Under Secretary of Defense for Acquisition and Sustainment.
According to DOD, responsibilities for energy, installations, and environment were transferred from the Office of the Undersecretary of Defense for Acquisition, Technology and Logistics to the newly created Office of the Under Secretary of Defense for Acquisition and Sustainment in 2018. According to an OSD official, within this office, responsibilities for water management at military installations are delegated to two deputy assistant secretaries under the Office of the Assistant Secretary of Defense for Sustainment the Office of the Deputy Assistant Secretary of Defense for Environment, who is responsible for water resources management in general, and the Office of the Deputy Assistant Secretary of Defense for Energy, who is responsible for overseeing planning for water at the installation level. Each of the military departments has designated an office or multiple offices with responsibilities for water policy and implementing programs to support that policy at installations. Specifically: Air Force: The Assistant Secretary of the Air Force for Installations, Environment, and Energy is responsible for procedures to manage the Air Force s water consumption, throughput, and requirements, in alignment with policies and strategic direction. Within this office, the Deputy Assistant Secretary of the Air Force for Environment, Safety and Infrastructure provides strategic direction, policy, and oversight for water management. Navy: The Office of the Assistant Secretary of the Navy for Energy, Installations, and Environment is responsible for establishing policy and overseeing water resource management. This office, along with the Office of the Chief of Naval Operations Shore Readiness Division, and the Commander, Navy Installations Command, makes policy, guidance, and many major investment decisions related to installations water departments. Within the Department of the Navy, the Marine Corps also has its own offices responsible for water policy. Specifically, the Deputy Commandant for Installations and Logistics is responsible for establishing energy and water management policy for Marine Corps installations in accordance with the Commandant s direction. The Commander, Marine Corps Installations Command, is responsible for water management, such as overseeing program planning and execution, and serving as the Marine Corps Installations Energy Program Manager. Army: The Assistant Secretary of the Army for Installations, Energy, and Environment establishes policy, provides strategic direction, and supervises all matters pertaining to energy and environmental programs, among other responsibilities. Within this office, the Deputy Assistant Secretary of the Army for Energy and Sustainability provides strategic leadership, policy guidance, program oversight, and outreach for energy, water, and sustainability throughout the Army enterprise. <1.3. OSD s and the Military Departments Six Assessments Identifying Installations at Risk of Water Scarcity> OSD-level entities and the three military departments conducted six assessments between April 2017 and January 2019 that, despite having varied focus areas, all included at least one component focused on vulnerability to water scarcity. The Office of the Under Secretary of Defense for Acquisition and Sustainment conducted the most recently reported (January 2019) OSD-level assessment, in response to a congressional reporting requirement. 
OSD-level entities in place before OSD s 2018 reorganization conducted the other two assessments, reporting their results in January 2018 and July 2018 also responses to congressional reporting requirements. The Air Force s, Navy s, and Army s three assessments span different time frames, encompass different scopes, and respond to different internal reporting requirements. The Air Force reported its results in November 2018; the Navy s assessment conducted by CNA reported its results in December 2017; and the Army reported in April 2017. Table 1 provides a summary of these assessments, including responsible offices and focus areas. <2. DOD Does Not Have Assurance That It Is Using Reliable Information to Identify Installations at Risk of Water Scarcity> We found that DOD does not have assurance that it is using accurate and reliable information regarding which installations are at risk for water scarcity. When we compared the results of the OSD assessments and the military department assessments, we found that they varied markedly, raising questions about their quality and about which source of information DOD is using to determine which installations are vulnerable to water scarcity. An OSD official told us that the OSD assessments constitute the best DOD information available on installations at risk of water scarcity, but we found that the assessments do not align with leading practices for identifying and analyzing water scarcity practices that contribute to a reliable assessment of water availability. In contrast, we found that the military department assessments do align with these leading practices, but OSD officials disagree as to whether these assessments can and should be used to identify installations at risk of water scarcity across the defense enterprise. As a result, DOD cannot be assured that it is using reliable information for water resource management. <2.1. OSD and Military Department Assessments Differ on Which Installations Are at Risk of Water Scarcity> The three OSD assessments and the three military department assessments varied markedly in their results regarding which installations are vulnerable to water scarcity. Collectively, the six assessments identified a total of 102 individual installations at risk of water scarcity, as shown in figure 2. Only one installation, Vandenberg Air Force Base in California, was identified in all three OSD assessments and the applicable military department (Air Force) assessment. Of the 102 individual installations identified in the six assessments as vulnerable to water scarcity, 42 (41 percent) were included in multiple assessments. OSD identified more installations for each military department as at risk than did the military departments themselves. Specifically, across its three assessments, OSD identified 95 installations as being at risk 48 Air Force installations, 29 Navy or Marine Corps installations, and 18 Army installations. The military departments collectively identified a total of 27 installations as being at risk 14 Air Force installations, nine Navy or Marine Corps installations, and four Army installations. Below is a more detailed description of the installations identified as being at risk of water scarcity in the six assessments, by the military departments. Air Force: Of the 48 Air Force installations identified across the OSD assessments, only three Kirtland Air Force Base, New Mexico; McConnell Air Force Base, Kansas; and Vandenberg Air Force Base, California appeared in all of them. 
In addition, as noted above, only one Air Force installation was identified both in all three OSD assessments and the Air Force assessment: Vandenberg Air Force Base, California. Of the 14 Air Force installations identified within the Air Force assessment, 13 appeared in at least one of the OSD assessments.

Navy: Of the 29 Navy or Marine Corps installations identified across the OSD assessments, three installations (Marine Corps Air Station Yuma, Arizona; Naval Base Coronado, California; and Naval Weapons Station Seal Beach, California) appeared in at least two of the OSD assessments. Of the nine Navy installations, including the Marine Corps installations, identified within the Navy assessment, four appeared in at least one of the OSD assessments.

Army: Of the 18 total Army installations identified across the OSD assessments, only one (White Sands Missile Range, New Mexico) appeared in all three. However, the Army's assessment did not identify that installation as being at risk. In addition, one of the OSD assessments (the climate vulnerability survey) identified more than three times as many Army installations as being at risk as the Army's own assessment. Of the four Army installations identified within the Army assessment, three appeared in at least one of the OSD assessments.

Given the different scopes of these assessments, it is understandable that they would produce different results. However, the substantial differences in results raise questions about whether the assessments that produced them were methodologically sound and about which source of information DOD is using to identify installations at risk of water scarcity, information needed for water resource management.

<2.2. OSD's Assessments Do Not Align with Leading Practices>

Although an OSD official told us that the OSD assessments constitute the best DOD information available on installations at risk of water scarcity, we found that they did not incorporate four of five leading practices for identifying and analyzing water scarcity. Specifically, our analysis shows that, in conducting their assessments, OSD officials did not always (1) identify current water availability, (2) identify future water availability, (3) take into account all sources of water, or (4) precisely identify locations, as shown in table 2. Below is a detailed comparison of each OSD assessment against the five leading practices.

OSD's climate vulnerability survey. Of the three OSD assessments, the climate vulnerability survey reflects the most (3 out of 5) leading practices. Specifically, we found that the methodology used in the climate vulnerability survey:
- followed the leading practice for identifying current water availability. The survey collected and analyzed drought-related information in a timely and systematic manner by having a question about current drought conditions on its web-based self-reporting survey.
- did not follow the leading practice for identifying future water availability. The survey focused only on current and past water availability.
- did not follow the leading practice for taking into account all sources of water. The survey did not account for all sources of water (e.g., precipitation, soil moisture, streamflow, groundwater levels, reservoir and lake levels, and snowpack) because it did not include a question about the sources of the water.
- followed the leading practice for precisely identifying locations. The survey went directly to all DOD installations and inquired about drought conditions at sites owned or managed by the installation, in addition to the installation itself. This enabled DOD to know the precise location of installations and their associated sites relative to identified drought-prone areas of the state or region and vulnerable economic sectors, individuals, or environments.
- followed the leading practice for comprehensively including all locations. The survey was completed for all primary installations and associated sites worldwide.

OSD's energy report and climate change report. OSD used the U.S. Drought Monitor map to conduct its assessments for both OSD's energy report and climate change report. According to an OSD official, use of the U.S. Drought Monitor map constitutes DOD's best approach for identifying military installations vulnerable to water scarcity. However, we determined that, in doing so, OSD did not follow four of the five leading practices. Specifically, using the U.S. Drought Monitor map to produce the energy report and climate change report, OSD:
- did not follow the leading practice for identifying current water availability and did not follow the leading practice for identifying future water availability. According to the cofounder of the U.S. Drought Monitor, the conditions reflected on the U.S. Drought Monitor maps are retrospective weekly assessments of drought conditions based on how much, if any, precipitation occurred from 1 week to several years before the day the map was issued. This is problematic because drought conditions can change from month to month (see fig. 3), and the months chosen may not be representative of the annual drought condition. An OSD official stated that OSD used data from the U.S. Drought Monitor map as of April 2018 for the energy report and only the summer months of 2018 for the climate change report, which is unlikely to reflect current water availability for an entire year. According to the cofounder of the U.S. Drought Monitor, the U.S. Drought Monitor maps also do not show projections of future water scarcity, which would be necessary to fully assess an installation's vulnerability to water scarcity.
- did not follow the leading practice for taking into account all sources of water. According to the cofounder of the U.S. Drought Monitor, U.S. Drought Monitor maps do not take into account all sources of water that might be available to a specific installation. The U.S. Drought Monitor maps do not fully assess the availability of water from groundwater sources (e.g., aquifers) or nonlocal sources (e.g., reservoir water delivered by canals).
- did not follow the leading practice for precisely identifying locations. According to the cofounder of the U.S. Drought Monitor, U.S. Drought Monitor maps only display regional drought conditions, not drought information applicable to precise locations. For this reason, the Drought Monitor Portal warns that the large-scale maps generated should not supersede locally provided information about water availability conditions. Therefore, OSD may have inaccurately identified installations as being at risk of water scarcity.
- followed the leading practice for comprehensively including all locations. Since the energy report used a map of all installations within the contiguous U.S. to conduct its analysis, and the climate change report included all 79 mission-assurance locations within its scope, these assessments constituted a comprehensive approach.
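The comparison above can be summarized in a small illustrative tabulation (in Python; the structure and naming below are ours, not DOD's or GAO's, and the values simply restate the findings described in the text):

LEADING_PRACTICES = [
    "identify current water availability",
    "identify future water availability",
    "take into account all sources of water",
    "precisely identify locations",
    "comprehensively include all locations",
]

# True = followed the practice, False = did not, per the findings described above.
osd_assessments = {
    "climate vulnerability survey": [True, False, False, True, True],
    "energy report (U.S. Drought Monitor)": [False, False, False, False, True],
    "climate change report (U.S. Drought Monitor)": [False, False, False, False, True],
}

for name, results in osd_assessments.items():
    print(f"{name}: followed {sum(results)} of {len(LEADING_PRACTICES)} leading practices")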
The information we collected from installations identified by OSD as being at risk of water scarcity also indicates weaknesses in OSD s approach. Of the 17 installations that were identified in OSD s assessments as being at risk of water scarcity and that we contacted or visited, officials from 12 stated that they did not anticipate water scarcity affecting their future mission-related activities, disagreeing with the conclusions of OSD s assessments. For example: Officials at Naval Weapons Station Seal Beach, California, told us the installation does not expect water scarcity to affect its mission-related activities because none of its water-using facilities (i.e., administrative facilities) on the installation are particularly water-intensive. They stated the installation s water is provided by the City of Seal Beach, which in turn is supplied by a larger water company. According to the officials, there are proposed plans to construct a nearby desalination plant, which would prevent water scarcity issues. Officials at Moody Air Force Base, Georgia, stated that the installation is not vulnerable to water scarcity now or over the next 20 years because the base has its own water-treatment plant with wells that draw water from the Floridan aquifer, which spans an area of 100,000 square miles in the southeastern United States, underlying the entire state of Florida and parts of Alabama, Georgia, Mississippi, and South Carolina. According to the officials, use of the aquifer is unconstrained; in addition, Moody Air Force Base holds water permits that create a 64 percent surplus capacity of daily water availability to support current or new mission growth. Officials at Fort Bragg, North Carolina, stated that the installation is in the Southeast region of the United States, which is not known as a region with water scarcity issues. They stated that the region s primary threats, from a water scarcity perspective, are pollution and population growth. In addition, the officials said that the two public utilities from which it purchases its water are not expected to hit a critical demand for water until the year 2060 or later. When we informed an OSD official of the results of our analysis, the official stated that OSD did not have any concerns about the information it provided to the Congress in its three assessments. Specifically, the official said the climate vulnerability survey might have had different responses depending on the perspective of the responder, but it provided useful qualitative data. The official also maintained that the U.S. Drought Monitor was the best source of information, and is a resource produced by the federal government. However, as outlined above, while the drought monitor is a useful source of information, it is not intended to be used in the manner in which DOD has employed it. <2.3. Military Department Assessments Align with Leading Practices> Unlike the OSD level assessments, we found that the assessments produced by the military departments are aligned with all five leading practices (see table 3). Below are detailed examples of how the military department assessments were compared against the five leading practices. Specifically, we found that the military department assessments: followed the leading practice for identifying current water availability. 
For example, the Navy contacted installation staff directly and analyzed water use and billing data directly from departmental water-system databases to assess the extent to which the Navy was facing water-related challenges (which included water availability and quality).

The assessments followed the leading practice for identifying future water availability. For example, the Air Force assessment considered future water availability by accounting for long-term effects from climate change, future water restrictions, and changes in water access rights. In addition, the Navy assessment considered future water availability by accounting for sea-level rise, water rights, diminishing groundwater supplies, and emerging water pollutants.

The assessments followed the leading practice for taking into account all sources of water. For example, the Army assessment considered alternate water sources by requiring installations to identify and enumerate their potable sources of water as a measure of redundancy.

The assessments followed the leading practice for precisely identifying locations. For example, the Navy assessment used geospatial data on hazards to water as well as data published by Naval Facilities Command. This enabled the Navy to precisely identify installation and site locations for water and sewer infrastructure, including pumps, storage, sewer lines, and water-treatment plants relative to those hazards.

The assessments followed the leading practice for comprehensively including all locations. According to service officials and an agency document, the scope of each military department assessment included all respective installations within each military department.

Installations we contacted that were identified in the military department assessments as being at risk of water scarcity generally agreed with the assessments. Of the seven installations that were identified in military department assessments as being at risk of water scarcity and that we contacted or visited, officials from six (86 percent) agreed that they anticipated water scarcity may affect their future mission activities or otherwise noted risks of water scarcity that could affect their installations. For example:

Officials at Mountain Home Air Force Base, Idaho, stated that water use on the installation was significantly curtailed in 2017 and 2018 (and was anticipated to be curtailed in 2019) due to the inability to produce sufficient quantities of water to meet demand.

Officials from F. E. Warren Air Force Base, Wyoming, stated that drought is a continual threat to the area. The officials stated that if the area does not receive adequate precipitation or snowmelt, the city may place a water restriction for the installation.

Officials from Marine Corps Air Station Yuma, Arizona, stated that future mission activities could be impacted by water scarcity, especially as the population of the installation continues to grow with the arrival of additional air squadrons. <2.4. OSD Officials Disagree on What Information They Should Use for Identifying Installations at Risk of Water Scarcity> As noted earlier in this report, the Office of the Assistant Secretary of Defense for Sustainment is responsible for water management at all military installations. Individuals from this office with whom we spoke agreed that having accurate information about water scarcity data across DOD is important to help fulfill these responsibilities and inform senior decision-making, including budget development, resourcing, and risk management.
However, these officials disagree about whether it would be feasible to rely on the military department assessments, which we found align with leading practices, to identify installations at risk of water scarcity across DOD. According to one OSD official, the military department assessments should not be used to consider water scarcity across DOD as a whole because their methodologies differed and therefore are not comparable to one another. The assessments do not reflect a coordinated, department- wide assessment. For example, the Air Force assessment reported vulnerability to water scarcity as four distinct qualitative ratings, each combining likelihood and severity, without any numerical data. The Army s assessment, in contrast, reported vulnerability using 34 distinct numerical scores for each installation, averaged into four distinct categories. While both assessments were aligned with leading practices, this OSD official believes that the differences in their specific approaches and subsequent results make it difficult to compare vulnerability to water scarcity across military departments. According to another OSD official, it would be appropriate for DOD to rely on the results of the military department assessments because responsibilities for prioritizing projects and for allocating funds to those projects lie with the military departments. As such, there is not a concern that the departments assessed vulnerability differently. According to this official, were the department to issue a new DOD-wide report on water scarcity, it would simply be a rollup of the military department assessments, with an update of current status. According to Standards for Internal Control in the Federal Government, management should use quality information information that is, among other things, appropriate, current, complete, and accurate to achieve the entity s objectives. In identifying information requirements, management should consider the expectations of both internal and external users, as well as the entity s objectives and related risks. Because the OSD-level assessments do not align with leading practices for identifying and analyzing water availability, OSD lacks assurance that it has quality information and risks potentially using or providing to Congress unreliable information. Further, while the military department assessments are aligned with leading practices, the Office of the Assistant Secretary of Defense for Sustainment has not determined whether they are sufficient for meeting its policy-making and oversight objectives and whether the risk presented by combining results from assessments that used varying methodologies is an acceptable level of risk. Until this question is resolved, the department will not have assurance that it is using accurate and reliable information to assess water scarcity. <3. Conclusions> DOD s installations rely on billions of gallons of water to operate and conduct their missions, but critical installations are at risk of water scarcity, and the risks are only projected to increase. The substantial differences in results of DOD s assessments to identify installations at risk of water scarcity raise questions about whether the assessments were methodologically sound and about which source of information OSD is using for water resource management. 
OSD s approach to assessing installations at risk of water scarcity did not consistently apply leading practices for identifying current and future water availability, taking into account all sources of water, and precisely identifying locations yet an OSD official told us that the OSD assessments constitute the best DOD information available on installations at risk of water scarcity. In contrast, the military departments did apply all leading practices in their assessments on installations at risk of water scarcity; however, OSD officials were not in agreement as to whether these assessments could be used at a departmental level. By assessing and documenting whether OSD should conduct a coordinated, department-wide assessment aligned with leading practices or should rely on the military department assessments for identifying and analyzing water availability, OSD would have greater assurance that it has the information that it needs to manage water scarcity across the department and that Congress needs to better understand the threat of water scarcity to DOD s mission. <4. Recommendation for Executive Action> The Secretary of Defense should ensure that the Assistant Secretary of Defense for Sustainment (1) assesses whether DOD should conduct a coordinated, department-wide assessment aligned with leading practices for identifying and analyzing water availability or rely on military department assessments to determine which DOD installations are at risk of water scarcity and (2) documents this decision. (Recommendation 1) <5. Agency Comments> We provided a draft of this report for review and comment to DOD. In written comments, DOD concurred with our recommendation. DOD comments are reprinted in their entirety in appendix III. DOD also provided technical comments, which we incorporated as appropriated. We are sending copies of this report to the appropriate congressional addressees; the Secretary of Defense; and the Secretaries of the Air Force, the Navy, and the Army. In addition, this report will be available at no charge on the GAO website at www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2775 or fielde1@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Objective, Scope, and Methodology In this report, we evaluate the extent to which the Department of Defense (DOD) has assurance that it is using reliable information to identify installations at risk of water scarcity. We reviewed statutes and congressional committee reports that directed DOD to conduct assessments for climate-related purposes, including for identifying installations at risk of water scarcity. We also analyzed information contained in the six DOD assessments conducted from April 2017 through January 2019 that identify installations at risk of water scarcity three Office of the Secretary of Defense (OSD) assessments and three military department assessments to determine the extent to which the assessments identified the same or different installations. 
Specifically, we analyzed the following DOD assessments: two OSD assessments that focused on climate-related risks to installations: Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, Department of Defense Climate- Related Risk to DOD Infrastructure Initial Vulnerability Assessment Survey (SLVAS) Report (January 2018). We analyzed information on military installations vulnerable to drought in this assessment. Office of the Under Secretary of Defense for Acquisition and Sustainment, Report on Effects of a Changing Climate to the Department of Defense (January 2019). We analyzed information on military installations vulnerable to drought in this assessment. one OSD assessment that focused on installation energy performance, which included an appendix with information on military installations vulnerable to water scarcity: Office of the Assistant Secretary of Defense for Energy, Installations, and Environment, Department of Defense Annual Energy Management and Resilience Report (AEMRR) Fiscal Year 2017 (July 2018). We analyzed the information on military installations vulnerable to water scarcity in this assessment. three military department assessments that contained information on water-related risks: U.S. Air Force, Summary Information on Installations with Water Hazards (Provided November 2018). We analyzed information on military installations with catastrophic and critical water hazards in this assessment. U.S. Navy, including the Marine Corps, CNA, Assessing Water Risk at DON Installations Identifying Hazards and Water Management Challenges (December 2017). We analyzed information on military installations with water availability risk in this assessment. U.S. Army, FY17 Installation Status Report (Mission Capacity) Water Data Analysis (April 2017). We analyzed information on military installations with minor and severe potable water risk. In analyzing these six assessments, we focused on active-duty military installations in the contiguous United States at risk of water scarcity. Further, to discuss the methodologies used in the six assessments, we interviewed officials who were knowledgeable about the various assessments: officials from the OSD s Office of the Assistant Secretary of Defense for Sustainment, each of the military departments with responsibilities for water management at military installations, CNA, which completed the Department of the Navy s assessment, and the University of Nebraska Lincoln s National Drought Mitigation Center, which hosts the U.S. Drought Monitor map that shows parts of the United States in drought. We compared the methodologies used to develop OSD s three assessments and the military departments three assessments with five leading practices for identifying and analyzing risks of water scarcity. We derived the five leading practices from the Department of Energy s and the United States Environmental Protection Agency s compilation of 14 water efficiency best management practices, and principles published in the University of Nebraska Lincoln s National Drought Mitigation Center s 10-Step Drought Planning Process. These leading practices are: (1) identify current water availability, (2) identify future water availability, (3) take into account all sources of water, (4) precisely identify locations, and (5) comprehensively include all locations. According to the 10-Step Drought Planning Process, data and information derived from these leading practices contribute to a reliable assessment of water availability. 
We discussed these five leading practices we identified with officials from the Office of the Assistant Secretary of Defense for Sustainment and the military departments and gained their agreement about using these practices for determining installations at risk of water scarcity. We then determined whether, in their respective methodologies, OSD s and the military departments assessments had followed each of these five leading practices. Specifically, we considered the identify current water availability leading practice as followed if OSD s and the military departments assessment was annually reporting water use or status of water supply, and the leading practice as not followed if the assessment was not annually reporting water use or status of water supply; identify future water availability leading practice as followed if OSD s and the military departments assessment noted whether climate change was a factor in their assessment or considered future water availability from non-climate-change-related factors and the leading practice as not followed if the assessment did not note whether climate-change was a factor in their assessment or consider future water availability from non-climate-change-related factors; take into account all sources of water leading practice as followed if OSD s and the military departments assessment noted consideration of alternate water sources (such as groundwater, purchase agreements, additional reservoirs, etc.) and the leading practice as not followed if the assessment did not note consideration of alternate water sources (such as groundwater, purchase agreements, additional reservoirs, etc.); precisely identify locations leading practice as followed if OSD s and the military departments assessment noted the specific location of the installation they were reviewing and provided data specifically from that installation, and the leading practice as not followed if the assessment did not note the specific location of the installation they were reviewing and provide data specifically from that installation; and comprehensively include all locations leading practice as followed if OSD s and the military departments assessment considered all the locations at potential risk of water scarcity within the scope of their assessment, and the leading practice as not followed if the assessment did not consider all the locations at potential risk of water scarcity within the scope of their assessment. Specifically, for OSD s Department of Defense Climate-Related Risk to DOD Infrastructure Initial Vulnerability Assessment Survey (SLVAS) Report and its Department of Defense Annual Energy Management and Resilience Report (AEMRR) Fiscal Year 2017, the scope of the assessments included all DOD installations; for OSD s Report on Effects of a Changing Climate to the Department of Defense, the scope of the assessment included 79 mission-assurance priority installations; and for the military department assessments, the scope included all respective installations within each military department. To obtain information about water scarcity at individual installations, we selected a nongeneralizable sample of active-duty military installations in the contiguous United States. To develop this sample, we included installations that were identified by DOD assessments as having water- related vulnerabilities and by military department officials in interviews as having ongoing pilot studies or issues related to water scarcity. 
We also included installations that had (1) historically experienced water scarcity (prior to 2014); (2) recently experienced water scarcity (from 2014 to 2019); and (3) are projected to experience severe water scarcity (over the next 20 years or longer). From these criteria, we selected a nongeneralizable sample of 17 installations that were identified in OSD s three assessments that reflected diversity in military service, mission, and water scarcity (see table 4). We visited five of these installations in person and contacted the remaining 12 installations by email. We selected the five installations to visit because three installations (Naval Air Facility El Centro, California; Marine Corps Air Station Yuma, Arizona; and Luke Air Force Base, Arizona) provided diversity among military services and were in close proximity to each other, which allowed us to visit multiple locations in one trip; one installation (Vandenberg Air Force Base, California) had been identified in all three OSD assessments and the applicable military department assessment as being at risk of water scarcity; and one installation (Fort Bragg, North Carolina) provided geographic diversity and inclusion of at least one installation per military service in our sample. For the remaining 12 installations, we developed and sent by email a list of similar questions and document requests that we used during our site visits. We received responses from all 12 installations. Results from our nongeneralizable sample cannot be used to make inferences about all DOD installations. However, the information from these installations provides valuable insights about how water is being used by these installations for their mission-related activities and whether water scarcity had affected or was expected to affect their mission-related activities. To determine the extent to which DOD has assurance it is using accurate and reliable information about installations at risk of water scarcity to manage water resources across the department, we compared the information DOD has from the various assessments with Standards for Internal Control in the Federal Government on using quality information to achieve agency objectives. We conducted this performance audit from September 2018 to November 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: List of Installations Identified in Department of Defense (DOD) Assessments as Being at Risk of Water Scarcity Table 5 provides a list of the 102 individual active-duty military installations in the contiguous United States that were identified in at least one of six DOD assessments three Office of the Secretary of Defense assessments and three military department assessments as being at risk of water scarcity. Appendix III: Comments from the Department of Defense Appendix IV: GAO Contact and Staff Acknowledgments <6. GAO Contact> <7. 
Staff Acknowledgments> In addition to the contact named above, Brian Lepore (Director), Jodie Sandel (Assistant Director), Barbara Wooten (Analyst-In-Charge), Tracy Barnes, Chane Gaskin, Gina Hoover, Mae Jones, Mary Jo LaCasse, Amie Lesser, Shahrzad Nikoo, Paulina Reaves, and Edward Rice made key contributions to this report.
Why GAO Did This Study
DOD reported in January 2019 that critical installations are at risk of water scarcity—that is, of not having sufficient water available to meet their mission needs. According to military department officials, installations depend on water for activities such as training, weapons testing, fire suppression, and sanitation. In its 2018 Fourth National Climate Assessment , the U.S. Global Change Research Program reported that warming temperatures will continue to cause worsening droughts and the decline of surface water quality.
Senate Report 115-262 included a provision for GAO to review DOD's identified or potential effects of water scarcity. For this report, GAO evaluated the extent to which DOD has assurance that it is using reliable information to identify installations at risk of water scarcity. GAO analyzed DOD's six assessments conducted from April 2017 through January 2019 to identify installations at risk of water scarcity and compared the assessments with five leading practices for identifying and analyzing water scarcity. GAO also interviewed officials from OSD and the military departments and contacted a nongeneralizable sample of 17 installations identified in OSD's assessments to reflect diversity in military service, mission, and water scarcity.
What GAO Found
GAO found that the Department of Defense (DOD) does not have assurance that it is using reliable information regarding which installations are at risk for water scarcity. When comparing the results of six Office of the Secretary of Defense (OSD) and military department assessments on installations vulnerable to water scarcity, GAO found that they varied markedly, raising questions about their quality and about which source of information DOD is using to determine which installations are vulnerable to water scarcity (see figure).
An OSD official stated that the three OSD-produced assessments provided the best information available on which installations are at risk of water scarcity. However, GAO found that these assessments did not reflect four of five leading practices for identifying and analyzing water scarcity—practices that contribute to a reliable assessment of water availability. Specifically, OSD did not always (1) identify current water availability, (2) identify future water availability, (3) take into account all sources of water, or (4) precisely identify locations. Further, although GAO found that the three military department assessments aligned with all leading practices, OSD officials disagreed as to whether these assessments can and should be used to identify installations at risk of water scarcity across the defense enterprise. Until OSD resolves the question as to whether it should conduct a department-wide assessment of installations that aligns with leading practices or whether it should rely on the military department assessments, the department will not have assurance that it is using reliable information to assess water scarcity.
What GAO Recommends
GAO recommends that the Office of the Secretary of Defense assess whether it should conduct a coordinated, department-wide assessment aligned with leading practices or rely on military department assessments to determine which DOD installations are at risk of water scarcity. DOD concurred with GAO's recommendation.
<1. Background> <1.1. Federal Agencies' Roles and Responsibilities for Processing Family Units> After Border Patrol agents or OFO officers apprehend noncitizen family units, they are to interview each individual, using interpreters if needed, and collect personal information such as their names, countries of nationality, and age. Agents and officers also collect biometric information, such as photographs and fingerprints, from certain individuals, including those in family units. Border Patrol agents and OFO officers use fingerprints to run records checks against federal government databases to determine whether individuals have any previous immigration or criminal history. Agents and officers are to enter information about the individuals in the appropriate automated data system as soon as possible, in accordance with CBP policy. According to Border Patrol and OFO officials, if noncitizens are determined to be ineligible for admission into the United States, agents and officers must determine whether to place them, including those arriving in family units, into full or expedited immigration removal proceedings, consistent with the Immigration and Nationality Act. In full removal proceedings, individuals have the opportunity to present evidence to an immigration judge to challenge their removal from the country and apply for various forms of relief or protection, including asylum. In expedited removal proceedings, the government can order individuals removed without further hearing before an immigration judge unless they express the intent to apply for asylum or a fear of persecution or torture if returned to their home country. Most arriving family units are eligible to be placed into expedited removal proceedings, with certain exceptions, according to Border Patrol and OFO officials. A 2015 CBP policy requires CBP's agents and officers to record such decisions for each family unit member in agency data systems. Further, Border Patrol agents and OFO officers print copies of the information they enter into data systems to create a paper file, known as an A-file, for each family unit member they apprehend. One of the key required DHS forms in the A-file is Form I-213, Record of Deportable/Inadmissible Alien (Form I-213). Among other things, this form captures biographic information and includes a narrative section for agents and officers to capture details about the circumstances of the apprehension. According to Border Patrol and OFO headquarters officials, each family unit member's A-file is reviewed and approved by a supervisor. If CBP or ICE determines that a family separation is warranted, agents or officers process the child or children as UAC, according to Border Patrol, OFO, and ICE officials. ICE's Office of Enforcement and Removal Operations is generally responsible for transferring these children, including those separated from a parent, as appropriate, to ORR. Under the Trafficking Victims Protection Reauthorization Act of 2008, children must be transferred to ORR within 72 hours after determining that they are UAC, except in exceptional circumstances. Table 1 provides additional details about DHS and HHS roles in processing family units. DHS officials told us that CBP typically holds family units together for a limited time before transferring them together to ICE, in accordance with CBP policy.
During that time, agents and officers decide on a case-by-case basis whether to place each family unit in expedited or full immigration proceedings, according to Border Patrol and OFO officials. Individuals, including family unit members, placed in expedited removal proceedings and who express a fear of persecution or torture are generally subject to mandatory detention under the Immigration and Nationality Act pending a final credible fear determination. As a result, Border Patrol and OFO officials stated that their agents and officers typically determine whether ICE has space in its family residential centers before processing family units into expedited removal proceedings. From June 2014 through October 2019, ICE, during various periods, operated four family residential centers in Texas, Pennsylvania, and New Mexico for family units who may be subject to removal while they await the resolution of their immigration cases or who have been ordered removed from the United States. As of October 2019, ICE maintains three family residential centers in Dilley, Texas; Karnes City, Texas; and Leesport, Pennsylvania, with a cumulative capacity of 3,326 beds. For information about these facilities, see table 2. <1.2. Timeline of Family Separation Policies> CBP has historically separated children apprehended in family units from their parent(s) in specific circumstances, such as if the parental relationship could not be confirmed, if there was reason to believe the adult was participating in human trafficking, or if the parent was otherwise a threat to the safety of the child. As we reported in October 2018, ORR officials began observing an increase in the percentage of children in its care who were separated from their parents beginning in 2017. ORR officials stated they saw a continued increase in separated children in their care in the first few months of calendar year 2018. In April 2018, the U.S. Attorney General directed federal prosecutors to implement a zero-tolerance policy along the southwest border for immigration offenses and to accept all improper entry cases referred for prosecution to the extent practicable. According to DHS officials, after the Attorney General's April 2018 memo, CBP began referring a greater number of adults apprehended at the border to the Department of Justice for criminal prosecution, including parents who were apprehended with minor children. CBP generally then separated the family unit, and after processing the children as UAC, CBP transferred them to ORR custody. According to CBP headquarters officials, the goal of the zero-tolerance policy was to deliver a consequence to those crossing the border illegally by charging and convicting them of a crime, specifically a criminal conviction for improper entry, which is generally a misdemeanor. This could then lead to escalating criminal consequences for subsequent apprehensions, since noncitizens (in this case, adults in family units entering the United States illegally for a second time) could be charged with illegal reentry after removal from the United States, a felony offense. On June 20, 2018, the President issued an executive order directing that alien families generally be detained together. On June 26, 2018, a federal judge ruled in the Ms. L. v. ICE case, which was filed by the American Civil Liberties Union on behalf of certain parents (referred to as class members) who had been separated from their children.
The June 2018 court order stated that certain separated parents must be reunited with their minor children, barring certain disqualifying criteria. On June 27, 2018, the CBP Commissioner issued a policy memorandum to provide direction on complying with the court order, to include potential reasons why a family separation may still be warranted. Figure 1 describes key actions since the Attorney General s April 2018 memo that have influenced how DHS determines when family separations are warranted. On July 10, 2018, the court approved reunification procedures for the class members covered by the June 2018 court order. At that time the approved class included those adult parents separated from their children by DHS whose children were in ORR custody as of June 26, 2018, barring certain disqualifying criteria. Subsequently, on March 8, 2019, the court ordered an expansion of the class members to include all adult parents, subject to the same disqualifying criteria, who entered the United States at or between designated ports of entry on or after July 1, 2017, and were separated from their children by DHS. As of January 15, 2020, the government provided to the plaintiffs 11 lists identifying a total of 1,556 children of potential expanded class members. This brought the total number of possible separated children of potential class members to 4,370. <2. Number of CBP Apprehensions of Family Unit Members Was Greater in the First Two Quarters of Fiscal Year 2019 Than in All of Fiscal Year 2018> CBP data indicate that the number of CBP apprehensions of family unit members was greater in the first two quarters of fiscal year 2019 than in all of fiscal year 2018. In addition, apprehensions of family unit members increased from approximately 22 percent of all southwest border apprehensions in fiscal year 2016 to approximately 51 percent of all such apprehensions in the second quarter of fiscal year 2019. The data also indicate that the majority of CBP apprehensions of family unit members were Central American nationals and the majority of apprehensions of children in family units were for children under the age of 12. Further, the data indicate that CBP placed family unit members in full removal proceedings before immigration courts at an increasing rate, and most were released into the United States to await their immigration court proceedings. Finally, CBP data indicate that CBP separated at least 2,700 children from their parents from April 2018 through March 2019. <2.1. CBP s Apprehensions of Family Unit Members Increased to Approximately 51 Percent of All Southwest Border Apprehensions in Second Quarter Fiscal Year 2019> CBP data indicate that the number of apprehensions of family unit members along the southwest border increased from about 120,400 apprehensions in fiscal year 2016 to about 160,400 apprehensions in fiscal year 2018. Further, CBP apprehensions of family unit members reached about 213,400 during the first two quarters of fiscal year 2019 alone approximately a 33 percent increase over the entire previous fiscal year. Cumulatively, along the southwest border, CBP apprehensions of family unit members reached about 599,000 apprehensions from fiscal year 2016 through the second quarter of fiscal year 2019 (see fig. 2). As shown in figure 3, CBP data indicate that apprehensions of family unit members grew from about 22 percent of total southwest border apprehensions in fiscal year 2016 to about 51 percent of such apprehensions during the first two quarters of fiscal year 2019. 
CBP data indicate that, during this period, OFO apprehensions of family unit members at U.S. ports of entry accounted for approximately 24 percent of all such CBP apprehensions. Border Patrol apprehensions of family unit members between ports of entry accounted for approximately 76 percent of all such CBP apprehensions. About 63 percent of CBP s total family unit member apprehensions occurred in just three Border Patrol sectors in Texas and Arizona (see fig. 4). <2.2. The Majority of CBP Apprehensions of Family Unit Members Were Central American Nationals, and the Majority of Children in Family Units Were under Age 12> CBP data indicate that most apprehensions of family unit members from fiscal year 2016 through the second quarter of fiscal year 2019 were of Central American nationals and that the majority of children in family units were under the age of 12. Figure 5 shows that from fiscal year 2016 through the second quarter of fiscal year 2019, the vast majority of these apprehensions about 82 percent were nationals of Guatemala, Honduras, or El Salvador. Additionally, about 10 percent of apprehensions of family unit members were of Mexican nationals and approximately 7 percent were nationals of other countries. From fiscal year 2016 through the first two quarters of fiscal year 2019, CBP apprehensions of children in family units totaled approximately 327,600. About 72 percent of these apprehensions were of children under the age of 12 when apprehended by CBP, and about 32 percent were under age 5 (see table 3). Border Patrol also maintains information in its data system that allowed us to analyze the composition of family units, that is, whether the family unit was headed by a male or female and how many children were in the family unit. Most family units apprehended by Border Patrol about 85 percent consisted of a single parent travelling with a single child. Most family units were led by a single female in fiscal year 2016; however, the number of households led by single males increased and, for the first two quarters of fiscal year 2019, accounted for almost half of the family units Border Patrol apprehended. Appendix II contains additional information about the composition of family units, including the immigration history of adult family members. <2.3. CBP Placed Family Unit Members in Full Removal Proceedings before Immigration Court at an Increasing Rate and Most Were Released Into the United States to Await Proceedings> From fiscal year 2016 through the first two quarters of fiscal year 2019, CBP placed an increasing percentage of family unit members into full removal proceedings. Specifically, CBP data indicate that around 46 percent of all apprehensions of family unit members in fiscal year 2016 resulted in the family unit members receiving Notices to Appear before an immigration court, which initiate full removal proceedings; around 88 percent received Notices to Appear during the first two quarters of fiscal year 2019. Conversely, CBP data indicate that CBP placed a decreasing percentage of all apprehensions of family unit members into expedited removal proceedings during this period. Specifically, the percentage declined from about 42 percent of all apprehensions of family unit members in fiscal year 2016 to about 6 percent during the first two quarters of fiscal year 2019. 
CBP officials stated that, since the volume of family units apprehended at the border increased in 2018, they have placed fewer family unit members into expedited removal proceedings, for which detention is generally mandatory, due to limited space for family units in ICE's family residential centers.

Department of Homeland Security's (DHS) Migrant Protection Protocols
In January 2019, DHS introduced the Migrant Protection Protocols, also referred to as the Remain in Mexico program, at selected ports of entry and, as of March 2019, within certain Border Patrol sectors. Under this policy, CBP issues eligible individuals, including family unit members, Notices to Appear before an immigration court, thereby initiating full removal proceedings. After CBP agents and officers complete processing duties, DHS officials stated that CBP returns the individuals to Mexico to await their court proceedings, rather than releasing them into the interior of the United States. According to CBP officials, through the end of fiscal year 2019, CBP processed approximately 44,200 individuals (among which about 30,100 individuals, or 68 percent, were family unit members) using the Migrant Protection Protocols.

While ICE generally has the authority to detain individuals for the duration of their full removal proceedings, CBP and ICE officials stated that ICE faces constraints that typically prevent it from doing so for family units. Specifically, the limited amount of space at family residential centers is reserved for those family units placed in expedited removal. Therefore, according to ICE and CBP officials, with few exceptions, during the period of our review, family units placed into full removal proceedings were released into the United States to await their court proceedings. According to ICE officials, even if there were more detention space for family units, there are other constraints that would prevent ICE from detaining family unit members (placed into full removal proceedings at any point) for the duration of their court proceedings. Specifically, children may generally only be held in federal immigration detention for 20 days pursuant to the Flores Agreement. Most full removal proceedings, however, take longer than 20 days. <2.4. CBP Separated at Least 2,700 Children from Their Parents from April 2018 through March 2019> According to Border Patrol and OFO data, CBP separated at least 2,700 children from April 19, 2018, through the second quarter of fiscal year 2019. As we discuss later in this report, CBP may have separated additional children from their parents during this period and not recorded this information in its data systems. As a result, we are reporting approximate, rounded figures on family separations. Specifically:

Border Patrol updated its data system to track family unit separations on April 19, 2018, and issued written guidance to its agents about these changes on May 7, 2018, and August 2, 2018. From April 19, 2018, through March 31, 2019, Border Patrol data indicate that agents separated at least 2,670 children.

OFO updated its data system to track family unit separations on June 26, 2018, and issued guidance on these changes to its officers on June 29, 2018. From June 30, 2018, through March 31, 2019, OFO data indicate officers separated at least 30 children.

As shown in table 4, CBP data indicate that the number of family unit separations was highest between April 19, 2018, and June 27, 2018, due to DHS's response to the U.S.
Attorney General s April 2018 zero tolerance policy (see table 4). CBP data also indicate that a small percentage of all children that arrived in family units fewer than 2 percent were separated from their parents during these time frames. Appendix II provides additional information about the characteristics of family units separated by CBP. Border Patrol and OFO data indicate that the reasons for these family unit separations varied. Regarding Border Patrol, as of April 19, 2018, agents were able to record a family separation and select from options to explain the reason for it in Border Patrol s automated data system. Border Patrol data indicate that the reasons that 97 percent of the adults and children separated from April 19 through June 27, 2018, were because agents referred the parent to the Department of Justice for criminal prosecution on charges for criminal history or other reasons, or due to a prior immigration violation(s) and a removal order. Table 5 shows the reasons for family separations indicated in Border Patrol data from April 19, 2018 through March 31, 2019. Regarding OFO, as of June 30, 2018, officers were to record the reason for any family unit separation with the child s record in OFO s automated data system. From June 30, 2018 to March 31, 2019, OFO data indicate that about 50 percent of adults and children were separated due to the criminal history of the adult or a child safety concern. Table 6 shows the reasons for family separations indicated in OFO data from June 30, 2018 through March 31, 2019. <3. CBP Developed Some Policies and Procedures for Processing Family Units but Does Not Have Sufficient Controls to Ensure Effective Implementation> <3.1. Border Patrol and OFO Have Policies and Procedures for Collecting Data on Family Units, and OFO Is Updating Its Data System to Link Parents and Children s Records> Since 2015, Border Patrol and OFO have issued policies and updated procedures regarding the information to be collected about family units and family separations, increasing the amount of data collected for family units. For example, Border Patrol and OFO have updated their data systems to better track the number of individuals apprehended as part of family units and to record when and why family separations occur. Specifically, Border Patrol updated its data system in October 2015 to track whether individuals were apprehended as members of a family unit and again in 2018 to track family separations. On October 2, 2015, the Chief of the Border Patrol issued policy guidance requiring agents to process family units together in its data system with a unique identifier called a family unit number, which links the records of parents and children apprehended together. Border Patrol updated its system on April 19, 2018 and on August 2, 2018, to track the number of separated adults and children and the reasons for the separations, and issued guidance to its agents about these updates. New Border Patrol agents also receive mandatory training on, among other topics, recording information into agency data systems, including procedures specific to family units. OFO updated its data system to track whether children under the age of 18 arrived as part of a family unit and whether they were separated from a parent (or other family member) with whom they arrived. OFO headquarters officials stated the updates were made during fall 2015. 
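The family unit number described above is, in effect, a shared key that ties together the records of a parent and child processed at the same time. The short Python sketch below illustrates the general idea only; the record fields, identifier formats, and function are hypothetical and are not drawn from CBP's actual data systems.

```python
# Minimal sketch of linking family members with a shared identifier.
# All field names and values are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ApprehensionRecord:
    subject_id: str                    # unique identifier for the individual
    role: str                          # e.g., "parent" or "child"
    family_unit_number: Optional[str]  # shared key linking members processed together

def link_family(records: list, family_unit_number: str) -> list:
    """Stamp every record processed together with the same family unit number."""
    for record in records:
        record.family_unit_number = family_unit_number
    return records

# A parent and child processed together share one family unit number,
# so either record can later be traced back to the other.
parent = ApprehensionRecord("A-0001", "parent", None)
child = ApprehensionRecord("A-0002", "child", None)
link_family([parent, child], "FMU-12345")
assert parent.family_unit_number == child.family_unit_number
```

As the sections that follow discuss, when this shared key is never assigned (or is removed), a later separation of the parent and child does not surface in aggregated separation data.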
On June 29, 2018, OFO issued a policy memorandum that, among other things, required officers to track family separations in OFO s data system, and announced system updates to allow officers to select a separation reason. This and subsequent data system updates allowed officers to identify which separations were temporary (in which the family was reunited while still in OFO custody), and which were permanent (resulting in OFO referring a child to ORR), according to OFO officials. All OFO officers hired as of March 2011 receive mandatory training on certain processing procedures, including recording information into agency data systems. As of October 2019, OFO s data system does not have the capability such as by using a family unit number to link the records of noncitizen parents and children apprehended together and thus cannot determine the total number of adults involved in family separations. OFO is implementing a new data system across all ports of entry that includes a function to link the records of parents and children in family units using a unique identifier. According to OFO officials, OFO began developing the new data system in August 2017 and, as of October 2019, has implemented it at 90 ports of entry, none of which are land ports of entry along the southwest border. In June 2019, OFO officials stated they planned to train OFO officers on the new data system at land ports of entry along the southwest border in late summer 2019, but as of October 2019 that timeline had been delayed due to the high volume of family units apprehended that summer. According to OFO headquarters officials, they expected to deploy the new system to locations along the southwest border on an ongoing basis as conditions allow. It is too soon to determine whether the new data system will enable OFO to link children apprehended at ports of entry to their parents and allow for OFO to track the total number of family members separated in its aggregated data. It is also too soon to determine whether the new system will provide OFO officers with more readily available information that could help reunify separated family units, if necessary. <3.2. Border Patrol Updated Policy Documents for Processing Family Units in April 2019, but Border Patrol and OFO Training Materials Still Include Inconsistent Definitions of Family Units> Since October 2015, some Border Patrol and OFO documents have included inconsistent guidance on how agents are to define a family unit for processing purposes. CBP s 2015 policy defines a family unit to include one or more non-U.S. citizen juvenile(s) accompanied by his or her parent(s) or legal guardian(s), which Border Patrol agents confirmed is the agency s official definition that should guide how its agents process family units. However, as shown in table 7, certain Border Patrol policy documents since October 2015 have also stated that all members of the apprehended family unit must be non-criminal and/or non-delinquent and have no history of violence or substance abuse. As a result, individuals in family units that Border Patrol considered criminal, delinquent, or to have a history of violence or substance abuse may not have been included in Border Patrol s aggregated data on apprehended family units and family separations (once the agency began tracking separations in April 2018), because agents did not define and process them as family units. 
We raised these inconsistencies to Border Patrol headquarters officials in April 2019, and they acknowledged that certain policy and training documents contained inaccurate definitions and guidance, which could have led some agents to process certain parents and children separately, without a family unit number to link their records. Specifically, they stated that the language requiring that all members of the family unit must be non-criminal and/or non-delinquent, and have no history of violence or substance abuse should not be included in Border Patrol s definition of a family unit. In addition, officials noted that any guidance directing agents to process a family unit separately in the data system, as a single adult and UAC rather than linked together with a family unit number, due to a planned prosecution referral is inconsistent with Border Patrol s processing procedures. They stated this was an oversight and not an intentional change to the agency s official definition as indicated in CBP s 2015 policy. The Border Patrol headquarters officials were unsure of how often the inconsistent definitions and guidance may have led agents to incorrectly process family units. On the basis of our analysis of Border Patrol and ORR data, we found evidence that agents processed some family units separately, as single adults and UAC, without a family unit number or record of their separation. Specifically, for children apprehended from June 28, 2018 through March 31, 2019, we compared ORR numbers on UAC involved in family separations to Border Patrol apprehension data on separated children. During that period, ORR records indicated that DHS separated 396 children, while Border Patrol apprehensions data indicated that it separated 180 children. Border Patrol headquarters officials confirmed that the discrepancy we identified between Border Patrol data and ORR records may be attributable, in part, to the agents processing family units incorrectly and separately, without assigning them a family unit number. To better understand the discrepancy between the ORR and Border Patrol data, we selected a random, nongeneralizable sample of 40 ORR records for UAC involved in family separations from June 28, 2019 through March 31, 2019, and found matches for each of the children in Border Patrol apprehensions data. In 14 of the 40 selected ORR records, Border Patrol data indicated the agent had not recorded the child as a member of a family unit linked to a parent s record with a family unit number. Thus, Border Patrol agents had not recorded the subsequent separation when agents referred the children to ORR as UAC. A Border Patrol headquarters official stated that it is also likely that some agents were processing family units separately, rather than linking them with a family unit number, from May to June 2018 when agents were referring parents for criminal prosecution in response to the April 2018 zero- tolerance policy. The official stated that agents may not have realized that assigning a family unit number was necessary to track the separation in the Border Patrol data. During the course of our audit, we discussed this issue with Border Patrol and, as a result, Border Patrol issued new guidance to its sectors in April 2019 with an updated definition of family units consistent with CBP policy. According to Border Patrol officials, Border Patrol also removed previous policy documents, with the incorrect definitions and guidance, from a website accessible to all Border Patrol agents. 
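To make the cross-check described above concrete, the following sketch compares a list of separated-child records (as ORR might report them) against apprehension records and flags children whose records carry no family unit number, meaning the separation would not appear in aggregated separation data. This is a simplified illustration under assumed field names; it does not reflect ORR's or Border Patrol's actual schemas or matching procedures.

```python
# Simplified illustration of cross-checking two record sets.
# All identifiers and field names are hypothetical.

def find_unlinked_separations(orr_separations, apprehension_records):
    """Return IDs of separated children whose apprehension record has no
    family unit number (i.e., the separation is not trackable in aggregate)."""
    by_child = {rec["child_id"]: rec for rec in apprehension_records}
    unlinked = []
    for child in orr_separations:
        match = by_child.get(child["child_id"])
        if match is not None and match.get("family_unit_number") is None:
            unlinked.append(child["child_id"])
    return unlinked

orr_separations = [{"child_id": "C-100"}, {"child_id": "C-200"}]
apprehension_records = [
    {"child_id": "C-100", "family_unit_number": "FMU-12345"},  # separation trackable
    {"child_id": "C-200", "family_unit_number": None},         # processed as UAC only
]
print(find_unlinked_separations(orr_separations, apprehension_records))  # ['C-200']
```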
However, as of late November 2019, Border Patrol training materials still direct agents to process a parent and child separately, without a family unit number, if a family member has a history of criminality, delinquency, violence, or substance abuse, or if Border Patrol plans to prosecute the parent. This definition and guidance, inconsistent with CBP policy, has been included in training provided to all new agents at Border Patrol s basic training program since at least October 2017. According to officials from the Border Patrol Academy, which is responsible for updating training materials in coordination with program officials, they plan to update the training materials in 2020. In the meantime, since September 2019, the Border Patrol Academy has been providing trainees with a handout that includes a definition of family units consistent with CBP policy. Regarding OFO, we also found that since 2012, training materials for new officers have included a definition of a family that is inconsistent with CBP and OFO policy. Specifically, OFO training materials issued in January 2012 and in use as of November 2019 define a family group as a juvenile who is accompanied by closely related adults (parent, grandparent, brother, sister, or legal guardian) and considers the juvenile to be UAC if the juvenile is accompanied by relative(s) not closely related. The training document does not include a definition for family unit. However, other key OFO policy documents issued subsequently define family units in a way that is consistent with CBP policy namely, a February 2016 memo on processing family units in OFO s data system and a June 2018 memo on tracking family separations. We raised the discrepancy in OFO s training materials with OFO headquarters officials in June 2019. OFO officials were unsure whether this definition had led any officers to incorrectly process adults and children as family units when they did not meet CBP s definition of a family. OFO headquarters officials stated the training materials were inconsistent with CBP and OFO policy, and officials from CBP s Office of Training and Development stated they updated the training materials and provided them to OFO in late November 2019. However, as of December 2019, CBP had not provided us with the updated materials to verify that the revisions are consistent with CBP policy. Standards for Internal Control in the Federal Government states that management should design control activities, including by providing the right training tools to achieve operational success. In addition, in GAO s Guide for Strategic Training and Development Efforts, we have reported that senior managers need to continually observe and assess how changes, such as in policies or practices, may affect the agency s training needs. This is one way, among others, to help ensure that the agency has a framework to achieve its mission. Border Patrol and OFO officials acknowledged the need to update training materials with definitions and guidance, consistent with CBP policy; they explained that they had not yet done so due to the considerable time and coordination it requires. Issuing updated training materials that reflect CBP policy would help CBP ensure that Border Patrol agents and OFO officers are processing family units appropriately and tracking all separations. <3.3. 
CBP Has Policies and Procedures to Address Concerns about the Validity of Family Unit Relationships but Does Not Have Sufficient Guidance to Ensure Cases Are Well Documented> CBP has policies and procedures for assessing the validity of family units, but does not have written guidance to help ensure that these cases are well documented, as required by CBP policy. <3.3.1. Assessing the Validity of Family Relationships> CBP has policies and procedures for assessing the validity of family unit relationships. During processing, Border Patrol and OFO officials said that it is standard practice for agents and officers to assess whether (1) adults and children apprehended together meet CBP s definition of a family unit and (2) whether agents and officers deem the claimed family relationships to be potentially invalid. A CBP policy issued on June 27, 2018, states that fraudulent claims of family relationships should be processed under current CBP policies and procedures. In practice, this means that agents and officers are to consider the validity of family relationships on a case-by-case basis with the information they have available at that time, according to Border Patrol and OFO headquarters officials. For example, these officials stated that agents and officers review any available documentation, such as birth certificates, presented by individuals; monitor interactions between adults and children to assess whether interactions are typical of that of a parent and child; and generally use their law enforcement training, such as interviewing skills, to help assess the validity of family relationships. Border Patrol and OFO officials noted that, in some instances, individuals have admitted to falsely posing as a family, while other times agents and officers have to make an assessment based on the totality of the information available to them. In accordance with CBP policy, Border Patrol and OFO are to generally hold individuals no longer than 72 hours, so Border Patrol and OFO officials stated they must assess the validity of the family units based on available information during the time they have individuals in custody. According to Border Patrol and OFO officials and documents, they have observed cases in which (1) a family unit claims a child is under 18 years of age, but agents suspect the child is older, and thus they do not meet CBP s definition of a family unit, or (2) the adult claims to be the parent, but Border Patrol has concerns that the adult is another family relation, such as an aunt or older sibling, or the adult is not related to the child at all. In June 2019, the Acting Secretary of Homeland Security testified that CBP identified almost 4,800 migrants this year in family units that CBP agents and officers determined to be fraudulent in nature. In cases when Border Patrol agents or OFO officers, with approval from their respective supervisors, assess that the relationship of a family unit may not be valid, the child is to be processed as a UAC and transferred to ORR. Specifically, according to Border Patrol and OFO officials and documents, agents and officers are to indicate in their data systems that the adult and child were separated and the reason why, and then refer the child to ORR as a UAC. For Border Patrol, this process involves removing the family unit number linking their records. Border Patrol and OFO do not consider these cases to be family separations, since CBP assessed that the individuals may not be part of a valid family unit. 
According to CBP's June 2018 policy, if a child arrives with an adult claiming to be the child's parent, a supervisory-level OFO or Border Patrol official must give approval before an agent or officer transfers the child to ORR as a UAC. According to Border Patrol and OFO headquarters officials, if an adult wishes to appeal CBP's assessment, the adult may raise the issue with ICE officers when transferred to an ICE detention facility. Border Patrol headquarters officials told us that Border Patrol agents generally explain to the adults that Border Patrol is processing them separately from the children they arrived with due to concerns about the validity of the family relationship. OFO headquarters officials told us that OFO does not generally notify adults when processing the adults and children in potentially invalid family units, because officers do not want to jeopardize the safety of the child if they suspect fraud, smuggling, or trafficking. According to Border Patrol and OFO officials, they may separate adults and children who they are concerned might not be valid family units to ensure the safety of the child, for example, if agents and officers cannot be certain that the child has not been a victim of trafficking by the accompanying adult. Further, Border Patrol, OFO, and ICE officials stated that ICE and ORR are better positioned to further investigate these cases if an adult refutes CBP's assessment that the family unit was invalid, because CBP must generally hold individuals for a short period. In addition, ICE and ORR are the agencies most involved in reunifying family units, when appropriate. <3.3.2. Documenting Cases of Potentially Invalid Family Relationships> CBP began tracking the number of potentially invalid family units in 2018. On April 19 and June 29, 2018, Border Patrol and OFO, respectively, issued guidance about updates to agency data systems that enable agents and officers to record potentially invalid family units. That is, if the appropriate Border Patrol and OFO managers give approval, agents and officers separate potentially invalid family units and record the separation and the reason for it in agency data systems, according to Border Patrol and OFO officials and documents. More specifically: Border Patrol agents are to delete the family unit number from the parents' and children's records and indicate the reason from options that include "child is over the age of 18," "no family relationship," or "no family relationship prosecuted." OFO officers are to indicate that a child was separated from the adult with whom the child arrived and are to indicate the reason as "fraudulent relationship." Border Patrol and OFO officials noted observing cases of potentially invalid family units, and Border Patrol data indicate an increase in the number since Border Patrol began tracking the cases in April 2018. Specifically, during our fall 2018 visits to ports of entry and border stations in Texas and California, Border Patrol and OFO officials stated they have observed suspected or confirmed cases of adults falsely claiming to be a child's parent, including occasional instances of seeing the same child apprehended multiple times, but with different adults claiming to be the child's parents. From April 19, 2018, through March 31, 2019, CBP data indicate that CBP referred at least 921 children to ORR (918 by Border Patrol and 3 by OFO) due to CBP's concerns that the family relationships were potentially invalid.
During the same period, Border Patrol data also indicated that 2,245 adults were processed separately from the children with whom they were apprehended due to concerns about the validity of the family relationships. By comparison, Border Patrol data indicated that agents processed 256,743 adults and children in valid family units during this period. However, from July 1, 2018, through March 31, 2019, the number of individuals Border Patrol assessed as part of potentially invalid family units grew at a faster rate than the number of individuals apprehended in valid family units. Specifically, the number of adults and children Border Patrol assessed to be in potentially invalid family units rose by about 70 percent per month, on average, during this period. Meanwhile, Border Patrol data indicate the number of adults and children in valid family units rose by about 53 percent per month, on average. However, some of the family units that CBP assessed to be potentially invalid are subsequently found to be valid, according to ORR and ICE officials. According to ORR officials and records of UAC involved in family separations from June 28, 2018, through June 28, 2019, ORR was aware of only 46 cases in which CBP referred a child to ORR care because CBP had assessed the family unit to be invalid. In at least 10 of those cases, the family was later determined to be valid and the child reunited with his or her separated parent as of June 2019, according to ORR officials. Anecdotally, ICE headquarters officials stated that there are occasionally cases in which CBP referred a child to ORR because agents or officers assessed the family to be an invalid family unit, but ICE or ORR later determined the family was valid and eligible to be reunified. For example, ORR's records on family separations included instances in which the validity of family relationships was determined through DNA testing. ICE officials stated that ICE officers are able to conduct additional research about the validity of family relationships, as needed, once an adult has been transferred to ICE custody and the child to ORR. However, ICE does not track how often potentially invalid family units are later assessed to be valid and reunited and, therefore, could not provide an exact number of how often this has occurred. DHS and ICE officials have tracked the outcomes of some deployments of ICE officers to help CBP assess the validity of family relationships. Specifically, the Acting Secretary of Homeland Security testified before the Committee on Oversight and Government Reform in the House of Representatives on July 18, 2019, that CBP agents and officers referred 2,475 family unit members they suspected had invalid family relationships for an additional assessment by ICE officers who, among other training and skills, have specialized forensic interviewing skills. The ICE officers assessed 352 of the 2,475 individuals (approximately 14 percent) to be invalid family members. Additionally, an ICE official testified before the Senate Committee on Homeland Security and Governmental Affairs on June 26, 2019, and described a May 2019 pilot that involved voluntary rapid DNA testing for some individuals. According to this official, in 16 of the 84 family units tested (around 19 percent), the adult proved not to be the parent of the child with whom he or she arrived.
According to ICE officials, family units that ICE determined, based on the available evidence, to have valid family relationships while still in CBP custody remained together as a family unit and were not separated. On June 27, 2018, the CBP Commissioner issued a policy memorandum requiring that fraudulent claims of parental or legal guardianship relationship be well documented to support such claims; however, CBP does not have guidance to clarify how agents and officers are to fulfill that requirement. Border Patrol and OFO headquarters officials indicated that taking the aforementioned steps to record information in agency data systems meets the CBP policy requirement for documentation. In addition, according to Border Patrol and OFO officials, agents and officers also record details of the apprehension on the Form I-213, which is required for all individuals. However, neither Border Patrol nor OFO has guidance about whether or what details about a family unit being assessed as potentially invalid should be included on the Form I-213. Learning about the details of these cases and why CBP made its assessment is important to ICE officials in the event an adult refutes the assessment and ICE must take additional steps to determine the validity of the family relationship. ICE officials can view the information CBP agents and officers record on the Form I-213, since ICE officers can access the form in a database ICE shares with CBP. However, the headquarters official responsible for coordinating ICE's family and juvenile programs stated that the level of detail included in the forms varies by location and the narrative often does not include details about the reason why CBP considered a family unit potentially invalid. ORR officials also stated this information would be helpful to ORR because it may be relevant to the decisions ORR staff make for UAC, such as selecting sponsors. ORR intake staff told us that the documents they receive from CBP accompanying UAC referrals typically do not contain narrative information about agents' or officers' concerns about potentially invalid family relationships. While CBP tracks cases of potentially invalid family units in its data systems, this tracking does not (1) document the circumstances to support the assessment of invalidity or (2) provide complete and timely information for ORR and ICE to help them fulfill their responsibilities, including reviewing cases in which CBP initially determined a family to be invalid but further investigation is needed. Rather, CBP's data systems only track the assessment of invalidity by allowing agents to select that as a reason from a set of options, but do not track the circumstances to support that assessment. However, CBP policy directs that these cases be well documented to support such claims. Standards for Internal Control in the Federal Government states that management should use quality information to achieve the entity's objectives. In doing so, management should identify the information needed to achieve objectives and address risks, and should consider the expectations of both internal and external users. Providing guidance on what narrative information Border Patrol agents and OFO officers are to document on a child's and the accompanying adult's Forms I-213 about potentially invalid family units could help better ensure that the events are well documented to support such assessments, in accordance with CBP policy.
Further, this could help ensure that ICE and ORR officials have the relevant details they need to make decisions for adults and children in their custody, including reuniting valid family units, where appropriate, before adults are removed from the United States. <3.4. CBP Developed Policies and Procedures for Family Separations, but Border Patrol and OFO Do Not Have Sufficient Controls to Ensure Information Is Accurately and Consistently Captured> CBP, Border Patrol, and OFO have developed policies and procedures for those agents and officers responsible for recording and approving family separations; however, Border Patrol and OFO do not have sufficient controls to ensure that (1) Border Patrol agents are accurately and consistently recording family separations in their data systems, (2) Border Patrol's and OFO's data systems accurately capture separation reasons that are consistent with CBP policy, and (3) required forms include sufficient details about the circumstances of the separations. Regarding policies and procedures for family separations, according to CBP's June 2018 policy, a Border Patrol watch commander, or an official in an equivalent position, must approve every family separation. Border Patrol and OFO officials told us that higher-level officials, such as Border Patrol sector chiefs or Port Directors, are often involved in decisions to separate family units. A 2015 CBP policy requires that agents and officers record family separations in agency data systems. Further, after updating data systems to track family separations in 2018, as previously described, Border Patrol and OFO issued written guidance to agents and officers with specific instructions on how to record family separations in their data systems. For example, Border Patrol issued guidance about how to record family separations in its data system in May 2018, August 2018, and April 2019. In addition, Standards for Internal Control in the Federal Government states that management should use quality information to achieve the entity's objectives and identify the information needed to address risks. In doing so, managers should also consider the expectations of both internal and external users when collecting information. Further, changes in conditions affecting the entity and its environment often require managers to revise the internal control system on a timely basis to maintain its effectiveness. <3.4.1. Recording Border Patrol's Family Separations> Our analysis of Border Patrol and ORR data indicates that Border Patrol agents have not accurately and consistently recorded family separations in the data systems. Specifically, we reviewed a random, nongeneralizable sample of 40 ORR records for UAC involved in family separations between June 28, 2018, and March 31, 2019, and found matches for all 40 of the children in Border Patrol apprehensions data. Among the 40 records, we identified cases in which agents had not documented the family separation in Border Patrol's data system, as required by CBP and Border Patrol policy. Specifically, Border Patrol data indicated the agent had not processed the separation in the Border Patrol data system for 10 of the 40 UAC involved in family separations. That is, in these cases, Border Patrol agents processed the parents and children together with a family unit number, but did not take the necessary steps in the system to separate them and document the reason why the separation occurred.
We shared the results of our analysis with Border Patrol officials, and these officials acknowledged that the discrepancy between Border Patrol and ORR data on family separations may be attributable, in part, to human error; that is, agents had not correctly recorded family separations in Border Patrol's data system. However, the officials were unsure of the extent of the problem. Thus, it is unclear whether Border Patrol has accurate records of all separated parents and children in its automated data system. Border Patrol officials stated that data entry errors may have grown with the increased processing demands and strained resources faced by Border Patrol as the volume of family units apprehended increased in fiscal years 2018 and 2019. However, as mentioned, federal internal control standards provide that changes in conditions, such as the increased processing demands agents faced during periods of increased apprehensions of family units, often require managers to revise the internal control system. Developing and implementing additional controls to ensure that Border Patrol agents accurately record family separations in the data system, consistent with CBP and Border Patrol policies, would better enable Border Patrol to maintain complete and accurate information on all family separations. For example, an additional control could be to require Border Patrol or OFO managers conducting supervisory review of each apprehension to check that family separations have been accurately recorded in the data system. <3.4.2. Recording Reasons for Family Separations in Border Patrol and OFO Systems> CBP, Border Patrol, and OFO have policies and procedures in place for those agents and officers responsible for approving family separations and recording the reasons in agency data systems. On June 27, 2018, the CBP Commissioner issued a memorandum to the Chief of the Border Patrol and to the Executive Assistant Commissioner of OFO to provide direction on complying with the June 26, 2018, federal court order in Ms. L. v. ICE that generally prohibits the government from separating parents from their children, including the potential reasons that may warrant continued family separations. Specifically, the memorandum states that separations may occur only for the following reasons: (1) the parent has criminal convictions for violent misdemeanors or felonies, (2) CBP plans to refer the parent for a felony prosecution, (3) the parent poses a danger to the child, or (4) the parent has a communicable disease. On June 29, 2018, OFO issued a policy memorandum reiterating the potential separation reasons included in CBP's June 27, 2018, policy memorandum. According to Border Patrol headquarters officials, Border Patrol did not issue any further implementing guidance. Border Patrol and OFO officials stated that agents and officers are to use all available information to determine whether a family separation is warranted. Such information may include available birth certificates, personal observations of the family unit's behavior, results of background checks for criminal and immigration history, and results from available medical assessments. Border Patrol and OFO officials stated that, in some instances, agents and officers may not have complete information, such as when a database indicates a parent's arrest but does not indicate whether he or she was convicted of the charge, but that agents and officers are to weigh the totality of the circumstances.
For situations in which agents and officers are unsure whether to separate a family, CBP's policy states that agents and officers should contact their local Office of Chief Counsel for guidance. Although Border Patrol and OFO data systems allow agents and officers to select among options to indicate the reason for a family separation, the reasons available in the systems do not fully align with CBP policy. For example, Border Patrol's data system does not include an option indicating that the parent poses a danger to the child, one of the reasons articulated in the Commissioner's June 2018 memorandum. Table 8 shows how the separation reasons available in Border Patrol and OFO data systems compare with the potential separation reasons established in CBP's June 2018 family separations policy. Border Patrol and OFO headquarters officials stated they were unsure why the separation reasons available in the data systems do not fully align with CBP policy on family separations, but stated that the data system reasons have an implicit link to CBP policy. They stated that Border Patrol and OFO officials review and approve each family separation to ensure it meets CBP policy. In addition, OFO headquarters officials stated that OFO issued guidance in June 2018 that reiterated CBP's policy on potential reasons for family separations. However, as illustrated in table 8, it is sometimes not clear how separation reasons in Border Patrol's and OFO's data systems align with CBP policy. For example, Border Patrol's option for "family member prosecuted for other reasons" does not provide enough information to determine whether Border Patrol is referring a parent for the prosecution of a felony, as required by CBP policy. Both Border Patrol and OFO have previously changed separation reasons in agency data systems, and in June 2019 Border Patrol officials stated they continue to analyze the need for updates. As of October 2019, these officials stated that Border Patrol and OFO do not have any current plans to update the separation reasons in their data systems. CBP officials who conduct supervisory reviews of files and approve family separations rely, in part, on the information agents and officers record in Border Patrol and OFO data systems when conducting reviews and sharing information, according to Border Patrol and OFO officials. Updating Border Patrol's and OFO's data systems to ensure that options for separation reasons clearly align with CBP policy could help ensure that CBP makes decisions about family separations in accordance with CBP policy and that the data CBP collects reflect those decisions. <3.4.3. Recording Information about Family Separations on Border Patrol's Form I-213> CBP's policies related to family separations do not include written requirements that agents and officers record a description of the family separation. However, Border Patrol and OFO officials stated that they expect agents and officers to record the circumstances surrounding family separations in a narrative section of each family member's Form I-213. Yet we found that Border Patrol agents are not consistently recording detailed information about family separations on the Form I-213, the official record of the apprehension. Specifically, we analyzed a nongeneralizable sample of Forms I-213 for family units whom Border Patrol separated and found that, for most of the family separation cases, one or more of the selected forms had missing or inconsistent information in the narrative descriptions.
Specifically, we reviewed a sample of Forms I-213 for 23 family separation cases, involving 27 children and 25 parents. These separations occurred across each of the Border Patrol's nine southwest border sectors between June 28, 2018, and March 30, 2019. In particular, we assessed (1) whether the forms included a reason for the separation, (2) whether the descriptions of the cases provided enough information to determine whether or not the reason met CBP policy, and (3) whether the information recorded for each separation case was consistent across the parents' and children's forms. On the basis of our review of the forms, we found there was missing or inconsistent information on one or more of the family members' forms for 18 of the 23 separated family units. Specifically, we found that for three of the 23 family separations, there was no indication on one or more of the family members' forms that a separation had occurred, while for the other 20 family separations, all of the family members' forms included some indication of a family separation; seven of the 25 parents' forms and seven of the 27 children's forms did not contain a narrative description explaining why the separation occurred; and 17 of the 25 parents' forms and 12 of the 27 children's forms included sufficient narrative information to determine whether the separation met CBP policy. In addition, even among those forms with sufficient information to determine whether the reason met CBP policy, we found inconsistencies. For example: Three parents' and four children's forms included information implying that the parent could potentially present a danger to the child, but the actual separation reason noted on the form was something different, such as the parent's criminal history. For example, the criminal history information provided on one parent's Form I-213 included information about an arrest for kidnapping, but did not include evidence that the arrest resulted in a conviction, making it difficult to determine whether the separation aligned with CBP policy and, in particular, what reason the separation would fall under. For nine of the 23 family separations, the separation reason was listed as the parent's criminal history on one of the family members' forms, but there was missing or inconsistent information on the other family members' forms. According to ICE officials responsible for monitoring family separations and reunifying family units where necessary, the narrative information on the Form I-213 is ICE officers' primary source of information about the circumstances of a family separation. ICE officers need detailed information, according to officials, to help conduct additional research to confirm whether a separation was warranted or to respond to requests for information from ORR. In addition, ORR officials told us that they would benefit from CBP recording certain information on a child's Form I-213, such as the type and timing of a parent's criminal conviction or whether the parent may pose a danger to the child, and sharing that information to better inform ORR's decisions about where and with whom to place UAC when they leave ORR custody.
However, based on our review of CBP documents, CBP has not issued guidance on what descriptive details surrounding family separations agents and officers are to record on the Form I-213. In addition, Border Patrol officials stated that they do not have written guidance for agents about what information should be captured on the Form I-213. Conversely, OFO issued guidance stating that the Form I-213 must be annotated with the reason for the family separation; the name of the approving manager; and, at a minimum, the biographical information and A-number (a unique identifier for noncitizens apprehended by CBP) of the parent(s) and children. Border Patrol and OFO headquarters officials acknowledged that the level of detail documented on the Form I-213 about separations may vary by agent or officer, and stated that they rely on their supervisory review processes to ensure that family separations are consistent with CBP policy. Border Patrol headquarters officials attributed missing separation reasons or inconsistent information about the circumstances of the family separations on the Forms I-213 to multiple factors. Specifically, they acknowledged that Border Patrol has not issued guidance specifying what descriptive details agents should include on the forms and does not have, for example, specific information that supervisors check for during their review of each individual's file. In addition, Border Patrol headquarters officials noted that there have been great demands placed on Border Patrol agents to expedite processing during periods of high numbers of family unit apprehensions and crowding at Border Patrol facilities. However, as noted previously, federal internal control standards state that changes in conditions affecting the entity and its environment, like an increase in the number of family units apprehended along the southwest border, often require management to change the entity's internal control system, as existing controls may not be effective for meeting objectives or addressing risks under changed conditions. As of October 2019, Border Patrol and OFO had no plans to (1) implement additional controls to ensure that reasons for family separations are included on individuals' Forms I-213 or (2) issue guidance to agents and officers about what descriptive information about family separations they should record on the forms. Developing and implementing additional controls to ensure that Border Patrol agents and OFO officers include a reason for the family separations on the parent's and child's Forms I-213 could help CBP ensure its agents and officers are separating family units in accordance with CBP policy. For example, an additional control could be to require the Border Patrol or OFO manager reviewing the information recorded on the Form I-213 to check that certain information, such as the specific separation reason with relevant details, has been included. In addition, without additional guidance on what specific details Border Patrol agents and OFO officers are to include in the narrative information about the family separation events on the parent's and child's Forms I-213, ICE and ORR do not have complete or consistent information to use in determining when it may be necessary to reunify family units in accordance with the Ms. L. v. ICE court order. <4. ICE Developed Procedures for Processing Family Units, But Does Not Systematically Track ICE's Family Unit Separations in Its Data System> <4.1.
ICE Developed Procedures for Processing Family Units Referred from CBP> ICE has procedures for processing family units that CBP apprehended and for releasing family units from ICE custody (see fig. 6). <4.1.1. Procedures for Taking Family Units into Custody from CBP> According to ICE field office officials, upon referral by CBP, ICE officers generally review the family unit's files to ensure that CBP agents and officers completed the forms sufficiently and, if not, ICE officers can return the case to CBP. For example, ICE officers typically ensure that the appropriate family unit member signed his or her copies of paperwork provided by CBP. Additionally, according to ICE field office officials, ICE officers have the discretion to decline the transfer of a family unit that they determine is not suitable for detention in a family residential center or for release. When ICE accepts CBP's referral of a family unit and receives custody from CBP, ICE officers are to enter information about each family unit member in ICE's data system, both for family units that ICE plans to detain and those it plans to release. ICE's data system pulls some information from CBP's data systems. For example, ICE officers can find basic biographic information about individual family unit members apprehended by Border Patrol by searching for an individual by his or her A-number, a unique identifier. In addition, ICE officers are to enter new information, such as the location(s) where officers detained or released the individual family unit members and the documents officers served to them, among other things. For information about the family unit members that ICE detained at its family residential centers, see appendix II. <4.1.2. Procedures for Releasing Family Units from ICE Custody> Family units that CBP placed into expedited removal and that are detained in ICE family residential centers, and whose members express an intention to apply for asylum, a fear of persecution or torture, or a fear of return, undergo screenings conducted by an asylum officer. These screenings occur during detention and are to determine whether one or more family unit members have a credible fear of persecution or torture. The outcome of the screening (and review by an immigration judge, if requested after the screening) determines whether ICE will remove the family unit from the United States or release the family unit into the interior of the country to pursue immigration relief or protection in full immigration proceedings. Additionally, as stated previously, children generally may be held in federal immigration detention for no more than 20 days pursuant to the Flores Agreement. Thus, if members of the family unit do not receive a credible fear determination within 20 days, ICE generally releases the family unit into the interior of the United States with a notice to appear before an immigration court, which initiates full immigration proceedings. From fiscal year 2015 through fiscal year 2018, ICE data indicate that 99 percent of family unit members who were detained in one of ICE's family residential centers were subsequently released by ICE into the interior of the United States.
According to ICE headquarters and field office officials, while a family unit is at a family residential center, ICE officers typically assist family units with their post-release plans by asking heads-of-household to identify contacts in the United States, such as relatives, with whom the family unit can stay after leaving ICE custody. These contacts pay for the family unit's travel expenses if the family cannot purchase bus tickets, for example, and ICE officers help coordinate these plans and typically drive the family unit to the bus station upon release, according to ICE officials. For family units who are not placed in a family residential center, ICE's procedures for assisting them with their post-release plans have varied based on local conditions. ICE headquarters and field office officials explained that, prior to October 2018, when the volume of family units arriving at the southwest border began to increase significantly, ICE officers sometimes coordinated post-release plans for family units that did not stay at a family residential center. However, officials stated ICE has not had the resources to help family units with post-release plans since that time and instead has generally relied on nongovernmental organizations for this assistance. When ICE releases family units from its custody to await immigration court proceedings, ICE officers generally enroll the family unit's head-of-household in its Alternatives to Detention program. The program uses technology, such as ankle monitoring devices, to track the movement of the adult family unit members. ICE field office officials stated that the availability of ankle monitoring devices and the volume of family units arriving at the southwest border can affect whether or not ICE enrolls a family head-of-household in its Alternatives to Detention program. In addition to ankle monitoring devices, most family units are also released on orders that require heads-of-household to report telephonically or in person to ICE officers once they reach their destination in the United States. ICE officials stated the level of continued supervision by ICE officers is at the discretion of the ICE officer in charge of the family unit's case and may also depend on a variety of factors, such as whether the family unit entered the United States at or between ports of entry, whether the family unit received a positive credible fear determination, and the head-of-household's prior criminal and immigration record.
According to ICE officials, the narrative information in the comments field is not searchable within ICE's data system, and ICE does not have a mechanism, such as a drop-down menu, to systematically record a family unit separation or the reasons for any separations that occur in ICE custody. Thus, ICE cannot pull data from its system to track such separations. ICE headquarters officials stated that these methods are not an efficient or effective means of maintaining readily available data on family separations that occurred in ICE custody. According to ICE's policy for detained parents, such parents maintain their parental rights during removal proceedings. In particular, if ICE is removing a parent from the United States, field office directors or their designees are to accommodate, to the extent practicable, the detained parent's efforts to make arrangements for his or her minor child or children, including for the children to be removed with the parent. As such, before removing an adult from the United States, ICE officers are to check the individual's paper A-file, and specifically the individual's Form I-213, for any indication the adult arrived with a child, according to ICE headquarters officials. In addition, according to ICE officials, ICE officers are to review the individual's record in ICE's data system, where officers would be alerted to whether the individual had ever been separated from a child. Given the limitations in ICE's data system, officers would need to know to review the narrative information in the comments field within the individual's records to determine whether he or she had been separated from a child in ICE custody; however, none of ICE's guidance documents explain that officers are to look for such information in the narrative comments field. Further, ICE officials told us that officers are not required to check the spreadsheet maintained at ICE headquarters or contact headquarters officials prior to removing adults from the United States. As of November 2019, ICE headquarters officials stated they are working with the ICE data unit to create a new module that would enhance ICE's ability to link and track family units in its data system, including capturing information on families that ICE separates. According to ICE officials, ICE has established a project team for this effort and hopes to deploy the updates in the fourth quarter of fiscal year 2020. However, ICE did not provide documentation with details, such as a project plan with time frames for deploying these system updates, to verify these plans. Standards for Internal Control in the Federal Government states that management designs the entity's information system and related control activities to achieve objectives and respond to risks. Further, management designs the entity's information system and the use of information technology by considering the defined information requirements for each of the entity's operational processes. Given that ICE did not provide documentation with details about planned changes to ICE's data system, it is too early to determine whether and when ICE's planned system enhancements will include a mechanism that allows ICE officers to systematically track family separations that occur in ICE custody.
Without a mechanism in its data system to systematically track the family units it separates, ICE is unable to ensure that separated parents who are subject to removal are able to make arrangements for their minor child or children (including being removed with them), as provided in ICE policy. <5. DHS and HHS Have Interagency Agreements with Roles and Responsibilities Regarding UAC, but Long-Standing Information Sharing Gaps Remain> DHS and HHS have developed interagency agreements for the transfer and placement of UAC between the two departments; however, information sharing gaps remain. In 2015, we reported that the interagency process to refer UAC from DHS to HHS was inefficient and vulnerable to errors because it relied on emails and manual data entry. In addition, each DHS component (Border Patrol, OFO, and ICE) submitted referrals for UAC to HHS's ORR in a different way. To increase the efficiency and improve the accuracy of the interagency referral and placement process for UAC, we recommended the Secretaries of Homeland Security and Health and Human Services jointly develop and implement a documented interagency process with clearly defined roles and responsibilities for all agencies involved in the referral and placement of UAC in HHS shelters. DHS and HHS concurred with our recommendation. Since our 2015 report, DHS and HHS have developed two documents to guide interagency procedures related to the processing of UAC. Specifically, in April 2018, HHS and DHS established a memorandum of agreement regarding information sharing for UAC. In addition, on July 31, 2018, DHS and HHS issued a Joint Concept of Operations to memorialize interagency policies, procedures, and guidelines related to the processing of UAC. According to the April 2018 memorandum of agreement, among other things, ICE and CBP are to provide ORR with information at the time of the referral and documents when the child is transferred to ORR, including whether the child was traveling with other individuals and the Form I-213, so that ORR can make informed decisions for the child. Specifically, once a child has been transferred to ORR, the agency begins the process of identifying a potential sponsor for the child and, when a potential sponsor is identified, ORR requests information about that sponsor. At this step, according to the memorandum of agreement, ICE is to conduct a screening of the potential sponsor that includes, at a minimum, a biographic criminal check of national databases, a check for warrants of arrest, and an immigration status check. DHS is to provide HHS with information necessary to conduct suitability assessments for sponsors, including information to which HHS would not otherwise have access. In addition, to the extent permitted by law, and consistent with policy, DHS is to report to ORR the results of any investigations it conducts that are relevant to ORR's determinations concerning the care and placement of UAC. According to the July 2018 Joint Concept of Operations, ICE or CBP should use ORR's data system to refer UAC to ORR whenever feasible. If ORR's data system is not available, DHS may email ORR a referral form along with any supporting documentation. DHS is also to provide ORR with specific documents, including the Form I-213, when the child is transferred to ORR. In the event a child is separated from a parent or legal guardian, CBP or ICE is to enter this information into ORR's data system, according to the Joint Concept of Operations.
CBP or ICE is also to include contact information for parents, legal guardians, or adult relatives, as this information can assist in ORR's reunification process, if needed. ORR is to contact the child's family to, among other things, determine whether the child has a potential sponsor who resides in the United States, and to facilitate visitation and contact with family members, regardless of their immigration status. Finally, DHS is to preserve the unity of families during repatriation, according to the Joint Concept of Operations. The memorandum of agreement and Joint Concept of Operations state the roles and responsibilities of DHS and HHS and their components and describe some of the information to be shared between the agencies regarding the placement of UAC, among other things. However, DHS and HHS officials' statements indicate that, in practice, they have not resolved long-standing differences in opinion about whether and how agencies are to share information, and what type of information is needed to inform decisions about the care and placement of UAC, including those processed as UAC after separation from a parent. We found that DHS has not consistently provided information and documents to ORR as specified in the memorandum of agreement and the Joint Concept of Operations. Further, ORR officials identified additional information they believe ORR needs from DHS at the time of referral (or soon thereafter) to inform their decisions about placing children with sponsors and reunifying separated families, when necessary. <5.1. Information Sharing Processes as Described in the Interagency Agreements> With regard to the information sharing expectations established in the interagency agreements, as of September 2019, we found that certain documents were not being shared or mechanisms for sharing information were not being used consistently. For example, Border Patrol has taken steps since our 2015 report to improve its referral process, so that Border Patrol's referral information is uploaded directly into ORR's data system, in keeping with Joint Concept of Operations requirements. However, the referral screens in Border Patrol and ORR data systems do not fully align, which has required ORR headquarters staff to manually enter some required information into the ORR data system. That is, Border Patrol's referral screen does not include many of the fields (areas to input specific information) included in ORR's referral screen. Border Patrol and ORR officials offered different perspectives on why the information on the referral screens in the data systems does not align. Specifically, ORR officials stated that Border Patrol has not updated its referral screen to match updates that ORR has made. For example, in July 2018, ORR added a checkbox in its data system for DHS agencies to indicate whether a UAC had been separated from a parent, as necessary. Border Patrol took steps in October 2018 to similarly update its referral screen, so the indication of a family separation would be automatically uploaded to ORR's data system with the referral. However, additional steps must be taken by ORR for its data system to upload the information, according to Border Patrol officials. Meanwhile, if ORR staff see some indication of a family separation in the Border Patrol referral form, such as in a narrative text field, ORR staff will typically add that information to the records in their data system manually.
Border Patrol has not taken additional steps to update other parts of its system's referral screen to align with ORR's data system because ORR's data system does not comply with DHS security standards, according to Border Patrol officials. ORR officials said they had not been made aware of any security concerns. However, concern about system security standards is a long-standing issue that we noted in our 2015 report. As of October 2019, Border Patrol and ORR did not have any plans to collaborate further to improve automated referrals for UAC. Further, as of October 2019, ORR officials told us that ICE and OFO officials are not consistently accessing the ORR data system to submit a referral for a UAC. Specifically, ICE and OFO officers in certain locations use ORR's data system to submit a referral infrequently and instead use a form, which ORR last updated in 2013, that they attach to emails to refer UAC. However, ORR officials stated their expectation is that email referrals are to be used only occasionally, such as if DHS officials encounter technical problems using ORR's data system. ICE and OFO officials stated that their officers only rarely make referrals to ORR and sometimes face constraints that prohibit them from using ORR's data system to submit the referral. For example, ICE officials stated that officers generally use ORR's data system for referrals, but that, on some occasions, an officer's password to access ORR's data system has expired due to infrequent use, and the officer must email the referral. In addition, OFO and ICE officials stated that their officers who have access to the ORR data system to make referrals are not always available, so in those instances, other officers must email a referral form to ORR. OFO and ICE headquarters officials were unsure how often their officers used email to send ORR referrals, rather than directly accessing ORR's data system. ORR officials also stated that even when ICE and OFO use ORR's data system to submit the referral, consistent with the Joint Concept of Operations, the officers are not consistently marking the separations checkbox in ORR's data system for those children involved in family separations. As a result of these challenges, ORR officials said they must often manually enter referral information from ICE or OFO into the ORR data system, including any indication of a family separation or that the child was apprehended with an adult. ORR officials also stated that DHS (CBP and ICE) is not routinely submitting the child's Form I-213 to ORR, as specified by both interagency agreements. Border Patrol and OFO headquarters officials stated they have concerns about sharing sensitive information, including in referral forms or on the Form I-213, with ORR headquarters or contracted shelter staff because those staff are not law enforcement officers. ORR headquarters officials stated that they have worked with other federal partners to ensure that only ORR officials with the proper authorization receive sensitive materials. These officials said they are interested in working with DHS to set up a similar process so ORR can receive the information it needs to make decisions for UAC. For example, ORR headquarters officials stated that they would explore options for updating DHS and HHS data systems so the child's Form I-213 could be shared directly between data systems. This would help ensure that only ORR staff who have the proper authorization would have access to the forms, according to HHS officials.
In addition, DHS and HHS provided different perspectives on the expected information sharing procedures included in the interagency agreements. For example, ORR headquarters officials stated they interpret existing interagency agreements to apply to information sharing on all UAC, regardless of whether they were apprehended alone or with an adult. By contrast, Border Patrol headquarters officials stated that the interagency agreements apply to UAC involved in family separations, but not to those children referred to ORR after Border Patrol assessed a family relationship to be invalid. In addition, ICE headquarters officials stated that the interagency agreements were drafted to reflect the circumstances of children apprehended alone, not separated children or those CBP assesses to have invalid family relationships. ICE officials also stated they no longer believe the April 2018 memorandum of agreement is valid for any UAC, because it was developed to address a process ORR no longer requires. <5.2. Additional Information Sharing Needs Identified by ORR> ORR identified additional information sharing needs, some not covered by existing interagency agreements, to inform decisions regarding the care and placement of UAC. Specifically, this information includes details about the circumstances of family separations and information about adults who were apprehended with children (who subsequently were designated as UAC). ORR officials stated that ORR and ICE require this information, collected by DHS, to (1) assess potential sponsors for placement of UAC and (2) reunify eligible separated families. Assessing Potential Sponsors. ORR officials stated that ORR needs additional information about parents and other adults accompanying a child (who is later designated as a UAC) at the time of apprehension to assess all potential sponsors with whom UAC will be placed as they await immigration proceedings in the United States. However, the Joint Concept of Operations contains limited details about what information should be shared between DHS and HHS about relevant adults. For example, the agreement states that ICE and CBP will provide ORR with contact information for parents, legal guardians, or adult relatives. However, the agreement does not, for example, require DHS to share the details of an adult's criminal history with ORR. In addition, Border Patrol headquarters officials stated that agents typically would not alert ORR to any concerns about invalid family relationships, as they do not believe that information is relevant. ORR officials stated they need detailed information about an accompanying adult to assess whether the adult could potentially pose a danger to the child, and this is not addressed in the Joint Concept of Operations. However, ORR officials stated that this information is often not included in DHS's referrals for UAC, and ORR sometimes learns about an accompanying adult from a child after placement in an ORR shelter. Reunifying Eligible Separated Family Units. To ensure compliance with the federal court injunction in the Ms. L. v. ICE litigation, ORR officials stated that they need to know enough details about (1) family separations or (2) situations in which CBP had concerns that a family relationship was invalid, to determine whether there are any family units potentially eligible for reunification.
If DHS and HHS determine that a parent will be reunified with a child, ORR is responsible for (1) verifying the validity of the family relationship and (2) determining whether the parent is fit or poses a danger to the child, according to ORR officials. For family unit reunifications, ORR has relied, in part, on the determinations made by DHS when the family was separated, according to these officials. However, ORR officials stated the information DHS provides about family separations is generally limited or provided inconsistently, often without enough detail for ORR to assess whether the family unit may be eligible for reunification. For example, the referral might state a family separation is due to the parent's criminal history, but ORR must follow up with ICE to specify the charge, determine whether the adult was convicted, or learn the date of the event. In addition, ORR may conduct family reunifications in accordance with ORR policies and procedures in other situations. For example, there have been cases in which families were separated, but DHS later dropped criminal charges against a parent it planned to prosecute, or a parent completed a hospitalization that had required the parent to be separated from his or her child. According to ICE policy, when ICE is removing a parent from the United States, that parent has the right to determine whether a minor child will be removed with him or her. ORR officials stated that, according to ORR policies and procedures, if the child is to be removed with the parent, ORR must assess (1) whether the family relationship is valid and (2) whether the parent presents a danger to the child. However, ORR officials stated that if this information is not provided at the time of referral, they must reach out to ICE officials to collect it. Further, ORR headquarters officials stated that ICE has removed adults from the United States who wished to be removed with their child or children in ORR custody before ORR could complete its assessment. However, neither ICE nor ORR could determine exactly how often or in what time frame these removals had occurred. DHS and HHS officials provided different perspectives on these information sharing challenges not covered within existing interagency agreements. ORR takes additional steps to collect information from ICE and CBP that ORR is not routinely receiving at the time of referral. This can extend the time that children spend in ORR custody, according to ORR officials. If ORR staff conducting intake duties have questions about UAC and any accompanying adults, ORR headquarters officials told us they typically first contact the local CBP officials who processed the apprehension. In April and August 2019, ORR officials said that some Border Patrol sectors are more responsive than others and that limited and inconsistent information sharing by DHS about separated children has led to delays in placement and release decisions for UAC. ORR staff also reach out to ICE's field office juvenile coordinators or ICE headquarters officials responsible for juvenile and family management. For example, ORR and ICE headquarters coordinate on a weekly basis via email to assess whether family separations are in compliance with federal court orders in the ongoing Ms. L. v. ICE litigation. Specifically, since February 2019, ORR and ICE have shared a spreadsheet tracking UAC who may have been involved in a family separation, according to ORR and ICE headquarters officials.
Further, ICE officials said they gather additional information, such as more details about the reason for a family separation, from the Form I-213 or by reaching out to CBP officials. They provide some of this information to ORR, as ICE officials noted that they recognize ORR needs such information to assist in its decision-making for UAC. ICE headquarters officials noted that they have found ways to provide more detailed information to ORR without sharing sensitive law enforcement information. It is through this vetting process that ICE and ORR assess potential family separations to reach a confirmed number of cases and the reasons for them, according to ICE and ORR officials. ORR headquarters officials stated that, from their perspective, it would be more efficient if CBP or ICE entered this information directly into ORR's data system at the time of referral, where possible, rather than sharing a spreadsheet via email. Specifically, ORR headquarters officials stated that they have experienced delays in releasing a child to a sponsor due to missing information about a parent or the inability to notify a parent in ICE detention about sponsorship decisions. By contrast, Border Patrol and OFO headquarters officials noted concerns about sharing sensitive information with ORR, particularly for adults apprehended with UAC. Border Patrol officials stated, for example, that Border Patrol does not share sensitive law enforcement information with a third party such as ORR. According to ICE headquarters officials, ICE officers sometimes conduct additional research after a child is referred to ORR, such as if CBP was unable to collect certain information before making a separation decision. ICE officials stated that, for their purposes, the current information sharing procedures in place are sufficient, but noted that ICE has added staff resources to keep up with the demands of those procedures. Specifically, until May 2019, there was one ICE headquarters official in the juvenile and family management unit responding to all of ORR's requests; ICE has since added another staff person to assist in responding to ORR's requests. As of October 2019, there were no plans to discuss these information sharing concerns further, according to ORR, CBP, and ICE officials. Leading practices of high-performing organizations include fostering collaboration both within and across organizational boundaries to achieve results. Further, agencies should work together to establish a shared purpose and goals; develop joint strategies or approaches that complement one another; and ensure the compatibility of policies, procedures, and other means to operate across agency boundaries. We have previously reported that written agreements, such as a memorandum of understanding or interagency agreements, can help facilitate collaboration by articulating roles and responsibilities, among other things. These types of written agreements are most effective when they are regularly updated and monitored, as we reported in 2012. While issuing the April 2018 memorandum of agreement and the July 2018 Joint Concept of Operations were important steps toward addressing the weaknesses we identified in our 2015 report, additional actions are needed to fully address our recommendation and increase the efficiency and improve the accuracy of the interagency referral and placement process for all UAC.
In addition, further DHS and HHS collaboration about information sharing methods and ways to enhance interagency agreements would better position ORR to make informed and timely decisions for UAC, including those separated from adults with whom they were apprehended. <6. Conclusions> As the number of CBP apprehensions of family units has risen markedly in recent years, DHS has developed policies and procedures for processing family units. For example, since 2015 CBP has introduced policies and procedures for collecting information about family units, which has increased the data it collects, including on family separations. However, DHS continues to face challenges in ensuring that it accurately and consistently tracks information about family units, including those it separates. Specifically, CBP training includes definitions of and guidance for processing family units that are inconsistent with CBP policy. Issuing updated training materials with correct definitions of and guidance for processing family units would help CBP ensure that its agents and officers are accurately tracking family units and, where applicable, family separations. In addition, CBP has policies and procedures related to concerns about the validity of a family unit, but it does not have written requirements about what information on these cases Border Patrol agents and OFO officers are to record. Without additional guidance about what details CBP agents and officers are to record on the required Form I-213, these cases will not be well documented, as required by CBP policy. Further, ICE and ORR officials do not have sufficient information to make decisions for the adults and children involved, including determining when reuniting valid family units is necessary. CBP has developed policies and procedures related to family separations, but additional controls would help Border Patrol and OFO ensure that information about these cases is accurately and consistently captured. By developing and implementing additional controls for tracking family separations, such as requiring checks during supervisory review that separations were documented properly, Border Patrol could better ensure it has accurate information about these cases, consistent with CBP and Border Patrol policies. Further, some of the options for separation reasons in Border Patrol's and OFO's data systems do not fully align with CBP policy. Without updating the reasons agents and officers have available to select from, CBP is not well positioned to determine whether its officials are separating family units for reasons consistent with CBP policy. In addition, during our review of selected Forms I-213 for a sample of separated family units, we found that agents did not always include the reason for the separation or a detailed description of the circumstances of the case. Developing and implementing additional controls to check that Border Patrol agents document family separations and why they occurred on family unit members' Forms I-213 could help Border Patrol ensure its agents are separating family units in accordance with CBP policy. Additionally, without guidance on what specific information about the circumstances of the family separations Border Patrol agents and OFO officers are to include on the parent's and child's Forms I-213, ICE and ORR do not have sufficient information to determine, among other things, when family reunifications are required.
During our review of ICE's policies and procedures for processing family units, we found that it does not systematically track the family units it separates in its data system. By updating its data system to do so, ICE would be better able to ensure that separated parents, who are subject to removal, are able to make arrangements for their minor child or children, including being removed with them, consistent with ICE policy. While DHS and HHS have developed written interagency agreements related to the transfer and care of UAC, as we recommended in 2015, we found that information sharing gaps between the two agencies remain. As such, continuing their efforts to address our prior recommendation to jointly develop and implement a documented interagency process for all agencies involved in the referral and placement of UAC could help DHS and HHS increase the efficiency and improve the accuracy of these processes for UAC. Moreover, additional DHS and HHS collaboration about information sharing would help provide ORR with additional information, including about accompanying adults, to make informed and timely decisions for UAC. <7. Recommendations for Executive Action> We are making a total of nine recommendations, including six to CBP and one each to ICE, DHS, and HHS. Specifically:
The CBP Commissioner should issue updated Border Patrol and OFO training materials that reflect the correct definition of a family unit and guidance for recording that information. (Recommendation 1)
The CBP Commissioner should provide written guidance to Border Patrol agents and OFO officers about what narrative information should be recorded on the child's and the accompanying adult's Forms I-213 to document cases in which CBP determines that a parent-child relationship may be invalid. (Recommendation 2)
The CBP Commissioner should develop and implement additional controls to ensure that Border Patrol agents accurately record family unit separations in its data system. (Recommendation 3)
The CBP Commissioner should update Border Patrol's and OFO's data systems to ensure data captured on family unit separation reasons clearly align with CBP policy. (Recommendation 4)
The CBP Commissioner should develop and implement additional controls to ensure that Border Patrol agents include a narrative description of a family unit separation on the parent's / legal guardian's and child's Forms I-213, including the reason for the separation. (Recommendation 5)
The CBP Commissioner should provide guidance to Border Patrol agents and OFO officers on the narrative information they are to include about family unit separation events on the parent's / legal guardian's and child's Forms I-213. (Recommendation 6)
The ICE Director should develop and implement a mechanism to systematically track in its data system the family units ICE separates. (Recommendation 7)
The Secretary of Homeland Security, jointly with the Secretary of Health and Human Services, should collaborate to address information sharing gaps identified in this report to ensure that ORR receives information needed to make decisions for UAC, including those apprehended with an adult. (Recommendation 8)
The Secretary of Health and Human Services, jointly with the Secretary of Homeland Security, should collaborate to address information sharing gaps identified in this report to ensure that ORR receives information needed to make decisions for UAC, including those apprehended with an adult. (Recommendation 9)
<8. Agency Comments> We provided a draft of this report to DHS and HHS for review and comment. DHS and HHS provided formal, written comments, which are reproduced in full in appendixes III and IV, respectively. DHS and HHS also provided technical comments on our draft report, which we incorporated, as appropriate. DHS concurred with our recommendations and described actions planned or underway to address them. For example, in response to several of our recommendations that CBP provide additional or revised guidance and training to agents and officers, DHS stated that Border Patrol issued a memo in January 2020 to clarify what information agents are to record for family unit members, potentially invalid family units, and subsequent separations, if applicable. DHS also described planned updates to OFO data systems to automatically record certain information in family unit members' Forms I-213, such as the names and identifying information of all family members apprehended together. Regarding our recommendation that CBP should update Border Patrol's and OFO's data systems to ensure the options for family separation reasons clearly align with CBP policy, DHS provided documentation of guidance that OFO and Border Patrol issued about data system updates. DHS requested that we consider the recommendation implemented. We will review the information and documents DHS provided to assess the extent to which CBP fully addressed this recommendation. Regarding our recommendation that ICE develop and implement a mechanism to track its separations in its data system, DHS stated that ICE has efforts underway to enable ICE officers to track separations and reunifications in its data system throughout ICE's immigration enforcement process. DHS and HHS also both concurred with our recommendations that the agencies collaborate to address information sharing gaps identified in this report, and described plans to coordinate and reach agreement on information sharing practices. We will review the agencies' actions and planned efforts, including any documentation provided by DHS and HHS, and the extent to which they address each of our nine recommendations. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from its issue date. At that time, we will send copies of this report to the appropriate congressional committees, the Acting Secretary of Homeland Security, and the Secretary of Health and Human Services. In addition, the report is available at no charge on the GAO website at https://gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or gamblerr@gao.gov. Key contributors to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology The objectives of this report were to examine (1) what U.S. Customs and Border Protection (CBP) data indicate about the numbers and characteristics of family units who have been apprehended along the southwest border, (2) the extent to which CBP has developed and implemented policies and procedures for processing family units apprehended along the southwest border, (3) the extent to which U.S.
Immigration and Customs Enforcement (ICE) has developed and implemented policies and procedures for processing family units apprehended along the southwest border, and (4) how the Department of Homeland Security (DHS) shares information with the Department of Health and Human Services (HHS) about unaccompanied alien children (UAC), including those children who initially arrived with and were separated from their parents or other adults. To address these objectives and to observe agents and officers processing families, we conducted site visits at U.S. Border Patrol stations and Office of Field Operations (OFO) ports of entry in Arizona, California, and Texas, from July 2018 to October 2018. We also visited ICE family detention facilities, known as family residential centers, in Dilley and Karnes City, Texas in February 2019. Specifically, in Tucson, Arizona we visited Border Patrol s Tucson sector headquarters and OFO s Tucson Field Office headquarters and the Nogales port of entry. In the San Diego, California region, we visited Border Patrol s San Diego sector headquarters and Imperial Beach station and the San Ysidro port of entry. In the Rio Grande Valley, Texas region, we visited CBP s Central Processing Center, Border Patrol s McAllen station, and the Hidalgo and Brownsville ports of entry. In the San Antonio, Texas region, we visited ICE s San Antonio field office headquarters, South Texas Family Residential Center, and Karnes County Residential Center. During these site visits, we interviewed Border Patrol, OFO, and ICE officials, observed agents and officers processing families, and toured CBP and ICE facilities, among other activities. To select these locations, we reviewed CBP data on Border Patrol and OFO apprehensions along the southwest border, including family unit apprehensions, and identified specific locations that had the greatest increase in the number of apprehensions of individuals from fiscal year 2016 to 2017. We also considered the geographical proximity of multiple CBP and ICE facilities to maximize observations. Our observations during site visits are not generalizable to all Border Patrol, OFO, and ICE operations along the southwest border, but provided us the opportunity to learn more about how policies and procedures for processing families are conducted and how CBP and ICE coordinate their efforts. In addition, to address all of our objectives, we interviewed DHS and HHS officials. Specifically, we met with DHS officials from CBP s Office of the Commissioner and Office of Chief Counsel; Border Patrol s Law Enforcement Operations Directorate and Strategic Planning and Analysis Directorate; OFO s Admissibility and Passenger Programs office; ICE s Enforcement and Removal Operations (including the Juvenile Family and Residential Management Unit, Field Operations, Alternatives to Detention, and Law Enforcement Systems and Analysis) and ICE s Office of the Principal Legal Advisor. We also interviewed HHS officials from the offices of the Assistant Secretary for Preparedness and Response and Office of Refugee Resettlement (ORR). To address our first objective and describe what CBP data indicate about the numbers and characteristics of family units who have been apprehended along the southwest border, we reviewed record-level apprehensions data from CBP s Border Patrol and OFO for individuals determined to be inadmissible or potentially subject to removal. 
We collected data for fiscal year 2016 through the second quarter of fiscal year 2019 because Border Patrol and OFO began to systematically collect data on individuals apprehended as part of a family unit in fiscal year 2016. The second quarter of fiscal year 2019 was the most current data available at the time of our review. We used the number of apprehensions, rather than the number of individuals or family unit members, as the unit of analysis we reported because an individual may have been apprehended multiple times in the same year. The data we report on apprehensions of family unit members include individuals in family units CBP later separated (for reasons other than concerns about validity of the family relationship) from April 19, 2018, when Border Patrol and OFO began collecting data on family separations, through the first two quarters of fiscal year 2019. The record-level data we analyzed are current as of the date Border Patrol or OFO provided them to us. Specifically, Border Patrol data for fiscal years 2016 through 2018 are current as of January 2019; Border Patrol data for the first two quarters of fiscal year 2019 and selected fields for all fiscal years are current as of April 2019. OFO data for fiscal years 2016 through 2018 are current as of February 2019; OFO data for the first two quarters of fiscal year 2019 are current as of June 2019. We grouped the ages of apprehended children in family units (e.g., ages 0 to 4, 5 to 11, and 12 to 17) according to key agency and court documents. While most of our analysis was conducted on the apprehensions of individuals in family units, we were also able to analyze the composition of family units (i.e., as a group rather than individuals) apprehended by Border Patrol. Specifically, Border Patrol uses a family unit number to link the records of adult(s) and children processed as a family unit. As a result, we analyzed whether the family unit was headed by an adult male or adult female and how many children were in the family unit. We could not conduct a similar analysis for the family units apprehended by OFO, because OFO does not assign family units unique identifying numbers to link family members in its data system. As a result, we were unable to report on the composition of family units that OFO encountered. As part of our analysis of CBP data, we determined the number of family unit members Border Patrol and OFO data indicated as separated from April 19, 2018 through March 31, 2019. We selected this time frame because Border Patrol began to systematically collect data on family separations in its data systems on April 19, 2018, and the second quarter of fiscal year 2019 was the most current data available at the time of our review. Our analysis of the reasons for family separations is based on the data recorded by agents and officers in Border Patrol's and OFO's data systems. During the period of our review, Border Patrol's and OFO's data systems included options for agents and officers to choose from to explain the reason for the separation, including, for example, "family member prosecuted criminal history" and "family member prosecuted other reasons." These reasons, and the numbers of separations for each reason, reflect CBP data and may not match the information about separations (including numbers of, reasons for, and timeframes of separations) that DHS reported to a federal court in response to related litigation, such as Ms. L. v. ICE.
According to court filings, the information provided in response to that litigation was based on a manual review of multiple federal datasets and reflect categories as required by the litigation. We excluded family separations indicated in CBP data as temporary from our analysis. We also reported separately on the number of adults and children who were apprehended together, but whom CBP assessed to have potentially invalid family relationships and thus processed separately, as CBP does not consider these family separations. To assess the reliability of CBP data, we completed a number of steps, including (1) performing electronic testing for obvious errors in accuracy and completeness, such as running logic tests; (2) reviewing existing information about the data and the systems that produced them, such as relevant training materials for Border Patrol agents and OFO officers who use agency data systems; and (3) discussing data entry issues and data limitations with Border Patrol and OFO officials. We also received demonstrations on the data systems from Border Patrol and OFO officials at headquarters. The limitations and determinations of reliability for the Border Patrol and OFO data are discussed in more detail below. Border Patrol data. We identified a small number of Border Patrol apprehension records that had the same date of apprehension and unique identifier, known as the A-number. It is possible that these apprehension records represented one apprehended individual that Border Patrol agents processed as two apprehensions. These records constituted less than 1 percent of the almost 2.4 million apprehension records we analyzed. We included these apprehension records in our analysis because Border Patrol considers them unique apprehensions and because their small number does not materially affect our analysis. We did not include a small number of records (less than 1 percent of apprehensions of family unit members) that had a family unit number but did not meet CBP s definition of a family unit in our analysis of records of family unit members. For example, a small number of family unit member records did not include a date of birth, so we could not determine whether the individual was an adult or child (i.e., under or over the age of 18 years). For our analysis of the reasons for family separations, we found a small number (18) of Border Patrol records that included more than one separation reason, so we could not distinguish which reason led to a permanent family separation. Thus, we excluded these records from our analysis of the reasons for family separations. According to Border Patrol headquarters officials and documents, in situations in which only one of the adults in a two-parent family was separated, the child or children would remain with the other adult as an intact family unit (and the child would not be designated a UAC and transferred to the custody of ORR). As such, in these situations, we included the separated adults in our reported numbers of separated family unit members, but did not include associated remaining family units in our analysis of separated family units. We found 18 records for family units that included one adult and one child, with one of the family unit members separated. According to Border Patrol s procedures, in the event a family separation occurs, both family unit members are to be processed in the data system as separated. 
We included these records in the number of family unit members, but did not include them in our analysis of separated family unit members, as it was unclear from the records whether or not the family unit was separated. We identified data reliability issues with Border Patrol's data on family separations, as described in our report. When reporting these data, we rounded down to the nearest increment of five, and described relevant data using modifiers such as "at least" because of possible missing information. This enabled us to report on the Border Patrol data that we determined were sufficiently reliable for our purposes. OFO data. For the OFO data, we excluded approximately 11 percent of all apprehension records (including single adults, UAC, and parents and children that arrived as part of a family unit) from our analyses because we could not confirm an A-number for those apprehensions. Among the apprehension records missing an A-number, 44 percent were cases in which OFO officers paroled the individuals and, according to OFO officials, officers are not required to assign an A-number to these individuals. In addition, 47 percent of the records with a missing A-number were cases that involved individuals withdrawing their applications for admission into the United States, in which OFO officers have discretion whether or not to assign an A-number. According to OFO officials, additional records with missing A-numbers may be due to human error during data entry or problems with the data system saving this information in the database that OFO used to pull the data. Finally, we collapsed 153,025 apprehension records into 71,986 apprehension records because we determined that they were duplicate records for the same individual and the same apprehension, based on factors such as A-number, birth date, and date and time of apprehension. As a result, we determined that we could not present precise figures for analyses that include OFO data and instead provided approximations throughout the report. We rounded all data and figures on OFO apprehensions, including where OFO's data inform CBP data and figures, down to the hundreds place. As an exception, for the much smaller number of OFO family separations, as compared with total apprehensions, we rounded the figures down to increments of five, and described relevant data using modifiers such as "at least" because of possible missing information. This enabled us to report on the OFO data that we determined were sufficiently reliable for our purposes. (A simplified sketch of these deduplication and rounding conventions appears below.) With the previously described modifications, we determined that the Border Patrol and OFO data were sufficiently reliable to generally describe the number and demographic characteristics of family units apprehended by CBP along the southwest border. To address the second objective, on the extent to which CBP has developed and implemented policies and procedures for processing family units, including how CBP defines family units, assesses the validity of family relationships, and determines whether family separations are warranted, we reviewed CBP, Border Patrol, and OFO policy documents, training materials, and other guidance documents in effect from October 2015 through December 2019. For example, we reviewed CBP's 2015 National Standards on Transport, Escort, Detention, and Search policy, as well as Border Patrol's data system processing guidance and Border Patrol and OFO policies and procedures on how agents are to record family separations in agency data systems, among other documents.
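The sketch below illustrates, in simplified form, the deduplication and rounding conventions described above. It is a hypothetical example only: the column names, record layout, and sample values are ours for illustration and do not reflect the actual structure of CBP's or OFO's data systems.

import pandas as pd

def collapse_duplicates(records: pd.DataFrame) -> pd.DataFrame:
    # Treat rows with the same A-number, birth date, and apprehension
    # date/time as duplicate records of the same apprehension.
    return records.drop_duplicates(
        subset=["a_number", "birth_date", "apprehension_datetime"]
    )

def round_down(count: int, increment: int) -> int:
    # Round a count down to the nearest increment (e.g., 5 or 100) so the
    # figure can be described with a modifier such as "at least."
    return (count // increment) * increment

# Tiny illustrative dataset: two rows describe the same apprehension.
df = pd.DataFrame({
    "a_number": ["A200", "A200", "A201"],
    "birth_date": ["1990-05-01", "1990-05-01", "2012-11-20"],
    "apprehension_datetime": ["2018-07-01 10:30", "2018-07-01 10:30", "2018-07-01 10:30"],
})
print(len(collapse_duplicates(df)))   # 2 unique apprehensions

# Illustrative reporting conventions with hypothetical counts.
print(round_down(71_986, 100))        # apprehension totals reported down to the hundreds place
print(round_down(2_733, 5))           # separation counts reported as "at least" a multiple of five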
We compared CBP, Border Patrol and OFO policies and procedures to Standards for Internal Control in the Federal Government related to identifying, analyzing, and responding to change; designing control activities to achieve objectives and identify risks; and using quality information to achieve objectives. We also compared Border Patrol definitions for family units, and processes and guidance for tracking family units, invalid family units, and family unit separations against CBP and Border Patrol policy. To evaluate how Border Patrol recorded information for family units apprehended from June 28, 2018 through March 31, 2019, we also selected a sample of ORR records for UAC involved in family separations and compared them to Border Patrol apprehensions data for the same children. Specifically, we selected a small, random, nongeneralizable sample of 40 ORR records for UAC involved in family separations. We then matched all 40 selected records to Border Patrol apprehensions data, using unique identifiers. Our findings are not generalizable due to the size of our sample, so we cannot use our findings to assess the magnitude of the issues we identified in Border Patrol data. We limited the records from which we selected our sample to those ORR records that included an A-number, a unique identifier, for the adult separated from the child in ORR custody, since Border Patrol tracks its separation reasons in the adult s records. Finally, we compared this information with CBP s October 2015 National Standards on Transport, Escort, Detention, and Search policy, which states that family separations must be documented in the appropriate data systems. We also assessed information against federal internal control standards, which call for management to identify and use quality information to achieve the entity s objectives and address risks, among other control activities. To describe how Border Patrol agents document the reasons for and circumstances of each family separation case, we reviewed a nongeneralizable sample of the DHS Form I-213, Record of Deportable/Inadmissible Alien (Form I-213), which is a form that agents are required to complete for each individual CBP apprehends. Specifically, Border Patrol provided us with Forms I-213 for the adults and children involved in the three most recent instances of family separation from June 28, 2018 through March 30, 2019, in each of Border Patrol s nine sectors along the southwest border. Two of the sectors only had one family separation during that period, so we reviewed the forms for a total of 23 family separations. We reviewed a sample of Forms I-213 prepared by Border Patrol agents, as Border Patrol separated approximately 95 percent of the family separations indicated in CBP data during the period we reviewed. We did not review a sample of Forms I-213 prepared by OFO officers, given the relatively smaller number of families separated by OFO. In addition, we reviewed a sample of forms for cases of family separations only, and did not review forms for cases in which Border Patrol determined the family relationship was invalid because Border Patrol officials told us that they do not record information about assessments of invalid family relationships on the Form I-213. 
Finally, we compared this information with a 2015 CBP policy that states that family separations must be documented in the appropriate data systems; a June 2018 CBP policy that includes potential reasons to warrant family separations; and federal internal control standards, which call for management to identify and use quality information to achieve the entity's objectives and address risks, among other control activities. To address the third objective and examine the extent to which ICE has developed and implemented policies and procedures for processing families apprehended along the southwest border, we reviewed ICE policy documents, training materials, and other guidance documents. For example, we reviewed ICE's Juvenile and Family Residential Management Unit Field Office Juvenile Coordinator Handbook, ICE's Family Residential Standards, ICE's data system training manual, and ICE's detained parent policy. We compared ICE's processes against ICE policies and procedures and federal internal control standards, which call for management to design the entity's information system and related control activities to achieve objectives and respond to risks. ICE data. To report on family members apprehended by CBP and detained in ICE family residential centers, we reviewed ICE detention data from June 2014, when ICE opened its first family residential center on the southwest border, through fiscal year 2018, the most current data available at the time of our review. The data for all fiscal years are current as of May 2019, when ICE provided us with record-level data to analyze. To assess the reliability of ICE's data, we completed a number of data reliability steps, including (1) performing electronic testing for obvious errors in accuracy and completeness, such as running logic tests; (2) reviewing existing information about the data and the systems that produced them, such as relevant training materials for the ICE officers who use them; and (3) discussing data entry issues and data limitations with ICE officials. We also received demonstrations on ICE's data system from officials at headquarters. We determined that the data were sufficiently reliable to describe the numbers and demographic characteristics of family members who were apprehended by CBP and detained by ICE at one of its family detention facilities. Additionally, we collected and reviewed data on the families whom ICE separated from July 2018 through September 2019. We selected this time frame because July 2018 is when ICE began to require its field offices to report all instances of family separations to headquarters, which tracks the information on a spreadsheet, and because September 30, 2019, was the end of the fiscal year. We reported the total number of family separations from the spreadsheet, but could not independently verify the number of separations in ICE's spreadsheet because ICE does not track family separations systematically in its data system. As a result, we reported the total number of family separations, according to ICE, for context to demonstrate that most family separations occur when family units are in CBP custody. To describe how DHS shares information with HHS about UAC, including those involved in family separations, we reviewed DHS and HHS interagency agreements, including the April 2018 information sharing memorandum of agreement and July 2018 Joint Concept of Operations. Additionally, we interviewed DHS and HHS officials at headquarters and DHS officials at locations along the southwest border.
We compared the information we gathered with DHS and HHS interagency agreements, which provide expectations for interagency information sharing and procedures for the care and custody of UAC. We also compared DHS and HHS information sharing practices to leading practices for collaboration among federal agencies. We conducted this performance audit from July 2018 to February 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: U.S. Customs and Border Protection (CBP) Apprehensions and U.S. Immigration and Customs Enforcement (ICE) Detentions of Family Units This appendix provides additional information about apprehensions of noncitizen family units by CBP s U.S. Border Patrol and Office of Field Operations (OFO) at or between U.S. ports of entry from fiscal year 2016 through the second quarter of fiscal year 2019. It also provides additional information about family unit members who were apprehended by CBP and subsequently detained by U.S. Immigration and Customs Enforcement (ICE) at a family residential center at some point from fiscal year 2015 through fiscal year 2018. <9. Demographic Information and CBP Processing Decisions for Family Units> The following tables contain information on the demographics of CBP apprehensions of noncitizen family units and family unit members and the processing decisions that CBP agents and officers made for them. CBP data indicate that Border Patrol was responsible for the majority of the overall number of family unit member apprehensions by CBP from fiscal year 2016 through the second quarter of fiscal year 2019 (see table 9). CBP data indicate that family unit member apprehensions grew as a percentage of total CBP apprehensions from fiscal year 2016 through the second quarter of fiscal year 2019 (see table 10). For example, CBP data indicate that apprehensions of family unit members grew from about 22 percent of total southwest border apprehensions in fiscal year 2016 to about 51 percent of such apprehensions during the first two quarters of fiscal year 2019. CBP data indicate that most apprehensions of family unit members from fiscal year 2016 through the second quarter of fiscal year 2019 were nationals of Central American countries (see table 11). CBP data indicate that the majority of apprehensions of adult family unit members by CBP were females, while the majority of children were male (see table 12). Border Patrol s data system collects information about the family units it apprehends. Border Patrol s data indicate that family units that agents apprehended were generally headed by females, although the number of family units headed by males and two-parent family units increased from fiscal year 2016 through the first two quarters of fiscal year 2019 (see table 13). Border Patrol s data indicate that most Border Patrol apprehensions of family unit members occurred in just three sectors (Rio Grande Valley, Texas; El Paso, Texas; and Yuma, Arizona) from fiscal year 2016 through the second quarter of fiscal year 2019 (see table 14). 
OFO data indicate that most OFO apprehensions of family unit members occurred in just four ports of entry (San Ysidro, California; El Paso, Texas; Hidalgo, Texas; and Nogales, Arizona) from fiscal year 2016 through the second quarter of fiscal year 2019 (see table 15). CBP data indicate that the majority of apprehensions of family unit members resulted in the family unit members being released into the interior of the United States with a notice to appear before an immigration court, which became increasingly common from fiscal year 2016 through the second quarter of fiscal year 2019 (see table 16). <10. Family Units CBP Separated at the Border> The following tables contain information on family units that CBP separated at the border. CBP data indicate that the majority of children that CBP separated from their parents from April 19, 2018 through March 31, 2019 were male (see table 17). CBP data indicate that CBP separated children that ranged in age from less than 1 year old to 17 years old from their parents from April 19, 2018 through March 31, 2019, and the majority of separated children were age 12 and over (see table 18). CBP data indicate that the majority of children that CBP separated from April 19, 2018, through March 31, 2019, were nationals from Central American countries and that more than half were Guatemalan nationals (see table 19). Border Patrol data indicate that the majority of family units that Border Patrol separated from April 19, 2018 through March 31, 2019 were headed by males who were apprehended with a single child (see table 20). Border Patrol data indicate that most adults that were separated from their children by Border Patrol from April 19, 2018, through March 31, 2019, had not been previously apprehended by CBP (see table 21). <11. Demographic Information and ICE Processing Decisions for Family Units Detained at ICE Family Residential Centers> The following tables and figures contain information about the noncitizen family unit members apprehended by CBP and detained by ICE at ICE's family residential centers from fiscal year 2015 through fiscal year 2018. ICE data indicate that from fiscal year 2015 through fiscal year 2018, ICE detained 139,098 family unit members at its family residential centers (see table 22). ICE data indicate that most child family unit members (ages 0 to 17) detained in ICE detention facilities were under the age of 13 (see table 23). ICE data indicate that the majority of adults detained at ICE's family residential centers were female, and the gender of children detained was relatively equal between male and female (see fig. 7). ICE data indicate that the majority of family unit members detained at ICE's family residential centers were from El Salvador, Guatemala, and Honduras, as well as Mexico (see fig. 8). ICE data indicate that the vast majority of family unit members who were detained in one of ICE's family residential centers were subsequently released by ICE into the interior of the United States (see table 24).
Appendix III: Comments from the Department of Homeland Security
Appendix IV: Comments from the Department of Health and Human Services
Appendix V: GAO Contact and Staff Acknowledgments <12. GAO Contact> <13. Staff Acknowledgments>
In addition to the contact named above, Kathryn Bernet (Assistant Director), Leslie Sarapu (Analyst in Charge), Hiwotte Amare, James Ashley, Kathleen Donovan, Michele Fejfar, Cynthia Grant, Michael Harmond, Eric Hauswirth, Stephanie Heiken, Jan Montgomery, Heidi Nielson, Kevin Reeves, and Jonathan Still made key contributions to this report.
Why GAO Did This Study
In fiscal year 2019, CBP reported apprehending more than 527,000 noncitizen family unit members at or between U.S. ports of entry along the southwest border—a 227 percent increase over fiscal year 2018. In April 2018, the U.S. Attorney General issued a memo on criminal prosecutions of immigration offenses, which DHS officials said led to an increase in family separations.
GAO was asked to review issues related to DHS's processing of family units. This report examines (1) CBP data on apprehended family unit members; the extent to which (2) CBP and (3) ICE developed and implemented policies and procedures for processing family units; and (4) how DHS and HHS share information about UAC. GAO analyzed record-level DHS and HHS data and documents; interviewed DHS and HHS officials; and visited DHS locations in California and Texas where CBP apprehensions of family units increased in 2017.
What GAO Found
Data from the Department of Homeland Security's (DHS) U.S. Customs and Border Protection (CBP) indicate that apprehensions of family unit members (noncitizen children under 18 and their parents or legal guardians) grew from about 22 percent of total southwest border apprehensions in fiscal year 2016 to about 51 percent of such apprehensions during the first two quarters of fiscal year 2019—the most current data available. During this period, CBP data indicated that most apprehensions of family units—about 76 percent—occurred between ports of entry by the U.S. Border Patrol (Border Patrol). With regard to family separations, from April 2018 through March 2019, CBP data indicate it separated at least 2,700 children from their parents, processing them as unaccompanied alien children (UAC) and transferring them to the Department of Health and Human Services (HHS).
CBP developed some policies and procedures for processing family units but does not have sufficient controls to ensure effective implementation. For example, CBP policy requires that Border Patrol agents and officers track apprehended family unit members and, if applicable, subsequent family separations in agency data systems. GAO's analysis of Border Patrol documents and data indicates that its agents have not accurately and consistently recorded family units and separations. Specifically, GAO examined a nongeneralizable sample of 40 HHS records for children involved in family separations between June 2018 and March 2019 and matched them to Border Patrol apprehensions data for these children. GAO found Border Patrol did not initially record 14 of the 40 children as a member of a family unit (linked to a parent's record) per Border Patrol policy, and thus did not record their subsequent family separation. GAO found an additional 10 children among the 40 whose family separations were not documented in Border Patrol's data system as required by CBP policy during this period. Border Patrol officials were unsure of the extent of these problems, and stated that, among other things, data-entry errors may have arisen due to demands on agents as the number of family unit apprehensions increased. Thus, it is unclear the extent to which Border Patrol has accurate records of separated family unit members in its data system. Further, Border Patrol agents inconsistently recorded information about the reasons for and circumstances surrounding family separations on required forms. Developing and implementing additional controls would help Border Patrol maintain complete and accurate information on all family separations.
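The record-level comparison described above is, in essence, a cross-system consistency check. The sketch below illustrates the general approach with hypothetical field names and data; it is not drawn from actual DHS or HHS systems and is included only to show how sampled children's records can be matched on a shared identifier and mismatches flagged.

import pandas as pd

# Hypothetical sample of children's records from the receiving agency.
sample = pd.DataFrame({"child_a_number": ["A100", "A101", "A102"]})

# Hypothetical apprehension records from the apprehending agency's data system.
apprehensions = pd.DataFrame({
    "child_a_number": ["A100", "A101", "A102"],
    "family_unit_number": ["FU-1", None, "FU-3"],  # None = child not linked to a parent's record
    "separation_recorded": [True, False, False],
})

merged = sample.merge(apprehensions, on="child_a_number", how="left")
not_linked = merged[merged["family_unit_number"].isna()]
linked_unrecorded = merged[merged["family_unit_number"].notna() & ~merged["separation_recorded"]]
print(len(not_linked), "sampled children not recorded as part of a family unit")
print(len(linked_unrecorded), "linked children with no separation recorded")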
DHS's U.S. Immigration and Customs Enforcement (ICE) is, among other things, responsible for detaining and removing those family units apprehended by CBP. ICE officers are to determine whether to accept or deny a referral of a family unit from CBP for detention in one of ICE's family residential centers, release family unit members into the interior of the United States, or remove family unit members (who are subject to final orders of removal) from the United States. ICE has procedures for processing and releasing family units from ICE custody. However, with regard to family unit separations, ICE relies on a manual process to track separations that occur in ICE custody (generally at one of ICE's family residential centers) and does not systematically record this information in its data system. Without a mechanism to do so, ICE does not have reasonable assurance that parents whom ICE separated from their children and are subject to removal are able to make arrangements for their children, including being removed with them, as provided in ICE's policy for detained parents.
In 2018, DHS and HHS developed written interagency agreements regarding UAC. However, DHS and HHS officials stated they have not resolved long-standing differences in opinion about how and what information agencies are to share related to the care and placement of those children, including those referred to HHS after a family separation. GAO found that DHS has not consistently provided information and documents to HHS as specified in interagency agreements. HHS officials also identified additional information they need from DHS, about those adults apprehended with children and later separated, to inform their decisions about placing children with sponsors and reunifying separated families, when necessary. Increased collaboration between DHS and HHS about information sharing would better position HHS to make informed and timely decisions for UAC.
What GAO Recommends
GAO is making eight recommendations to DHS and one to HHS. Among them, CBP should develop and implement additional controls to ensure that Border Patrol agents accurately record family unit separations in data systems. GAO also recommends that ICE systematically track in its data system the family units ICE separates. Further, DHS and HHS should collaborate about information sharing for UAC. DHS and HHS concurred with the recommendations. |
<1. Background> This section provides information on the electricity provider, impact of 2017 hurricanes, and status of electricity restoration in Puerto Rico and the U.S. Virgin Islands. Also, it describes FEMA's Public Assistance Program. <1.1. Puerto Rico> Electricity provider. PREPA is a public power utility owned by the Commonwealth of Puerto Rico and a monopoly supplier of electricity in the commonwealth. It is also one of the nation's largest public power utilities, serving approximately 1.5 million customers. PREPA was approximately $9 billion in debt prior to Hurricanes Irma and Maria, and its electric power infrastructure was known to be in poor condition, largely due to underinvestment and poor maintenance practices. In May 2018, we found that inadequate management of PREPA's financial condition contributed to Puerto Rico's persistent deficits. Specifically, PREPA did not update or improve its electric generation and transmission systems, which hampered their performance and led to increased costs of producing electricity that it did not fully pass on to consumers. In addition, Puerto Rico's economy is in a prolonged period of economic contraction, and according to U.S. Census Bureau estimates, its population declined from a high of approximately 3.8 million people in 2004 to 3.3 million people in 2017, a decline of 12.8 percent. Along with the declining population, demand for electricity declined 18 percent from 2007 to 2017, according to PREPA. Impact of 2017 hurricanes. Hurricanes Irma and Maria in September 2017 left Puerto Rico's entire electricity grid inoperable, according to the economic and disaster recovery plan for Puerto Rico. According to a report by a working group that included utilities and national laboratories, among others, because of the extended and unprecedented damage, a significant portion of the generation, transmission, and distribution system must be rebuilt, including high-voltage transmission lines that often survive lower-category hurricanes. While Puerto Rico's population had already been declining, the migration of people from Puerto Rico accelerated following the hurricanes, according to PREPA. Status of electricity restoration. According to PREPA, it took roughly 11 months for power to be restored to all of the customers able to receive power safely in Puerto Rico following the hurricanes. A PREPA official told us that PREPA's estimates of customers with power restored are based on the number of meters that it knows are served by a given power line and on the number of meters it can read currently. Power has been restored to 100 percent of customers that are able to receive power safely, but this does not mean that all pre-storm customers have power restored, as some structures may not have been deemed safe for power restoration, according to PREPA officials. Figure 1 shows the percentage of customers with electricity restored in Puerto Rico beginning in January 2018 when PREPA was able to start estimating this information. Although PREPA estimates that electricity had been restored to all customers by August 2018, in some instances electricity service has been supported by temporary generators, and outages have continued. For example, as of December 11, 2018, USACE was supporting seven generators that were supporting micro grids for the island municipalities of Vieques and Culebra.
These islands had previously been served by an undersea transmission line connecting the islands to PREPA's main grid on Puerto Rico. According to the U.S. Energy Information Administration, total electricity sales in Puerto Rico returned to pre-Hurricane Maria levels as of April and May 2018, although residential electricity sales appear to continue to lag historical levels, reflecting some continued outages. <1.2. U.S. Virgin Islands> Electricity provider. VIWAPA, a public utility, is a monopoly provider of electric power services in the U.S. Virgin Islands and serves approximately 55,000 customers throughout the territory. Like PREPA, VIWAPA faced financial challenges before the hurricanes. The USVI Hurricane Recovery and Resilience Task Force Report noted that VIWAPA has a 17 percent non-payment rate across its customer base, a significant unfunded pension liability, and long-term debt commitments of $265 million. In addition, the report states that the U.S. Virgin Islands' energy system faces many challenges that have led to higher rates and a historically unreliable grid. These include an aging, inefficient, and oversized infrastructure and heavy reliance on imported fossil fuels. The report also says that peak demand declined 18 percent from 2011 through 2017, driven by a variety of factors, including population decline. In addition, the report says that VIWAPA's high energy rates and reliability issues have led some customers, particularly larger commercial and industrial ones, to leave the grid. Impact of 2017 hurricanes. Hurricanes Irma and Maria damaged more than 90 percent of VIWAPA's aboveground power lines and over 20 percent of VIWAPA's generation capacity, according to the USVI Hurricane Recovery and Resilience Task Force Report. Specifically, the hurricanes damaged more than 20,000 poles and 1,100 miles of transmission and distribution lines, according to the report. Although 90 percent of VIWAPA's aboveground power lines were damaged, this was far fewer than the miles of transmission and distribution lines damaged in Puerto Rico. Electricity status. According to VIWAPA, following the hurricanes, it took roughly 5 months for power to be restored to all of the eligible customers in the U.S. Virgin Islands. Eligible customers were those whose homes were safely able to receive power. Some homes had suffered substantial damage to their electrical infrastructure from the hurricanes and were not able to receive power safely until their electrical equipment was repaired. VIWAPA's estimates of customers with power restored are based on the number of meters that VIWAPA knows are served by a given power line, as VIWAPA's automated system for determining the percentage of customers without power was destroyed and is still being restored, according to a FEMA official. Although electricity service has been restored, electricity demand has not recovered to pre-storm levels. According to the USVI Hurricane Recovery and Resilience Task Force Report, VIWAPA's peak demand, the maximum energy load consumed by customers at any point in a year, was approximately 107 megawatts before the storms, but as of May 2018 it was 66 megawatts. The report says that demand will likely rebound to some degree as the territory rebuilds and recovers; however, it is unclear how quickly or by how much.
<1.3. FEMA's Public Assistance Program> FEMA, in leading the coordination of federal disaster response efforts, provides assistance through its Public Assistance Program to state, territorial, local, and tribal governments and certain types of private nonprofit organizations to assist them in responding to and recovering from major disasters or emergencies. FEMA Public Assistance Program funds can be provided for emergency work, such as for emergency protective measures that must be done immediately to protect public health and safety; permanent work, which includes the restoration of disaster-damaged facilities; and management costs, which include indirect costs, administrative expenses, or other expenses that are not directly chargeable to a specific project and that a recipient or subrecipient incurs in administering and managing Public Assistance awards. Generally, emergency work takes place for about 6 months following a disaster, while permanent work can take place over a decade, according to FEMA officials. FEMA can provide grants for both emergency and permanent work, and it can also provide direct federal assistance for emergency work. Under direct federal assistance, federal agencies directly perform or contract for the emergency work. FEMA's Public Assistance Program allows for the federal government to provide direct assistance at the request of the state, territorial, and local governments when the impact of an incident is so severe that the state, territorial, and local governments lack the capability to perform or contract eligible emergency work. Under the Public Assistance Program and the Stafford Act, FEMA may mission assign, that is, issue a work order that directs another federal agency, such as DOE or USACE, to utilize its authorities and the resources granted to it under federal law in support of this direct assistance to state, local, and territorial governments. <1.4. FEMA's Community Disaster Loan Program> The Community Disaster Loan program provides loans to local governments that have suffered substantial loss of tax and other revenue in areas included in a major disaster declaration. The loan funding may be used for existing essential municipal functions and expanded functions required to meet disaster-related needs, but not for capital improvements or repair or restoration of damaged public facilities. <2. The Federal Role in Electricity Grid Restoration Was Unprecedented in Puerto Rico, and Various Factors Affected the Support Provided in Puerto Rico and the U.S. Virgin Islands> Federal agencies provided traditional support to restore electricity in response to Hurricanes Irma and Maria in both Puerto Rico and the U.S. Virgin Islands, such as providing temporary power for critical facilities. They also provided unprecedented support in Puerto Rico by helping to coordinate efforts to repair Puerto Rico's electricity grid rather than primarily supporting the local utility's efforts. Factors that affected the electricity grid restoration efforts in Puerto Rico and the U.S. Virgin Islands included logistical constraints, availability of materials, the financial condition of local utilities, and the unprecedented and extensive role of federal agencies. Appendix I provides timelines of federal and other efforts to support electricity grid restoration in Puerto Rico and the U.S. Virgin Islands after the 2017 hurricane season. <2.1. Federal Support Provided to Restore Electricity in Puerto Rico and the U.S. Virgin Islands in Response to the 2017 Hurricanes Included an Unprecedented Role for the Federal Government>
Federal agencies assisted in the restoration of electricity after Hurricanes Irma and Maria in a variety of ways. FEMA provided billions in grants and direct federal assistance for electricity restoration. DOE provided subject matter expertise and coordination assistance. USACE provided temporary emergency power in Puerto Rico and the U.S. Virgin Islands. In addition, FEMA and USACE undertook unprecedented roles to help coordinate and directly assist with grid restoration in Puerto Rico. Grants, direct federal assistance, and loans from FEMA. FEMA provided billions in grants and direct federal assistance to support electricity restoration in Puerto Rico and the U.S. Virgin Islands through its Public Assistance Program. As public utilities, both PREPA and VIWAPA are eligible applicants for federal assistance through FEMA's Public Assistance Program for the repair, restoration, and replacement of public facilities damaged or destroyed by a major disaster. As of July 20, 2018, FEMA had obligated approximately $3.2 billion for direct federal assistance through mission assignments and Public Assistance grant funds for electricity restoration in Puerto Rico and approximately $795 million for the U.S. Virgin Islands. This includes $2 billion that FEMA obligated for direct federal assistance through mission assignments to USACE for temporary emergency power and grid restoration efforts in Puerto Rico. In the U.S. Virgin Islands, FEMA obligated $63 million for direct federal assistance related to electricity restoration, most of which was obligated to USACE and DOE. Table 1 shows FEMA funding obligations for electricity restoration efforts in Puerto Rico and the U.S. Virgin Islands. In addition, FEMA provided $75 million to VIWAPA through the Community Disaster Loan program as of July 20, 2018, according to FEMA officials. FEMA officials said that the most common use for Community Disaster Loan funds is payroll, and other examples of eligible uses include employee benefits, facilities maintenance costs, and normal operating materials. Coordination and technical assistance from DOE. DOE received mission assignments from FEMA and deployed staff from its headquarters, site offices, and power marketing administrations to provide subject matter expertise and technical assistance in support of electricity grid damage assessments and power restoration efforts in both Puerto Rico and the U.S. Virgin Islands. According to DOE officials, DOE's primary role in the response efforts on Puerto Rico and the U.S. Virgin Islands was coordination and provision of subject matter experts, as is typical for DOE's role as the lead agency for the energy sector emergency support function. In Puerto Rico, however, DOE provided more advisors for a longer period of time than would be typical because of the extent of the damage to the electricity grid in Puerto Rico and PREPA's limited capacity to respond, according to DOE officials. Specifically, DOE staffed up to 12 project support advisors to Puerto Rico from October 18, 2017, to August 8, 2018, and one supply chain support advisor from December 18, 2017, to March 16, 2018. These advisors provided subject matter expertise to USACE by reviewing construction plans and providing recommendations for prioritization, and scheduling and assisting in inventory management for incoming electrical grid equipment, among other things, according to DOE. In addition, in the U.S.
Virgin Islands DOE deployed a team of 36 people from the Western Area Power Administration along with trucks and materials to help rebuild the electricity grid through a FEMA mission assignment. DOE officials told us that the department is also providing ongoing support on how to improve grid resilience as part of grid restoration and recovery efforts in both Puerto Rico and the U.S. Virgin Islands. Temporary power from USACE. USACE provided temporary emergency power for critical facilities in Puerto Rico and the U.S. Virgin Islands. These temporary emergency power missions provided and maintained generators to deliver electricity to critical public facilities, such as hospitals and relief centers. After receiving a FEMA mission assignment to provide temporary emergency power in Puerto Rico, USACE deployed its Emergency Power Planning and Response Team, USACE government employees, soldiers from the 249th Engineer Battalion, and contractors. USACE installed a record number of emergency electric generators in Puerto Rico over 2,300 through the end of May 2018. The previous record was 310 emergency generators installed in response to Hurricane Katrina. On May 17, 2018, FEMA approved the extension of the USACE mission assignment for emergency power to November 30, 2018. This extension permitted USACE to continue its support for the more than 700 generators still in use throughout Puerto Rico at that time. FEMA later extended the mission assignment until April 8, 2019. As of December 11, 2018, USACE was supporting 24 generators in Puerto Rico, seven of which were supporting micro grids for the island municipalities of Vieques and Culebra. In the U.S. Virgin Islands, USACE installed 180 generators as a part of its temporary emergency power mission. USACE s temporary emergency power mission for the U.S. Virgin Islands was completed in February 2018, and USACE is no longer supporting generators there. Unprecedented Roles by FEMA and USACE in Puerto Rico. In addition to the typical roles federal agencies undertake in restoration activities, FEMA and USACE undertook unprecedented roles in Puerto Rico because of the severe and widespread impacts of Hurricane Maria and PREPA s limited capacity. For the first time in its history, FEMA undertook the role of helping to coordinate major electricity grid restoration because PREPA did not have the necessary capability, capacity, or structure to respond, according to FEMA officials. FEMA officials also noted that PREPA s workers were not only engaged in restoration work but were also victims dealing with the same post- hurricane effects as the rest of the population. As part of its response, FEMA mission assigned USACE to lead federal efforts to repair Puerto Rico s electricity grid a role USACE had not played in the past during a domestic disaster response. Specifically, on September 30, 2017, the FEMA Administrator tasked USACE with leading the planning, coordination, and integration of the grid restoration. FEMA assigned USACE to lead federal efforts and provide direct support for grid restoration because PREPA was overwhelmed and had liquidity issues and USACE had the structures in place to award contracts with and bring in grid restoration crews, according to FEMA officials. In order to carry out its mission assignment, USACE issued contracts to bring lineworkers and materials to Puerto Rico to support the reinstallation and repair of transmission and distribution lines, among other power restoration activities. 
As of June 30, 2018, USACE had obligated approximately $1.5 billion on these contracts. Figure 2 shows USACE and its contractors working to restore electricity in Puerto Rico. USACE s grid restoration mission assignment from FEMA ended on May 18, 2018, because, according to FEMA officials, power had been restored to approximately 98 percent of customers and PREPA, with its remaining contractors, had adequate capability to do the remaining work. In addition to the federal response, PREPA issued its own contracts to bring in additional lineworkers, received assistance from the New York State Utility Contingent, and requested and received mutual assistance from other utilities. PREPA did not initially reach out for mutual assistance. About 6 weeks following Hurricane Maria, on October 31, 2017, PREPA formally requested aid from other utilities on the mainland through the American Public Power Association and the Edison Electric Institute. The electric power industry sent two individuals to Puerto Rico on November 3, 2017 and they began assessing storm damage and working with PREPA, FEMA, USACE, and DOE officials to develop a restoration plan. On November 22, 2017, the Governor of Puerto Rico appointed one of these individuals as Power Restoration Coordinator to oversee the multipronged restoration effort. According to the Power Restoration Coordinator, as a first step he worked to create an incident command structure, and incident management teams began arriving in December. Once the incident command structure was in place, the industry deployed additional crews, equipment and materials in January to accelerate the ongoing restoration efforts across the island. As discussed previously, local utilities are typically responsible for restoring service, with federal agencies providing financial and other support. In contrast, approximately half of the lineworkers working to restore the electricity grid in Puerto Rico were USACE or USACE contractors at the peak of restoration efforts in February 2018, as shown in figure 3. FEMA established a unified command structure to coordinate efforts of federal agencies, PREPA, PREPA s contractors, the New York State Utility Contingent, and utilities providing mutual assistance to PREPA, to help target priority work, ensure that crews could get to the work, and identify needed materials. Figure 4 shows the unified command structure. <2.2. Logistical Challenges and Other Factors Affected Federal Support to Restore Electricity> According to documents we reviewed and our interviews with officials and representatives, the most commonly cited factors that affected federal electricity grid restoration efforts in Puerto Rico and the U.S. Virgin Islands included (1) logistical challenges, (2) availability of materials, (3) financial condition of local utilities and poor condition of existing infrastructure, and (4) the extensive and unprecedented role of federal agencies. Logistical challenges. Responding to disasters on islands presents a number of logistical challenges. Specifically, according to federal officials, getting the crews, equipment, and materials needed to support restoration efforts to an island was more difficult and time- consuming than doing so on the mainland. This includes prepositioning assets, such as generators, and delivering equipment and materials in advance of a storm. The difficulties were greater in the days following the hurricanes since neither the ports nor the airports in Puerto Rico and the U.S. 
Virgin Islands had power, which prevented the delivery of materials to the islands. In Puerto Rico, the Port of San Juan reopened for daylight operations 3 days after Hurricane Maria made landfall; every airport and seaport had limited capacity after reopening for approximately 7 days post-landfall, according to FEMA s 2017 Hurricane Season After-Action Report. Federal officials in the U.S. Virgin Islands told us that they faced further delays locating key supplies because of inadequate labelling of shipping containers at the port. For example, some containers were marked only as disaster supply equipment, which did not sufficiently identify the contents within them. According to USACE s 2018 Remedial Action Program Senior Leader Briefing, USACE lacked the expertise and capabilities to manage the large operational logistics requirements to support the Puerto Rico and U.S. Virgin Islands response. Availability of materials. The sequence of three hurricanes making landfall in the United States in 2017 and the need to restore electricity service in Texas, Florida, and elsewhere, in addition to Puerto Rico and the U.S. Virgin Islands, complicated the restoration effort in the two territories. Since utilities in all affected areas were acquiring materials to restore electricity service, demand for these materials increased and available supplies were generally low; in some instances materials were only available as they were manufactured. Few, if any, materials were stockpiled locally on Puerto Rico. In addition, some of the equipment used in Puerto Rico was not standard in the continental United States and required ordering of specialized materials, resulting in delays in the restoration process. The U.S. Virgin Islands also faced supply issues, which became worse once grid recovery work in Puerto Rico began. Financial condition of local utilities and poor condition of existing infrastructure. Electric utilities in both Puerto Rico and the U.S. Virgin Islands were insolvent, which led to a lack of maintenance and presented its own challenges for restoring the grids after the storms. Specifically, PREPA was approximately $9 billion in debt before the 2017 hurricane season, with annual costs that exceeded its revenues. Puerto Rico s electric power infrastructure was in poor condition before the 2017 hurricane season largely because of PREPA s underinvestment and poor maintenance practices. For example, PREPA canceled its vegetation management program because of its financial situation; this contributed to the destruction of transmission and distribution lines when the hurricane arrived, according to FEMA officials. Similarly, in the U.S. Virgin Islands, financial challenges contributed to the extent of the damage to grid infrastructure. Specifically, VIWAPA officials told us that VIWAPA s financial challenges prevented certain infrastructure improvements and a large proportion of the electricity poles were at or above their weight capacity, increasing the likelihood and extent of wind damage during the hurricanes. According to VIWAPA officials, VIWAPA was aware that there were too many lines and heavy transformers on old poles, but was not in a position to address this concern prior to the hurricanes. Extensive and unprecedented role of federal agencies. FEMA did not anticipate or plan for the extensive role that it and USACE played in grid restoration in Puerto Rico. 
According to FEMA s after action report for the 2017 hurricane season, FEMA s planning assumptions for a hurricane, earthquake, or tsunami striking Puerto Rico and the U.S. Virgin Islands underestimated the actual requirements. As discussed above, prior to Hurricane Maria in Puerto Rico, USACE had never worked on a large-scale power restoration as part of a domestic disaster response and did not have expertise in this area, according to USACE officials. This affected grid restoration efforts. For example, USACE did not have a grid restoration contract in place to immediately initiate grid repair efforts, according to USACE officials. Rather, USACE issued an order off of a pre-existing contract that it had under its public works and engineering support function to bring electric utility lineworkers to Puerto Rico. According to USACE officials, a bid protest against one of USACE s contracts delayed its ability to increase the contract to bring more lineworkers to Puerto Rico. In addition, the contract review and approval process USACE used to obtain supplies took longer than it would typically take utilities to get supplies, according to FEMA officials we interviewed. According to USACE officials, USACE followed federal acquisition regulations, which is a slow process compared to private party purchases. USACE officials said that USACE is considering looking at what would be needed to create an advance grid restoration contract. FEMA, USACE, and DOE identified potential actions to address these challenges. According to its after action report, FEMA plans to establish a standing interagency Power Task Force to coordinate with DOE, USACE, and state and local governments and provide crisis planning for the energy sector emergency support function to support the restoration of power during future national response efforts. USACE s 2018 Remedial Action Program Senior Leader Briefing made recommendations to improve contingency contracting and operational logistics, among other things. Specifically, recommendations included that USACE review existing and planned advance contracts and make adjustments as necessary to increase capacity and improve capabilities, and that USACE work with FEMA to convene an interagency logistics planning team and identify logistics contracting gaps and propose government and private sector solutions. DOE s after action report for the 2017 hurricane season says that the lessons learned from the response to Hurricane Maria may prompt some programmatic improvements to the energy sector emergency support function roles and responsibilities related to island response, among other potential improvements. In addition, the report states that because of the extensive damage to grid infrastructure and the length of the restoration and recovery, there is an increasing need to incorporate resilience and hardening into restoration, recovery, and mitigation planning and execution. <3. Agency Comments> We provided a draft of this report to the Department of Defense (DOD), the Department of Homeland Security (DHS), DOE, and the governments of Puerto Rico and the U.S. Virgin Islands for review and comment. In its comments, reproduced in appendix II, DHS indicated that a top priority of DHS, FEMA and the entire federal government has been to provide life safety and life-sustaining resources to Puerto Rico and the U.S. Virgin Islands, including efforts to restore power and stabilize critical infrastructure. 
DHS, DOD, and DOE also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of Energy, the Secretary of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Timelines of Federal and Other Efforts to Support Electricity Grid Restoration See figures 5 and 6 for a timeline of federal and other efforts to support electricity grid restoration in Puerto Rico and the U.S. Virgin Islands after the 2017 hurricane season. Appendix II: Comments from the Department of Homeland Security Appendix III: GAO Contact and Staff Acknowledgments <4. GAO Contact> <5. Staff Acknowledgments> In addition to the contact named above, Quindi Franco (Assistant Director), Marya Link (Analyst in Charge), Janice Ceperich, William Gerard, Cindy Gilbert, Joseph Maher, David Marroni, Bolko Skorupski, Sheryl Stein, and Jarrod West made key contributions to this report. Why GAO Did This Study
In 2017, Hurricanes Irma and Maria damaged much of the electricity grids' transmission and distribution systems in USVI and Puerto Rico. The hurricanes left most of USVI's 106,405 people and all of Puerto Rico's 3.3 million without power and resulted in the longest blackout in U.S. history.
Under the National Response Framework, electric utilities are responsible for repairing infrastructure and restoring service. They often use mutual assistance—voluntary partnerships with other electric utilities—to bring in additional resources to help restore electricity. Federal agencies provide financial assistance; help coordinate the federal response; and in severe emergencies, provide logistical support, such as assisting in damage assessments and location and transportation of repair crews and equipment.
GAO was asked to review the federal response to the 2017 hurricanes. This report provides information on federal support for restoring the electricity grids in Puerto Rico and USVI and factors affecting this support. GAO has ongoing work examining federal support to improve grid resilience in Puerto Rico.
GAO reviewed agency documents and funding data through July 20, 2018, the most recent data available; interviewed officials from FEMA, DOE, and USACE; and conducted site visits to Puerto Rico and USVI.
What GAO Found
Federal agencies supported efforts to restore electricity in the U.S. Virgin Islands (USVI) and Puerto Rico through the types of support they traditionally provide following disasters and, in Puerto Rico, in some unprecedented ways.
USVI. Federal agencies provided traditional federal support to the electric utility's restoration efforts. For example, the Federal Emergency Management Agency (FEMA) provided financial assistance through its Public Assistance Program, and the Department of Energy (DOE) provided subject matter expertise to assist the local utility. In addition, the U.S. Army Corps of Engineers (USACE) provided generators for hospitals and other critical facilities. FEMA obligated about $795 million for these efforts as of July 20, 2018. According to the local utility, it took about 5 months for power to be restored to all customers with structures deemed safe for power restoration.
Puerto Rico. In addition to the traditional types of support, FEMA and USACE undertook unprecedented roles of helping to coordinate and directly assist with grid restoration in Puerto Rico. FEMA requested that USACE lead federal grid repair efforts because of the scale of the damage and because the Puerto Rico Electric Power Authority (PREPA) did not have the capacity to respond, according to FEMA officials. FEMA obligated about $3.2 billion for electricity restoration efforts as of July 20, 2018, and PREPA estimated that it took roughly 11 months for power to be restored to all customers with structures deemed safe for power restoration.
Various factors affected federal support for electricity grid restoration, according to officials GAO interviewed and documents reviewed. For example, getting the crews and materials needed to islands was more difficult and time-consuming than on the mainland. In Puerto Rico, PREPA was insolvent, which presented challenges for restoring the grid. For example, PREPA canceled its vegetation management program; this contributed to the destruction of the grid when the hurricane arrived, according to FEMA officials. In addition, FEMA did not anticipate or plan for the extensive federal role in grid restoration in Puerto Rico, and USACE did not have a contract in place to immediately initiate grid repair efforts, according to USACE officials. FEMA and USACE identified potential actions to address these challenges, such as reviewing advance contracts.
<1. The Nation Faces Ongoing Challenges Across the Biodefense Enterprise> Our past work has identified five key challenges related to the nation s ability to detect and respond to biological events that transcend what any one agency can address on its own. They include: (1) enterprise-wide threat determination, (2) situational awareness and data integration, (3) biodetection technologies, (4) biological laboratory safety and security, and (5) emerging infectious disease surveillance. The complexity and fragmentation of roles and responsibilities across numerous federal and nonfederal entities presents challenges to ensuring efficiency and effectiveness across the entire biodefense enterprise. In September 2018, the White House issued the National Biodefense Strategy and through NSPM-14 established a governance structure to guide its implementation. The activities and responsibilities assigned to the interagency governance body by the strategy and NSPM-14 may create new opportunities to make progress on these longstanding and complex issues. However, because implementation of the Strategy and NSPM-14 are in early stages, it remains to be seen how or to what extent they are able to do so. We have ongoing work assessing the strategy and early efforts to implement it. We plan to report in fall 2019. <1.1. Enterprise-Wide Threat Determination Needed to Help Leverage Resources and Inform Resource Tradeoffs> We reported in October 2017 that opportunities remain to enhance threat awareness across the entire biodefense enterprise, leverage shared resources, and inform budgetary tradeoffs among various threats and agency programs. As depicted in figure 1, we reported in October 2017 that key biodefense agencies, including DHS, DOD, HHS, USDA, and EPA, rely on intelligence and global surveillance information, scientific study of disease agent characteristics, and analysis to better understand threats and help make decisions about biodefense investments. These activities are often conducted to support the agencies mission or to understand a specific threat. Additionally, to facilitate collaboration among government partners, federal agencies with key roles in biodefense share biological threat information through many different mechanisms including interagency bodies, working groups at the agency and executive level, formalized agreements, colocation, joint projects and funding efforts, and shared expertise (see figure 2). The collaborative mechanisms in which the key agencies in our October 2017 review participated may facilitate information sharing in support of specific federal activities and in individual programs, or in response to specific biological events after they begin to unfold. However, as we reported in October 2017, there was no existing mechanism that could leverage threat awareness information to direct resources and set budgetary priorities across all agencies for biodefense. The nation faces many biological threats, including naturally occurring diseases that affect human, animal, and plant health, and biological weapons used by state or nonstate actors. Without a mechanism that is able to assess the relative risk from biological threats across all sources and domains, the nation may be limited in its ability to prioritize resources, defenses, and countermeasures against the most pressing threats.
The Strategy and NSPM-14 outline requirements for participating agencies that lay the ground work for a more systematic, cross- government examination of existing programs. The effort offers the potential for the nation to progress toward more integrated and enterprise-wide threat awareness and to use that information to identify opportunities to leverage resources, but this will take time and entails a change in the way participating agencies have traditionally operated. Because implementation of the strategy is in its early stages, it is too soon to assess how, if at all, it might address this challenge. <1.2. Ongoing Challenges to Fulfill Enhanced Situational Awareness and Data Integration Requirements> We have reported that DHS s National Biosurveillance Integration Center (NBIC), which was created to integrate data across the federal government with the aim of enhancing detection and situational awareness of biological events, has suffered from long-standing issues related to its clarity of purpose. In 2009, we reported that some of NBIC s partners were not convinced of the value that working with NBIC provided because NBIC s mission was not clearly articulated. We also reported that NBIC was not fully equipped to carry out its mission because it lacked key resources data and personnel from its partner agencies, which may have been at least partially the result of collaboration challenges it faced. In the 2009 report, we recommended that NBIC develop a strategy for addressing barriers to collaboration and develop accountability mechanisms to monitor these efforts. DHS agreed, and in August 2012 NBIC issued the NBIC Strategic Plan, to provide its strategic vision, clarify the center s mission and purpose, and articulate the value that NBIC seeks to provide to its partners, among other things. In September 2015, we reported that despite NBIC s efforts to collaborate with interagency partners to create and issue a strategic plan that would clarify its mission and efforts, a variety of challenges remained. We identified options for policy or structural changes that could help a federal data integrator like NBIC better fulfill its mission, given the complexity and difficulty inherent in achieving truly integrated situational awareness that makes new meaning out of disparate data, but we did not make specific recommendations. The National Biodefense Strategy identified biosurveillance data integration among several information sharing activities that need to be enhanced. Interagency attention to the goals, opportunities, and challenges of enterprise-wide data integration offers the potential for the nation to better define what kind of integrated situational awareness is possible, what it will take to effectively and efficiently achieve it, and what value it has. However, it remains to be seen how or whether the interagency efforts to implement the Strategy will be able to address ongoing situational awareness and data integration challenges. <1.3. Challenges Determining Optimal Biodetection Technology Solutions> Since 2012, we have reported that DHS has faced challenges in clearly justifying the need for the BioWatch program and its ability to reliably address that need (to detect aerosolized biological attacks). In September 2012, we found that DHS approved a next-generation BioWatch acquisition in October 2009 without fully developing knowledge that would help ensure sound investment decision making and pursuit of optimal solutions. 
We recommended that before continuing the acquisition, DHS reevaluate the mission need and possible alternatives based on cost-benefit and risk information. DHS concurred and in April 2014, canceled the acquisition because an alternatives analysis did not confirm an overwhelming benefit to justify the cost. DHS continues to rely on the currently-deployed BioWatch system for early detection of an aerosolized biological attack, but in 2015 we found that DHS lacked reliable information about the current system s technical capabilities to detect a biological attack, in part because DHS had not developed technical performance requirements for the system. We reported in September 2015 that DHS commissioned tests of the current system s technical performance characteristics, but without performance requirements, DHS could not interpret the test results and draw conclusions about the system s ability to detect attacks. At the time of our report in October 2015, DHS was considering upgrades to the Gen-2 system, but we recommended that DHS not pursue upgrades until it establishes technical performance requirements to meet a clearly defined operational objective and assesses the system against these performance requirements. DHS concurred and reported it was working to address the recommendation. DHS has since begun to acquire a different type of biodetection system, BioDetection 21 (or BD21), intended to replace BioWatch. BD21 is currently in a pilot phase; therefore we cannot yet determine how it will be implemented in the future or what decisions DHS will ultimately make regarding the existing BioWatch system. <1.3.1. Multiplex Point-of-Care Technologies> In August 2017, we reported that from a homeland security and public health perspective, threats of bioterrorism, such as anthrax attacks, and high-profile disease outbreaks, such as Ebola and emerging viruses like dengue, chikungunya, and Zika, highlight the continued need for diagnostic tests that provide early detection and warning about biological threats to humans. Multiplex point-of-care technologies are technologies that can simultaneously test for more than one type of human infectious disease pathogen from a single patient sample (such as blood, urine, or sputum) in one run at or near the site of a patient. Multiplex point-of-care technologies can be used for diagnosing different diseases, including more common diseases such as influenza, emerging infectious diseases, or diseases caused by select agents in minutes to a few hours. We further reported that, while potential benefits of these technologies include more appropriate use of antibiotics and improved ability to limit the spread of disease, among others, developers and users disagreed on the strength of evidence showing the extent of multiplex point-of-care technologies improvement on patient outcomes and identified the need for more clinical studies to establish the benefits of these technologies. Additionally, implementation challenges include lack of familiarity with such technologies, cost considerations, false positive results for rare diseases, and the challenges related to the regulatory review process for developers to get approval or clearance to market their technologies. 
The National Biodefense Strategy and its interagency governing leadership offer the potential for the nation to better define the role of detection technologies in a layered national biodefense capability to help those that pursue these technologies better articulate the mission needs and align requirements and concepts of operation accordingly. Because implementation of the strategy is in its early stages, it remains to be seen how or whether the interagency will be able engage on this issue in a way that helps to drive informed investment tradeoff decisions about technology alternatives. <1.4. Continued Oversight Needed to Enhance Biological Laboratory Safety and Security> <1.4.1. Addressing Safety Lapses at Laboratories> We along with Congress and various federal committees have, for many years, identified challenges and areas for improvement related to the safety, security, and oversight of high-containment laboratories. These laboratories conduct research on hazardous pathogens such as the Ebola virus and the bacteria that causes anthrax and toxins that may pose a serious threat to humans, animals, or plants. In 2008 and 2009, we found a proliferation of high-containment laboratories across the United States, with the number of such laboratories in the government, academic, and private sectors increasing since 2001. We recommended that the National Security Advisor name an entity charged with government-wide strategic evaluation of high-containment laboratories. National Security Staff disagreed with this recommendation. After reporting on these issues again in 2013, the Office of Science and Technology Policy implemented this recommendation. In January 2013, we also found that, for the subset of these laboratories subject to federal oversight, the oversight was duplicative, fragmented, and dependent on self-policing. We recommended that HHS s Centers for Disease Control and Prevention and USDA s Animal and Plant Health Inspection Service work with DHS and DOD to coordinate inspections and ensure consistent application of inspection standards; the departments generally agreed with our recommendations and noted various actions they had already taken, or planned to take, to coordinate inspection efforts, such as conducting joint inspections. More recently, in response to reported lapses in laboratory safety at HHS and DOD in 2014 and 2015, we examined how federal departments oversee their high-containment laboratories. In March 2016, we found that most of the 8 departments and 15 agencies that we reviewed had policies that were not comprehensive or were not up to date. Also, while the departments and agencies we reviewed primarily used inspections to oversee their high-containment laboratories, some of them were not routinely reporting inspection results, laboratory incidents, and other oversight activities to senior officials. We made 33 recommendations in total, including that departments develop and update policies to include missing elements and ensure that oversight activity results are reported to senior officials. To date, 12 of the 33 recommendations have been implemented including updating policies and reporting requirements. We continue to monitor agency progress in implementing the 21 that remain open. 
In response to several incidents involving the shipment of improperly inactivated pathogens, in August 2016 we reported on issues related to the inactivation of pathogens in high-containment laboratories and found that both the science and the federal guidance around pathogen inactivation are limited and inconsistently implemented. Additionally, we found that federal officials did not know how many incomplete inactivation incidents have occurred because laboratories do not have to identify them in incident reports, and are only required to report incidents involving certain pathogens. We made 11 recommendations to HHS and USDA that they improve the oversight of inactivation by revising reporting forms, improving guidance for development and validation of inactivation protocols, and developing consistent criteria for enforcement of incidents involving incomplete inactivation. To date, 6 of the 11 recommendations have been addressed and we continue to monitor the 5 that remain open. Safety lapses continued to occur at laboratories in the United States that conduct research on hazardous pathogens, raising concern about the efficacy of federal oversight. In October 2017, we found that the Federal Select Agent Program jointly managed by HHS and USDA oversees laboratories handling of certain hazardous pathogens known as select agents, but the program does not fully meet all key elements of effective oversight. For example, the Federal Select Agent Program was not independent from all laboratories it oversees, and it had not assessed risks posed by its current structure or the effectiveness of its mechanisms to reduce organizational conflicts of interest. We made 11 recommendations for the Federal Select Agent Program, including to (1) assess risks from its current structure and the effectiveness of its mechanisms to reduce conflicts of interest and address risks as needed, (2) assess the risk of activities it oversees and target reviews to high-risk activities, and (3) develop a joint workforce plan; to-date, 5 of 11 recommendations have been addressed and we continue to monitor the progress for the 6 that remain open. <1.4.2. DOD s Biosafety and Biosecurity Program> In September 2018 we found that DOD had made progress by taking a number of actions to address the 35 recommendations from the Army s 2015 investigation report on the inadvertent shipment of live anthrax; however, DOD had not yet developed an approach to measure the effectiveness of these actions. Additionally, we reported that although DOD had implemented a Biological Select Agents and Toxins Biosafety and Biosecurity Program to improve management, coordination, safety, and quality assurance, DOD had not developed a strategy and implementation plan for managing the program. Also, we found that the Army had not fully institutionalized measures to ensure that its biological test and evaluation mission remains independent from its biological research and development mission so that its test and evaluation procedures are objective and reliable. Finally, DOD had not completed a required study and evaluation of its Biological Select Agents and Toxins infrastructure that will affect the future infrastructure of the Biological Select Agents and Toxins Biosafety and Biosecurity Program. DOD officials had no estimated time frames for when DOD will complete the study and evaluation. 
We recommended that DOD develop an approach to assess the effectiveness of the recommendations, a strategy and implementation plan for its Biological Select Agents and Toxins Biosafety and Biosecurity Program, measures to ensure independence, and time frames to complete a study. To date, all of these recommendations remain open. In agency comments, DOD concurred with all four of our recommendations and discussed the actions the department intended to take to address them, including finalizing the development of a long-term strategy and implementation plan by September 1, 2019. The National Biodefense Strategy highlights the need for continuous improvement of biosafety and biosecurity for laboratories and other facilities. However, it is not yet known how, if at all, the strategy will drive interagency partners to develop additional oversight or other practices to mitigate the risk of bioincidents at high containment laboratories, because implementation of the strategy is in its early stages. <1.5. Challenges Building and Maintaining Emerging Infectious Disease Surveillance> We have reported that establishing and sustaining biosurveillance capabilities can be difficult for a myriad of reasons. For example, maintaining expertise in a rapidly changing field is difficult, as is the challenge of accurately recognizing the signs and symptoms of rare or emerging diseases. Additionally, we reported in October 2011 that funding targeted for specific diseases does not allow for focus on a broad range of causes of morbidity and mortality, and federal officials have said that the disease-specific nature of funding is a challenge to states ability to invest in core biosurveillance capabilities. Further, we reported in May 2018 that although the awards funded by supplemental appropriations have allowed state and local public health departments, laboratories, and hospitals to surge during a threat for example, the H1N1influenza and Zika viruses most of the 10 non-federal stakeholders we interviewed, as well as HHS officials said that the timing of these awards can result in challenges to carrying out preparedness and response activities during infectious disease threats. An effective medical response to a biological event depends in part on the ability of individual clinicians and other professionals to identify, accurately diagnose, and effectively treat diseases, including many that may be uncommon. For example, in May 2017, we reported that because Zika virus disease was a newly emerging disease threat in the United States and relatively little was known about the virus prior to 2016, HHS and state and local public health agencies were not fully equipped with information and resources needed for a rapid response at the outset of the recent outbreaks. They faced challenges establishing and implementing surveillance systems for Zika virus disease and infection and its associated health outcomes. Additionally, in March 2019, we reported that USDA would likely face surveillance challenges that could delay detection of the first cases in a foot-and-mouth disease outbreak in livestock, which could have a devastating impact on our economy and trade agreements. For example, foot-and-mouth disease can spread without detection as signs can be difficult to notice in some species, take up to 4 days to manifest after an animal is infected, and infection in wild animals could go undetected and continue to spread the virus. 
In 2011, while reporting on nonfederal biosurveillance efforts, we found state and local agriculture, public health, and wildlife departments were completely or largely dependent on federal funding for biosurveillance- related activities. At that time, we also reported that the common federal approach of disease-specific funding for example, West Nile virus limited nonfederal efforts to develop core capabilities that could provide surveillance capacity that cut across health threats and for emerging- disease threats. According to federal, state, and local officials, early detection of potentially serious disease indications nearly always occurs first at the local level, making the personnel, training, systems, and equipment that support detection at the state and local level a cornerstone of our nation s biodefense posture. In May 2018, we reported that officials from HHS told us that their grant awards funded by annual appropriations are intended to establish and strengthen emergency preparedness and capacity building, but may not fully support the need for surge capacity that states and other jurisdictions require in order to respond to an infectious disease threat. We reported that during recent infectious disease threats, HHS received supplemental appropriations to respond to Zika in 2016, Ebola in 2014, and H1N1 pandemic influenza in 2009. However, as mentioned above, officials also said that the timing of these awards can result in challenges to carrying out preparedness and response activities during infectious disease threats. HHS officials, as well as all 10 selected non-federal stakeholders, also noted in May 2018 that a funding mechanism to fund rapid response activities when additional support is needed would be beneficial and could help address timing challenges. However, we reported that concerns were also raised about (1) when such a mechanism for funding infectious disease threats should be used, and (2) that any type of emergency fund should not be used to make up for a lack in investments at all levels of government for current preparedness and capacity-building activities. We did not make recommendation as part of this work. However, part of our May 2018 reporting included perspectives from various stakeholders on such a fund. Stakeholders cited six factors that may be considered for a new emergency response fund: (1) who determines when to use an emergency fund, (2) what factors would trigger the use of an emergency fund, (3) methods to determine the amount of available funding, (4) activities to fund with an emergency fund, (5), accountability for use of an emergency fund, and (6) whether an emergency fund would be specific to infectious disease threats. The National Biodefense Strategy and its interagency governance structure offer the opportunity to design new approaches to identifying and building a core set of surveillance and response capabilities for emerging infectious diseases. However, it is too early into implementation to determine how effective, if at all, the new strategy will be in addressing this challenge. How and to what extent implementation of the Strategy is able to efficiently leverage and effectively sustain capacity across both nonfederal and federal stakeholders will affect how prepared the nation is to more quickly gear up for whatever challenges emerge when outbreaks of previously non-endemic diseases threaten the nation. Thank you, Chairman Lynch, Ranking Member Hice, and Members of the Subcommittee. This concludes my prepared statement. 
I would be happy to respond to any question you may have at this time. <2. GAO Contact and Staff Acknowledgments> If you or your staff has any questions concerning this testimony, please contact Christopher P. Currie at (404) 679-1875 or curriec@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Kathryn Godfrey (Assistant Director), Susanna Kuebler (Analyst-In-Charge), Nick Bartine, Jeffrey Cirillo, Michele Fejfar, Eric Hauswirth, Tracey King, Dawn Locke, and Adam Vogt. Key contributors for the previous work that this testimony is based on are listed in each product. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. Why GAO Did This Study
Catastrophic biological events have the potential to cause loss of life, and sustained damage to the economy, societal stability, and global security. The biodefense enterprise is the whole combination of systems at every level of government and the private sector that contribute to protecting the nation and its citizens from potentially catastrophic effects of a biological event. Since 2009, GAO has identified cross-cutting issues in federal leadership, coordination, and collaboration that arise from working across the complex interagency, intergovernmental, and intersectoral biodefense enterprise. In 2011, GAO reported that there was no broad, integrated national strategy that encompassed all stakeholders with biodefense responsibilities and called for the development of a national biodefense strategy. In September 2018, the White House released a National Biodefense Strategy.
This statement discusses GAO reports issued from December 2009 through March 2019 on various biological threats and biodefense efforts, and selected updates to BioWatch recommendations made in 2015. To conduct prior work, GAO reviewed biodefense reports, relevant presidential directives, laws, regulations, policies, strategic plans; surveyed states; and interviewed federal, state, and industry officials, among others.
What GAO Found
GAO's past work has identified a number of challenges related to the nation's ability to detect and respond to biological events that transcend what any one federal department or agency can address on its own. They include, among others:
Assessing enterprise-wide threats. In October 2017, GAO found there was no existing mechanism across the federal government that could leverage threat awareness information to direct resources and set budgetary priorities across all agencies for biodefense. GAO said at the time that the pending biodefense strategy may address this.
Situational awareness and data integration. GAO reported in 2009 and 2015 that the Department of Homeland Security's (DHS) National Biosurveillance Integration Center (NBIC)—created to integrate data across the federal government to enhance detection and situational awareness of biological events—has suffered from longstanding challenges related to its clarity of purpose and collaboration with other agencies. DHS implemented GAO's 2009 recommendation to develop a strategy, but in 2015 GAO found NBIC continued to face challenges, such as limited partner participation in the center's activities.
Biodetection technologies. DHS has faced challenges in clearly justifying the need for and establishing the capabilities of the BioWatch program—a system designed to detect an aerosolized biological terrorist attack. In October 2015, GAO recommended that DHS not pursue upgrades until it takes steps to establish BioWatch's technical capabilites. While DHS agreed and described a series of tests to establish capabilities, it continued to pursue upgrades.
Biological laboratory safety and security. Since 2008, GAO has identified challenges and areas for improvement related to the safety, security, and oversight of high-containment laboratories, which, among other things, conduct research on hazardous pathogens—such as the Ebola virus. GAO recommended that agencies take actions to avoid safety and security lapses at laboratories, such as better assessing risks, coordinating inspections, and reporting inspection results. Many recommendations have been addressed, but others remain open, such as finalizing guidance on documenting the shipment of dangerous biological material.
In September 2018, the White House issued the National Biodefense Strategy and associated plans, which could help to address some of the ongoing challenges GAO has previously identified. However, because implementation of the strategy is in early stages, it remains to be seen how or to what extent the agencies responsible for implementation will institutionalize mechanisms to help the nation make the best use of limited biodefense resources. GAO is currently reviewing the strategy and will report out later this year.
What GAO Recommends
GAO has made numerous agency recommendations in its prior reports designed to address the challenges discussed in this statement. As of June 2019, agencies have taken steps to address many of these, and GAO is monitoring ongoing efforts.
<1. Background> Signed into law on May 9, 2014, the DATA Act expands on previous federal transparency legislation. It requires a greater variety of data related to federal spending by agencies, such as budget and financial information, to be disclosed and agency spending information to be linked to federal program activities so that policymakers and the public can more effectively track federal spending through its life cycle. The act gives OMB and Treasury responsibility for establishing government-wide financial data standards for any federal funds made available to, or expended by, federal agencies. As Treasury and OMB implemented the DATA Act s requirement to create and apply data standards, the overall data standardization effort has been divided into two distinct, but related, components: (1) establishing definitions which describe what is included in each data element with the aim of ensuring that information will be consistent and comparable and (2) creating a data exchange standard with technical specifications that describe the format, structure, tagging, and transmission of each data element. Accordingly, OMB took principal responsibility for developing policies and defining data standards. Treasury took principal responsibility for the technical standards that express these definitions, which federal agencies use to report spending data for publication on USAspending.gov. Under the act, agencies are required to submit complete and accurate data to USAspending.gov, and agency-reported award and financial information is required to comply with the data standards established by OMB and Treasury. See app. V for more information on the sources of data and process for submitting data under the DATA Act. <1.1. GAO Reports on Data Quality and Data Governance> Since the DATA Act s enactment in 2014, we have issued a series of reports and made recommendations based on our ongoing monitoring of DATA Act implementation. In November 2017, we issued our first report on data quality, which identified issues with, and made related recommendations about, the completeness and accuracy of the Q2 FY2017 data that agencies submitted, agencies use of data elements, and Treasury s presentation of the data on Beta.USAspending.gov. In addition, as part of our ongoing monitoring of DATA Act implementation, and in response to provisions in the DATA Act that call for us to review IG reports and issue reports assessing and comparing the quality of agency data submitted under the act and agencies implementation and use of data standards, we issued a report in July 2018, based on our review of the IG reports of the quality of agencies data for Q2 FY2017. Our prior reports identified significant data quality issues and challenges that may limit the usefulness of the data for Congress and the public. These data quality challenges underscore the need for OMB and Treasury to make further progress on addressing our 2015 recommendation that they establish clear policies and processes for developing and maintaining data standards that are consistent with key practices for data governance. Such policies and processes are needed to promote data quality and ensure that the integrity of data standards is maintained over time. In March 2019, we reported on the status of OMB s and Treasury s efforts to establish policies and procedures for governing data standards.
We found that OMB and Treasury have established some procedures for governing the data standards established under the DATA Act, but a formal governance structure has yet to be fully developed. Therefore, we made recommendations to OMB to clarify and document its procedure for changing data definition standards, and to ensure that related policy changes are clearly identified and explained. <2. Data Quality Has Improved, but Challenges with Completeness, Accuracy, and the Implementation and Use of Data Standards Remain> For Q4 FY2018, 107 agencies, including all 24 CFO Act agencies and 83 non-CFO Act agencies, determined they were required to submit data, or they would voluntarily submit data, under the DATA Act. Of these 107 agencies, 96 submitted data for Q4 FY2018. This is an increase over the initial submissions for Q2 FY2017 when 78 agencies submitted data that covered 91 federal entities. This represents an improvement in the number of agencies reporting. However, not all the required files submitted by agencies were complete, and the data submitted were not always accurate (i.e., consistent with agency source records and other authoritative sources and applicable laws and reporting standards). In addition, we found that some CFO Act agencies did not include certain financial assistance programs that made awards during fiscal year 2018 in their submissions. Finally, some agencies continued to have challenges in reporting some data elements in accordance with standards. <2.1. Agencies That Submitted Data Were Generally Timely, but Several Agencies Failed to Report All or Some of Their Data> While the total number of agencies that submitted data for Q4 FY2018 increased compared to Q2 FY2017, more agencies submitted their data for Q4 FY2018 after the due date compared to Q2 FY2017. In addition, the data for Q4 FY2018 available on USAspending.gov are not complete because some agencies failed to submit data or submitted partial data. Fourteen agencies submitted late. Agencies were required to submit their DATA Act files for Q4 FY2018 by November 14, 2018 45 days after the end of the quarter. Eighty-two agencies submitted their data on time. These 82 agencies represented about 84 percent of the total obligations government-wide reported to Treasury on the SF 133 for Q4 FY2018. Fourteen agencies submitted their data after the November 14, 2018 due date. Our prior review of data submitted for Q2 FY2017 found that one agency submitted data after the due date. Eleven agencies did not submit data. Eleven non-CFO Act agencies did not submit any DATA Act files for Q4 FY2018. By contrast, in reviewing Q2 FY2017 data, we identified 28 agencies that determined they should have reported data under the DATA Act, but did not. Agencies told us that they did not submit data for Q4 FY2018 because (1) there was confusion or miscommunication between the agency and its shared service provider about who was responsible for reporting the data; (2) their officials had determined the agency was not required to report; (3) new staff were unfamiliar with DATA Act requirements; and (4) technical or systems issues, such as a financial system upgrade in process, prevented them from reporting their data. Multiple agencies submitted blank files. Of the 96 agencies that submitted DATA Act files for Q4 FY2018, 35 non-CFO Act agencies submitted the file that links budget and award information (i.e., File C) or the file containing procurement data (i.e., File D1) that did not contain any data (i.e., files were blank). 
Specifically, 34 non-CFO Act agencies submitted a blank File D1, which contains procurement data, and 16 of those 34 also submitted a blank File C. Another non-CFO Act agency submitted a blank File C only. File C data are particularly important to oversight and transparency because they link budget and award information, as required by the DATA Act. Without this linkage, policymakers and the public may be unable to effectively track federal spending because they would be unable to see obligations at the award and object class level. Agencies told us they submitted files without data for reasons including: (1) their data was submitted by and comingled with their shared service provider s DATA Act submissions; (2) they did not have award activity to report or award activity was below the micro-purchase threshold for reporting; and (3) they do not use the Federal Procurement Data System- Next Generation or their systems were unable to produce the data necessary to create the files. We did not assess the completeness of File D1 in 2017, but we found that 13 agencies submitted a blank File C in Q2 FY2017. Of these 13 agencies, two were CFO Act agencies with large amounts of award activity the Departments of Defense (DOD) and Agriculture (USDA) both of which did submit a File C with data for Q4 FY2018. Two agencies submitted incomplete files. DOD and Treasury submitted all seven required DATA Act files for Q4 FY2018, but the data in some of those files were not complete. According to DOD officials, its File C submission for Q4 FY2018 included data from six of its 18 accounting systems. DOD officials said they are working to report data from all 18 systems in File C by the fourth quarter of fiscal year 2019. They said prior to Q4 FY2018, OMB granted DOD extensions for reporting financial and payment information in File C, as permitted by the act. DOD officials said the extensions allowed DOD to focus on financial statement audit readiness, build a single source tool from which File C obligation data could be aligned with procurement and grant data, and coordinate with the intelligence community on concerns over increased transparency. According to Treasury officials, the agency s data submission did not include the spending of one of its component organizations the Treasury Executive Office for Asset Forfeiture, Equitable Sharing Program because OMB guidance does not allow for reporting aggregate transactions when Primary Place of Performance, a required data element, is at a multistate or nationwide level. According to Treasury officials, Treasury is working with OMB and the Treasury DATA Act Program Management Office to allow for these types of transactions to be reported. In our 2017 review, we identified similar challenges with the completeness of agencies DATA Act submissions for Q2 FY2017 and made recommendations to Treasury and OMB to improve the completeness of data on USAspending.gov. We recommended that Treasury reasonably assure that ongoing monitoring controls to help ensure the completeness and accuracy of agency submissions are designed, implemented, and operating as intended. Treasury agreed with this recommendation. In September 2019, Treasury officials told us that they are working to formalize a process for monitoring agency submissions that will include emailing reminders to agencies prior to submission deadlines, following up with agencies that do not submit required data on time, and forwarding a list of non-compliant agencies to OMB. 
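To illustrate the kind of submission monitoring described above, the sketch below flags agencies that did not submit, submitted after the 45-day deadline, or submitted a File C or File D1 containing no records. It is a minimal illustration only: the agency names, field names, and row counts are hypothetical, it does not represent the logic of Treasury s broker, and a full check would also cover the other required files and reconcile reported totals against authoritative sources such as the SF 133.

```python
from datetime import date

# 45 days after the end of Q4 FY2018.
SUBMISSION_DEADLINE = date(2018, 11, 14)

# Hypothetical, simplified view of agency submissions; not the broker's schema.
submissions = [
    {"agency": "Agency A", "submitted_on": date(2018, 11, 13),
     "file_c_rows": 1250, "file_d1_rows": 300},
    {"agency": "Agency B", "submitted_on": date(2018, 11, 20),
     "file_c_rows": 0, "file_d1_rows": 0},        # late, and both files blank
    {"agency": "Agency C", "submitted_on": None,   # never submitted
     "file_c_rows": None, "file_d1_rows": None},
]

def review_submission(sub):
    """Return a list of completeness flags for one agency's submission."""
    flags = []
    if sub["submitted_on"] is None:
        flags.append("no submission received")
        return flags
    if sub["submitted_on"] > SUBMISSION_DEADLINE:
        flags.append("submitted after the 45-day deadline")
    if sub["file_c_rows"] == 0:
        flags.append("File C (award-level financial data) is blank")
    if sub["file_d1_rows"] == 0:
        flags.append("File D1 (procurement awards) is blank")
    return flags

for sub in submissions:
    issues = review_submission(sub)
    status = "; ".join(issues) if issues else "no completeness flags"
    print(f'{sub["agency"]}: {status}')
```

Flags of this kind would support the reminder and follow-up process Treasury described, including the list of non-compliant agencies forwarded to OMB.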
We also recommended that OMB continue to provide ongoing technical assistance that significantly contributes to agencies making their own determinations about their DATA Act reporting requirements and that it monitor agency submissions. While OMB generally agreed with our recommendation, it has not yet taken steps to monitor agency submissions to help ensure their completeness. In October 2019, OMB staff told us that they believe monitoring agency submissions is not their responsibility. During this review we asked agencies why they did not submit data for Q4 FY2018. Subsequently, five of them submitted their data late (out of the initial 18 agencies that had not submitted data), demonstrating that simple monitoring tasks such as a follow up call or email can result in actions taken by the agencies. To address ongoing challenges with the completeness of agencies DATA Act submissions, we continue to maintain that Treasury and OMB should monitor agencies submissions to help ensure the completeness and accuracy of those data submissions. See app. IV for more information on the status of these recommendations. Agencies did not report awards made to 39 financial assistance programs. Seven of the 24 CFO Act agencies did not report spending for at least one financial assistance program that made awards during fiscal year 2018. File D2 contains detailed information about individual financial assistance awards. We compared the spending data reported by the 24 CFO Act agencies in File D2 against the Assistance Listings, formerly known as the Catalog of Federal Domestic Assistance (CFDA), a government-wide compendium of federal programs, projects, services, and activities that provide assistance or benefits to the American public. As of March 2019, the Assistance Listings website contained 2,926 programs for the CFO Act agencies. Of these, 39 programs (approximately 1 percent) were not included in the Q4 FY2018 DATA Act submissions, even though these agencies stated that they made reportable awards during fiscal year 2018. In comparison, in July 2017, the CFDA listed 2,219 programs for the CFO Act agencies. Of these 2,219 programs, 160 programs (approximately 7 percent) were not included in the Q2 FY2017 DATA Act submissions even though they made reportable awards. The remaining programs either reported at least one award or did not make awards that were subject to reporting. To provide a sense of magnitude of the underreporting, we obtained estimates of the total projected annual spending for these programs for fiscal year 2018 from the Assistance Listings website and applicable agencies. Based on the estimated obligations, the 39 programs account for approximately $11.5 billion in estimated annual obligations in fiscal year 2018. The omitted amounts largely resulted from USDA s failure to report 27 programs representing more than 99 percent of the estimated annual obligations. According to USDA officials, USDA did not submit awards for some of these programs because it maintains that the information in legacy reporting systems is incompatible with the Treasury broker. USDA is working on solutions to resolve identified reporting challenges with its financial and awards systems. Treasury took steps to address findings on completeness issues for financial assistance programs we reported in 2017. At Treasury s request, we provided details regarding the programs that were omitted from the USAspending.gov database for fiscal year 2017, which Treasury shared with the appropriate agencies. 
In our review of fiscal year 2018 data, we found that only nine of these programs did not report. <2.2. Budgetary and Award Data Accuracy Has Improved> Based on the results of testing performed on a sample of budgetary and award transactions, we found that the overall completeness within individual transactions and accuracy of the reported data was high. We selected a projectable government-wide sample of 405 transactions and tested 41 data elements and subelements associated with them for completeness and accuracy. We determined data completeness within the transaction based on whether the element included a value and whether the value was appropriate. We determined accuracy of data elements by determining consistency with agency source records as well as applicable laws and reporting standards. Specifically, based upon our sample we estimate with a 95 percent confidence level that all the data in the population were between 99 and 100 percent complete and between 90 and 93 percent accurate. We further analyzed accuracy at the transaction and individual data element levels as follows: 1. Transaction level, which describes the extent to which all applicable data elements within an individual transaction are complete and consistent with agency source records, and applicable laws and reporting standards. 2. Data element level, which describes the extent to which the data elements and subelements used for reporting budgetary and award information were consistent with agency source records and applicable laws and reporting standards. Consistency of transactions. For data submitted in Q4 FY2018, we found that the level of consistency differed between budgetary and award transactions, but both improved compared to the data we sampled for our review of Q2 FY2017 data. Based on our projectable government-wide sample of Q4 FY2018 data, we estimate with 95 percent confidence that between 84 and 96 percent of the budgetary transactions and between 24 and 34 percent of the award transactions in the USAspending.gov database were fully consistent with agency sources. We considered a transaction to be fully consistent if the information contained in the transaction was consistent with agency records for every applicable data element. This result represents an increase in consistency from what we reported in 2017, when we estimated that between 56 and 75 percent of budgetary transactions were fully consistent, and between 0 and 1 percent of award transactions were fully consistent. In addition to the transactions that were fully consistent, we estimate that 94 to 100 percent of budgetary transactions and 62 to 72 percent of award transactions in the population were significantly consistent. We considered a transaction significantly consistent if 90 percent or more of the data elements and subelements in the transaction were consistent with agency source records and applicable laws and reporting standards. Consistency of data elements. We also found improvements in the consistency of budgetary and award data elements with agency records, and applicable laws and reporting standards. As shown in figure 1, more data elements were significantly consistent and fewer were significantly inconsistent in Q4 FY2018 than Q2 FY2017. We considered a data element to be significantly consistent if the estimated consistency rate was at least 90 percent. Five of six of the budgetary data elements were significantly consistent in Q4 FY2018, compared to four of seven data elements in our 2017 review. 
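Interval estimates of the kind reported above (for example, 84 to 96 percent of budgetary transactions fully consistent) come from standard statistical estimation on a probability sample. The sketch below shows a normal-approximation 95 percent confidence interval for a proportion drawn from a simple random sample; it is a simplification, since our estimates reflect a stratified design with survey weights, and the counts used are hypothetical.

```python
"""Minimal sketch of a 95 percent confidence interval for an estimated
proportion, using the normal approximation for a simple random sample.
Sample counts are hypothetical; the report's estimates use a stratified design."""

import math

def proportion_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Return a (lower, upper) 95 percent confidence interval for a proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), min(1.0, p + half_width)

# Example: 87 of 97 sampled budgetary records found fully consistent (hypothetical).
low, high = proportion_ci(87, 97)
print(f"Point estimate: {87/97:.0%}; 95% CI roughly {low:.0%} to {high:.0%}")
```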
We also found improvements in the consistency of award data elements and subelements compared to our 2017 review. Eighteen of the 35 award data elements and subelements in our sample were significantly consistent in Q4 FY2018, compared to only one of 26 data elements and subelements we tested in our 2017 review. See figure 2 for the data elements and subelements in our sample that were significantly consistent. We considered a data element significantly inconsistent if it was either not consistent with agency records or incomplete at least 10 percent of the time. We found that no budgetary data elements were significantly inconsistent, which is an improvement from our 2017 review where we found one budgetary data element Obligation significantly inconsistent. Similarly, we found fewer significantly inconsistent award data elements compared to our 2017 review. Specifically, we found five of 35 award data elements and subelements significantly inconsistent in Q4 FY2018, compared to 11 of 26 in our 2017 review. See figure 3 for the data elements and subelements in our sample that were significantly inconsistent. Unverifiable data elements. We found no data elements that exhibited a significant amount of unverifiable information incomplete or inadequate agency source records that prevented us from determining whether the data element was significantly consistent or inconsistent. See app. III for details. While we tested the consistency of agency records and applicable laws and reporting standards for the 41 data elements and subelements previously discussed, we performed a different test for three other data elements that contained a value derived by FPDS-NG and FABS. These data elements and subelements Legal Entity County Name, Primary Place of Performance County Name, and Primary Place of Performance Congressional District were assessed against the other sources from which they were derived, such as data from the U.S. Census Bureau and house.gov, rather than agency records. We found that each were neither significantly consistent nor significantly inconsistent with their sources. See appendix III, table 5 for details. <2.3. Overall Data Quality Is Limited by Challenges in the Implementation and Use of Some Data Standards> The DATA Act requires OMB and Treasury to establish data standards to produce consistent and comparable reporting of federal spending data. While we found improvements in the overall completeness and accuracy of the data when compared with the results of our 2017 review, we identified persistent challenges with the implementation and use of two award data elements Award Description and Primary Place of Performance Address that limit the usefulness of these data. We previously reported that these data elements are particularly important to achieving the transparency goals envisioned by the DATA Act because they inform the public what the federal government spends money on and where it is spent. In our sample results, we found agencies reported values for Award Description that were significantly inconsistent with agency sources and with the established standard for reporting this data element which is defined by the DATA Act data standard as a brief description of the purpose of the award. Based on our testing of a representative sample of Q4 FY2018 transactions, we estimate that the Award Description data element was inconsistent with agency source records or contained information that was inconsistent with the established standard in 24 to 35 percent of awards. 
While this represents an improvement over the results we reported for this data element in 2017, we found in our testing that agencies continue to face challenges in reporting Award Description consistent with the established standard. See figure 4 for several examples of the Award Description data submitted by agencies in our sample, which illustrates the range of agency interpretations of this data element from understandable to incomprehensible. Lengthy, technical description. For example, the National Aeronautics and Space Administration (NASA) included several paragraphs for the description of procurement and financial assistance award transactions in our sample that were long and highly technical. These descriptions did not meet the data standard because they contained acronyms, jargon, and other technical terminology that might be challenging for others outside the agency to understand. NASA officials said they use the Award Description field internally to search for vendors when making awards for similar services. Thus, they instructed contract officers to include as much information as possible to maximize the Award Description field for later use. As of June 2019, the General Services Administration decreased the character limit for reporting Award Description in FPDS-NG for procurement awards from 4,000 characters to 250 characters to discourage agencies from copying and pasting sizeable portions of a contract s contents rather than thoughtfully including a brief description of what is being procured. NASA officials said that the new maximum will limit the flexibility to search for contractors. They are seeking alternatives for these searches. No description provided. The Department of Education reported unknown title for the Award Description for the majority of the financial assistance award transactions in our sample. This does not meet the data standard because it does not provide any information about the award. Agency officials said the Award Description is provided by the applicant and if one is not provided, their system automatically will populate it as unknown title. Geographic information. DOD reported location information for the Award Description in several transactions in our sample. The locations reported in the description field were not understandable except to agency officials. For example, one field contained the text 4542874050!TRBO REGION 1. DOD officials explained that this description includes the part number for a medical supply item and the region of the country and is auto populated by an agency system. While the description is consistent with agency sources, it is not easily understood by the public. The Defense Federal Acquisition Regulation Supplement Procedures, Guidance, and Information provides instructions to use plain English as much as possible, and to explain numbers and acronyms. DOD officials said the agency is investigating methods to improve how similar transactions are auto-populated. Description of modification. The Department of Homeland Security (DHS) used the Award Description field to describe modifications to contracts instead of the good or service being procured. Specifically, DHS reported de-obligate excess funds and closeout for a modification to a contract that procured information technology products and services. 
DHS officials said reporting the nature of the modification, rather than the original purpose of the award, is consistent with practices used in contract writing systems across the federal government and is intended to inform the public of changes made to the contract by the modification. DHS is working with Treasury to clarify how this information is displayed on USAspending.gov and suggested that additional information on how award descriptions for modifications are to be reported would be beneficial and should be provided in the DAIMS. We found that some individual agencies have taken steps to provide additional guidance on Award Description to ensure agency personnel are providing information that is consistent with the standard. Four agencies in our sample had additional guidance for their contracting officers. For example, officials from the Department of Veterans Affairs (VA) said that in June 2019, VA trained hundreds of members of its contracting workforce with curriculum that included an interactive game to illustrate how to provide a brief description of an award that meets the standard for reporting this information. Officials from 11 agencies said additional guidance on Award Description could help ensure those entering the data understand the standard definition and report appropriate information, for example, by providing examples of award definitions that meet the standard. In the absence of government-wide guidance, agencies have reported values that are inconsistent with the data standard and not comparable between agencies. Agencies also reported several challenges with reporting Primary Place of Performance Address for nonroutine locations, which OMB and Treasury defined as where the predominant performance of the award will be accomplished. Taking into account each of its subelements, we found the information regarding Primary Place of Performance Address had higher rates of inconsistency than the majority of the data elements in our review. Multiple subrecipients. Agency officials reported challenges with identifying Primary Place of Performance Address in cases where an award is made to a recipient that further distributes the funding to subrecipients. For example, the U.S. Agency for Global Media (USAGM) awards Radio Free Europe/Radio Liberty a grant that funds work globally. Officials from USAGM said that as a U.S. not-for-profit organization, Radio Free Europe/Radio Liberty, maintains corporate headquarters in Washington, D.C., but, as an international media organization, maintains many offices abroad. USAGM reports the Primary Place of Performance Address as Washington, D.C. because it is where the organization maintains its corporate office, but much of the performance takes place in other locations. In another example, the Department of Health and Human Services (HHS) Centers for Medicare and Medicaid Services (CMS) reports the Primary Place of Performance Address for Medicare payment data as the county of its payment processing centers, even though each processing center makes payments to recipients in multiple states and counties. CMS contracts with Medicare Administrative Contractors (MAC) to process and pay Medicare fee-for-service claims. For each type of Medicare claim, the number of jurisdictions and the number of MACs that handle that type of claim vary. At the time of our review, there were 12 jurisdictions for Medicare Part A and B claims handled by MACs. As shown in figure 5, the jurisdictions are made up of multiple states. 
In addition to the MAC jurisdictions for Medicare Part A and Part B claims, there were four home health and hospice jurisdictions and four durable medical equipment jurisdictions. Thus, there were 20 MAC jurisdictions, almost all of which covered multiple states. As a result, spending for Medicare payments is reported in a small number of counties rather than where the beneficiaries of Medicare services are located. Software. Officials from three agencies in our review said that it is challenging to determine Primary Place of Performance Address for software licenses when purchased as a service. For example, there could be multiple performance locations, but none of these locations is predominant. Large or undefined locations. Officials from the agencies in our review reported challenges in meeting the standard for reporting large or undefined performance locations. For example, officials from the Delta Regional Authority said that it was difficult, at times, to determine the Primary Place of Performance Address for watersheds because they can cover a large area and cross multiple jurisdictions. Officials from the National Science Foundation (NSF) said that for projects that may not have a single location, they report the location that corresponds to the research asset's physical location or the primary site. For example, for a research vessel, NSF officials report the awardee's address, which is generally the vessel's homeport, as the Primary Place of Performance Address. In another example, NASA officials said that when they let contracts for services performed on the International Space Station, they report the command center in Houston as the Primary Place of Performance Address. For some of these nonroutine locations, the FPDS-NG data dictionary provides guidance for procurement transactions. For example, for services being performed in oceans and seas, it directs agencies to report the closest U.S. city. For services being performed in the atmosphere or space, the FPDS-NG data dictionary directs agencies to report the location from which the equipment conducting the services was launched. However, the DATA Act Information Model Schema (DAIMS) Data Dictionary does not include the same level of detailed guidance for reporting financial assistance awards and directs agency officials to report the location where the predominant performance of the award will be accomplished. Officials from several agencies said it would be helpful for OMB and Treasury to issue guidance on Primary Place of Performance Address for financial assistance awards to help agencies report this information consistent with the established standard. In the absence of more specific guidance, agencies are using different decision rules to identify the Primary Place of Performance Address for financial assistance awards, which could limit the usefulness of this information to the public. We previously identified similar issues with Award Description and Primary Place of Performance Address on USAspending.gov. We recommended that OMB and Treasury provide agencies with additional guidance to address potential clarity, consistency, or quality issues with the definitions for specific data elements, including Award Description and Primary Place of Performance Address, and that they clearly document and communicate these actions to agencies providing these data as well as to end users. OMB issued guidance in June 2018 that provides clarification on reporting requirements for some data element definitions.
However, additional guidance is needed to clarify how agencies are to report spending data using standardized data element definitions that may be open to more than one interpretation, and then broadly communicate this information to agencies and the public. We continue to believe additional guidance is needed to facilitate agency implementation of certain data definitions to produce consistent and comparable information. Given the challenges we identified in this report and in previous reports with Award Description and Primary Place of Performance Address, we have concerns about whether the guidance OMB issued provides sufficient detail for agencies to consistently interpret and implement the definitions. See app. IV for more information on the status of this recommendation. <2.4. Known Data Limitations Are Not Transparent to Users of USAspending.gov> Treasury does not fully disclose all known data limitations on USAspending.gov. According to OMB guidance, federal agencies should be transparent about the quality of information and identify the limitations of the data they disseminate to the public. Further, Treasury s Information Quality Guidelines state that, when disseminating information to the public, information should be presented within the proper context to disseminate information in an accurate, clear, complete, and unbiased manner. In November 2017, we identified data quality limitations that were not disclosed on USAspending.gov. We recommended that Treasury disclose known data quality issues and limitations on USAspending.gov. Treasury agreed with this recommendation and has taken steps to better disclose some of these limitations, but many of the issues we identified in 2017 continue to present challenges. Some of these challenges apply widely, while others were specific to particular agencies. They include the following: Data not submitted or incomplete. One step taken by Treasury to improve disclosure was to create a webpage in USAspending.gov that provides information on unreported data. However, it is unclear exactly what this information covers. For example, it is unclear whether the information on unreported data includes financing accounts, agencies that should have reported but did not submit data, missing data for agencies that did submit, or spending that was not reported because obligation amounts fell below $25,000 and was therefore not required to be reported. As a result, users do not clearly know what data are unreported or the amount that was required to be reported. Optional data elements and subelements. Another issue we identified in 2017 and found again in our current review was that key information about the reporting requirements for some data elements and subelements was not adequately disclosed to the public. Specifically, for Q4 FY2018 certain data elements were listed in guidance as optional for agencies to report. According to Treasury officials, agencies were not required to report these data elements because the data standard was not fully implemented. For example, prior to fiscal year 2019, the data element Funding Office Name was optional for financial assistance awards. Additionally, as of September 2019, Period of Performance Start Date and Period of Performance Current End Date remained optional for reporting pending government-wide agreement on the standard. USAspending.gov does offer some information regarding optional data elements by providing a link to the DAIMS Reporting Submission Specifications document. 
However, this document is not labeled in a way that would make it clear to the user what information can be found there. Moreover, some agencies may voluntarily submit data for optional fields so only partial information for optional data elements may be displayed on USAspending.gov. Because data limitations related to optional data elements are not prominently displayed on USAspending.gov, users may not know which data elements or subelements are potentially incomplete. A more systematic approach for identifying and disclosing known data limitations on USAspending.gov including procedures for addressing wide ranging issues such as communicating changes in the reporting requirements for certain data elements and information about data that may be unreported or incomplete could help users of the data better understand potential quality issues with particular data elements and sources, and how to appropriately interpret the data. While Treasury has taken steps to better disclose data limitations, it needs to take further action to implement a more systematic approach, in line with our 2017 recommendation. In addition to such broader challenges, we identified two specific data limitations involving DOD and HHS: Delay in availability of DOD procurement data. A third issue we identified in our 2017 review, and again in our current review, concerns how information on DOD procurement data is presented on USAspending.gov. Specifically, information related to a 90-day delay in data availability for DOD procurement awards is not posted on USAspending.gov. FPDS-NG which collects information on contract actions for display on USAspending.gov releases DOD-reported procurement data to the public after a 90-day waiting period to help ensure the security of these data before they are released to the public. This also results in a 90-day delay in reporting these data to USAspending.gov. FPDS-NG clearly states that DOD data are subject to a 90-day delay as seen in figure 6. While DOD reports this data limitation in its senior accountable official certification statement, it is not presented prominently to users who are viewing DOD s spending data. For example, DOD s delay in data availability is not presented on DOD s agency profile page or with queries on specific transactions associated with DOD. Until such information is transparently communicated, users of USAspending.gov who access DOD procurement data directly or as a result of broader government-wide searches are likely unaware that the information may be incomplete or not comparable. Medicare payment data. Additionally, in this review we found limitations in how Medicare payment data are made available to the public. According to HHS officials, CMS reports the Primary Place of Performance Address for Medicare payment data as the county for the applicable Medicare Administrative Contractor (MAC) because the MAC is the direct recipient of the agency s contract award. As a result, Medicare spending data on USAspending.gov are not reported in the county where the Medicare beneficiaries are located. There are more than 3,200 counties and county equivalents in the United States and Puerto Rico, but only 20 Medicare MAC jurisdictions. Although Medicare payments may reach every county in the country, the users of USAspending.gov will only see this spending in the counties in which a MAC is located. We found that this information is not described on USAspending.gov. 
HHS officials said that they identified this limitation to the transparency of Medicare payment data to Treasury in 2016. They suggested that Treasury add information about how Medicare payments are reported on USAspending.gov to avoid confusion for users of the data. However, at that time, Treasury determined that it was unnecessary to provide this additional information on USAspending.gov. Until such information is transparently communicated, it will be unclear to the user that Medicare payments are consolidated in the counties where MACs are located. <3. Fully Implementing Data Governance Consistent with Key Practices Would Improve Data Quality> <3.1. Enforcing the Consistent Application of Data Standards across the Federal Government Would Improve Data Quality> One of the purposes of the DATA Act is to establish government-wide data standards to provide consistent and comparable data that are displayed accurately for taxpayers and policymakers on USAspending.gov. As we have reported previously, establishing a data governance structure an institutionalized set of policies and procedures for providing data governance throughout the life cycle of developing and implementing data standards is critical for ensuring that the integrity of data standards is maintained over time. Such a structure, if properly implemented, would greatly increase the likelihood that the data made available to the public will be accurate. Accordingly, in 2015, we recommended that OMB, in collaboration with Treasury, establish a set of clear policies and procedures for developing and maintaining data standards that are consistent with leading practices for data governance. This recommendation has not been implemented. Having formalized policies and procedures in place for one of these key practices managing, controlling, monitoring, and enforcing the consistent application of data standards once they are established could help address some of the data quality challenges we identified in this and previous reviews. As described earlier, agencies experience challenges reporting Award Description and Primary Place of Performance Address. We continue to believe that having a robust data governance structure that includes policies and procedures for enforcing the consistent application of the established standards would lead to greater consistency and comparability of reporting for data elements, such as Award Description and Primary Place of Performance Address. <3.2. Efforts Continue to Develop a Robust Data Governance Structure to Ensure the Integrity of Data Standards> OMB and Treasury have established some procedures for governing the data standards established under the DATA Act, but a robust governance structure has yet to be fully developed and operational. Since the enactment of the DATA Act in 2014, OMB has relied on a shifting array of advisory bodies to obtain input on data standards. In March 2019, we reported that the governing bodies involved in initial implementation efforts had been disbanded, and that their data governance functions were to be accomplished within the broader context of the cross-agency priority (CAP) goals established under the 2018 President s Management Agenda (PMA). Since we issued our report, OMB has taken additional steps to develop a government-wide data structure and to establish data governance programs at each agency. OMB staff told us that they envision agencies as incubators of data governance where they can learn lessons on data governance. 
Toward that end, OMB, in collaboration with other interagency groups, has taken a number of steps to further develop data governance at both the agency and government-wide levels: In October 2019, OMB issued a set of grants management data standards under the Results-Oriented Accountability for Grants CAP Goal. According to OMB staff, they received more than 1,100 public comments on the draft standard data elements, which were released for public comment in November 2018. OMB issued a memorandum in April 2019 that outlines approaches to shared services and the governance structure established to support shared services used for data reporting. In June 2019, as part of the CAP Goal Leveraging Data as a Strategic Asset, OMB issued the draft 2019-2020 Federal Data Strategy Action Plan (Action Plan). This document identifies both government-wide and agency-level action steps for improving data governance. To address government-wide data governance, the Action Plan calls for improvement in the standards for financial management data and geospatial data. The Action Plan directs agencies to establish a body of internal stakeholders responsible for data governance. These bodies will be made up of senior-level staff and be responsible for assessing agency capability and ensuring monitoring of and compliance with policies and standards related to data. Agencies are also instructed to assess data and related infrastructure maturity, identify opportunities to increase staff data skills, and identify data needs to answer key agency questions. OMB also issued initial guidance in July 2019 to support agency efforts to implement the first phase of the Evidence Act. For example, the Evidence Act requires, among other things, agencies to designate a Chief Data Officer by July 13, 2019. OMB guidance also directs agencies to establish a data governance body, chaired by the Chief Data Officer, with participation from relevant senior-level staff from agency business units, data functions, and financial management by September 30, 2019. In July 2019, the Federal Data Strategy Team issued a data governance playbook. According to OMB officials, this playbook is not guidance but is meant to be a framework for agency-level data governance accompanied by forthcoming resources. OMB staff told us that updates to the playbook would come relatively quickly, but also said they had no planned time frames for doing so. <3.3. Agencies Have Taken Initial Steps to Implement Data Governance Programs and Data Quality Plans> Agencies have taken initial steps to establish data governance programs and develop data quality plans. As of September 2019, seven of the 30 agencies included in our review reported that they had taken steps to designate a Chief Data Officer as required by the Evidence Act. Twenty reported establishing internal bodies similar to the data governance bodies directed by OMB guidance. The makeup and function of these data governance bodies vary across agencies. The Department of Labor reported that its Data Board was formalized and that the acting Chief Data Officer had become the official Chief Data Officer. The U.S. Agency for International Development reported establishing a DATA Act Governance Council to facilitate effective implementation of the DATA Act. Other agencies reported similarly structured bodies referred to as working groups, steering committees, and consortiums. As of September 2019, 19 agencies reported that they had completed a data quality plan as required by OMB Memorandum M-18-16.
Nine agencies that had not yet completed a data quality plan reported that they would have one completed by September 30, 2019. The data quality plans from the agencies in our sample varied in scope and content. Features of the data quality plans we reviewed included a description of a data governance board, an assessment of existing and planned internal controls for data quality, and a determination of priority data elements based on assessments of the risk of data quality issues. For example, the Departments of Commerce and the Interior each conducted a risk assessment on the likelihood and consequence of improper reporting for assistance and procurement data. They will employ strategies or controls to mitigate risks related to the highest-risk elements. Similarly, Treasury named targeted data elements based on their relevance and further assessed the risk of improper reporting of each element based on existing internal controls. Agencies in our review reported using a variety of sources of guidance in developing their data quality plans, including the Data Quality Playbook issued by the Leveraging Data as a Strategic Asset Working Group in November 2018, OMB Memorandum M-18-16, and guidance on conducting required reviews under the DATA Act from the Council of the Inspectors General on Integrity and Efficiency. While some agencies in our review reported that the information from these sources was helpful, they also noted the need for additional guidance, including help understanding the reporting requirements for certain data elements. <4. Conclusions> In the 5 years since enactment, OMB, Treasury, and federal agencies have made significant strides to address many of the policy, technical, and reporting challenges presented by the DATA Act's requirements. We found improvements in the overall quality of the data on USAspending.gov compared to our 2017 review of data quality. To continue this progress and to fully realize the DATA Act's promise of helping to improve data accuracy and transparency, more needs to be done to address continued challenges with the completeness and accuracy of key data elements. For example, OMB and Treasury have not fully addressed our recommendations to monitor agency submissions and ensure agencies are accountable for the completeness and accuracy of their data submissions. In addition, without transparent disclosure of known data limitations, users may view, download, or analyze data made available on the website without full knowledge of the extent to which the data are timely, complete, accurate, or comparable over time. This could lead users to inadvertently draw inaccurate conclusions from the data. We have previously recommended that Treasury disclose known data limitations on USAspending.gov. The agency has taken some steps toward this goal. However, as we have shown, work remains for Treasury to develop a more systematic approach for disclosing known data limitations on its website. In the meantime, we believe it is important to address the specific data limitations we identify in this report. These include the need to provide users with information about the delay in the availability of DOD procurement data and about how Medicare payment data are reported.
Finally, the challenges we have found with data completeness and accuracy, and the transparency around data limitations also demonstrate the importance of continued progress by OMB and Treasury in addressing our previous open recommendations to develop a robust and transparent data governance structure, and implement controls for monitoring agency compliance with DATA Act requirements. <5. Recommendations for Executive Action> We maintain that OMB and Treasury should address our prior recommendations on DATA Act implementation, including recommendations on monitoring agency submissions, providing additional guidance on reporting established data standards, implementing a systematic approach to facilitate the disclosure of known data limitations on USAspending.gov, and developing a robust and transparent governance structure. We are making a total of two new recommendations to Treasury regarding the disclosure on USAspending.gov of specific known data limitations: The Secretary of the Treasury should ensure that information about the 90-day delay for displaying DOD procurement data on USAspending.gov is transparently communicated to users of the site. Approaches for doing this could include prominently displaying this information on the DOD agency profile page, in the unreported data section, and in search results that include DOD data. (Recommendation 1) The Secretary of the Treasury should ensure that information regarding how the Primary Place of Performance Address for Medicare payment data are reported is transparently communicated to the users of USAspending.gov. (Recommendation 2) <6. Agency Comments> We provided a draft of this report to the Departments of Agriculture (USDA), Defense (DOD), Commerce, Education, Health and Human Services (HHS), Homeland Security, the Interior (DOI), Labor (DOL), the Treasury, and Veterans Affairs (VA); the Office of Management and Budget (OMB); the National Science Foundation (NSF); the National Aeronautics and Space Administration (NASA); the Small Business Administration (SBA); the U.S. Agency for International Development (USAID); the U.S. Agency for Global Media (USAGM); and the Delta Regional Authority (DRA) for review and comment. USAID and Treasury provided written responses, which are summarized below and reproduced in appendixes VII and VIII, respectively. DHS and OMB provided technical comments, which we incorporated as appropriate. USDA, DOD, Commerce, Education, HHS, DOI, DOL, VA, NSF, NASA, SBA, USAGM, and DRA had no comments on the draft report. In its written comments, USAID stated that it is committed to DATA Act reporting and the accessibility and transparency of its spending data. In its written comments, Treasury stated its commitment to fully realizing the DATA Act s promise of helping to improve data accuracy and transparency. Treasury agreed with our two recommendations on the disclosure of specific known data limitations and stated that it will work with HHS and DOD to implement them in the coming months. Treasury also stated that it remains committed to fully implementing our prior recommendations on DATA Act implementation. 
We are sending copies of this report to the relevant congressional committees; the Secretaries of Agriculture, Defense, Commerce, Education, Homeland Security, the Interior, Labor, the Treasury, and Veterans Affairs; the Directors of the Office of Management and Budget and the National Science Foundation; the Administrators of the National Aeronautics and Space Administration, the Small Business Administration, and the U.S. Agency for International Development; the Chief Executive Officer of the U.S. Agency for Global Media; the Chairman of the Delta Regional Authority; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff has any questions about this report, please contact Michelle Sager at (202) 512-6806 or sagerm@gao.gov or Paula M. Rascona at (202) 512-9816 or rasconap@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of our report. Key contributors to this report are listed in app. IX.
Appendix I: List of Agencies and Number of Transactions in Our Sample
(Table listing each sampled agency, such as the National Science Foundation and the Nuclear Regulatory Commission (NRC), and the number of transactions tested from File B (Budgetary) and File D1 (Procurement).) The Broadcasting Board of Governors changed its name to the U.S. Agency for Global Media in August 2018.
Appendix II: Objectives, Scope, and Methodology
The Digital Accountability and Transparency Act of 2014 (DATA Act) requires that we report on the timeliness, completeness, accuracy, and quality of the data submitted under the act and the implementation and use of data standards. This review responds to the act's requirement by addressing the following: (1) the timeliness, completeness, accuracy, and quality of the data and the implementation and use of data standards; and (2) the extent to which progress has been made to develop a data governance structure consistent with key practices, and how it affects data quality. We also update the status of select implementation issues and our previous recommendations related to implementing the DATA Act and data transparency. To assess the timeliness, completeness, accuracy, and quality of the data submitted and the implementation and use of data standards, we analyzed agency submission files for the fourth quarter of fiscal year 2018 (Q4 FY2018) on USAspending.gov and reviewed a representative stratified random sample from the Department of the Treasury's (Treasury) USAspending.gov database download for Q4 FY2018. Specifically, to assess timeliness, we accessed agency submission files on USAspending.gov for Q4 FY2018 and determined whether agencies submitted their data by the established deadline of 45 days after the end of the quarter (November 14, 2018), based on the date agencies certified their submissions. To help understand the proportion of spending that agencies reported by the due date, we obtained and analyzed a file from Treasury containing SF 133 Report on Budget Execution and Budgetary Resources (SF 133) data, which include unaudited balances reported by agencies for Q4 FY2018. These obligation balances are used only for illustrative purposes in our report. They include financing accounts, among other things, which are not required to be reported under the DATA Act.
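The timeliness assessment just described, checking each agency's certification date against the November 14, 2018 deadline and using SF 133 obligations to gauge the share of government-wide spending covered by on-time submissions, can be outlined as follows. The agencies, dates, and obligation amounts are hypothetical placeholders.

```python
"""Sketch of the timeliness assessment: was each agency's certification on or
before the deadline, and what share of total SF 133 obligations did on-time
submitters represent? All values below are hypothetical."""

from datetime import date

DEADLINE = date(2018, 11, 14)

# (agency, certification date or None if no submission, Q4 FY2018 SF 133 obligations)
submissions = [
    ("Agency A", date(2018, 11, 10), 250_000_000_000),
    ("Agency B", date(2018, 11, 20), 40_000_000_000),
    ("Agency C", None, 5_000_000_000),
]

total = sum(obligations for _, _, obligations in submissions)
on_time = sum(obligations for _, certified, obligations in submissions
              if certified is not None and certified <= DEADLINE)

print(f"Share of obligations reported on time: {on_time / total:.1%}")
```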
To assess completeness, we determined whether (1) all agencies that determined they are required to or would voluntarily submit DATA Act files did so, (2) the transactions reported in the files submitted by agencies contained all required data for that transaction, and (3) the database contained required assistance award data from the 24 Chief Financial Officers Act of 1990 (CFO Act) agencies. To determine whether all agencies that should have reported Q4 FY2018 data did so, we compared Treasury s list of agencies that determined they were required to or would voluntarily report data to the agency file submissions on USAspending.gov for Q4 FY2018. We followed up with agencies that had not reported to find out the reasons for not reporting, but we did not verify the accuracy of their responses. To assess the completeness of files submitted by agencies, we accessed the agency submission files for Q4 FY2018 available on USAspending.gov and determined whether all files for each agency contained data (i.e., were not blank). We followed up with agencies that submitted a blank File C and/or File D1 that did not contain any data to find out why the files were blank, but we did not verify the accuracy of their responses. We also made inquiries of agencies to determine whether any agency components or systems did not submit data. Finally, we tested completeness of agency submissions through our sample testing, described in detail below. To assess the completeness of assistance data in the USAspending.gov database, we determined the extent to which federal agencies were reporting required award data based on a list of potential award-making agencies/programs from Assistance Listings on beta.SAM.gov, formerly the Catalog of Federal Domestic Assistance. We identified all programs listed in the Assistance Listings, as of September 2018. For the 24 CFO Act agencies only, we compared programs listed in the Assistance Listings to data in the USAspending.gov database to determine which programs reported information on at least one assistance award for fiscal year 2018. For any program reporting no assistance award information for the year, we asked agency officials why information was not reported. For all programs that agency officials determined either made an award but did not report it, or reported awards late to USAspending.gov, we extracted the agencies obligation estimates for fiscal year 2018 as reported in the Assistance Listings. To further assess completeness of the data and to assess accuracy of the data and the implementation and use of data standards, we extracted all records included in the scope of our review from a database used to display data on USAspending.gov. The records covered activity during Q4 FY2018 (July through September 2018). To extract all records from the database, we mapped the database fields to the data elements within the scope of our audit. Once we had the data within the scope of our audit for Q4 FY2018, we performed the following steps: Sampling data to determine completeness and accuracy: From the database we extracted, we selected a stratified random probability sample of 405 records for Q4 FY2018. Data records were stratified into procurement award transactions, assistance award transactions, and budgetary records. We randomly selected 158 procurement awards, 150 financial awards, and 97 budgetary records. 
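A stratified random selection like the one just described (158 procurement, 150 assistance, and 97 budgetary records, for 405 in total) can be sketched as shown below. The record identifiers and population sizes are placeholders standing in for the Q4 FY2018 extract, not actual data.

```python
"""Sketch of stratified random selection: group records into three strata,
then draw a fixed number at random from each. Identifiers and population
sizes are placeholders."""

import random

random.seed(2018)  # fixed seed so the illustrative draw is reproducible

population = {
    "procurement": [f"P-{i:06d}" for i in range(50_000)],
    "assistance":  [f"A-{i:06d}" for i in range(80_000)],
    "budgetary":   [f"B-{i:06d}" for i in range(12_000)],
}
allocation = {"procurement": 158, "assistance": 150, "budgetary": 97}

sample = {stratum: random.sample(records, allocation[stratum])
          for stratum, records in population.items()}

for stratum, records in sample.items():
    print(stratum, len(records), records[:3])
```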
Estimates for the results of the procurement, assistance, and budgetary samples have sampling errors of +/- 7.8, 8, and 10 percentage points or less, respectively, at the 95 percent level of confidence. The probability sample was designed to estimate the overall rate of reporting errors for a data element with a sampling error of no greater than plus or minus 5.3 percentage points at the 95 percent level of confidence. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample s results as a 95 percent confidence interval (e.g., +/- 7 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. For 41 data elements and subelements required by FFATA or the DATA Act, we first assessed the extent to which a data element was complete whether there was a value and if that value was appropriate. If the data element was not complete, then we also considered that data element to not be accurate. For those elements that were complete, we then assessed the extent to which the data were accurate by comparing the information in our sample to the information contained in the originating agency s underlying source documents, where available, and determining whether the data were consistent with applicable laws and reporting standards, as applicable. Therefore we determined an element was inconsistent if it was either inconsistent with the agency documents, applicable laws or reporting standards, or incomplete. For three data elements that contained values derived by Federal Procurement Data System-Next Generation (FPDS-NG) and Financial Assistance Broker Submission (FABS) based on other values provided by agencies, we compared the information in the sample to other sources, such as data from the U.S. Census Bureau and house.gov. This allowed us to verify whether the values in our sample were consistent with the systems from which they were derived. We then interviewed agency officials to discuss differences between the information in our sample and information in agency or other sources. Data element and subelement testing: Table 3 shows the 44 data elements and subelements tested in the statistical sample including six budgetary data elements and 38 award data elements and subelements. Individual data elements may vary with their representation in the sample (e.g. Legal Entity Address Lines 1 and 2) because the data element was not required for all of the sampled data records. Specific error rates by category can be found in app. III. The government-wide results are a weighted total of the three strata of our sample: (1) procurement award transactions, (2) assistance award transactions, and (3) budgetary records. For reporting purposes, we combined some of the results for the award strata because some data elements appear in both Files D1 (procurement) and D2 (financial assistance). See app. I for the list of agencies and number of records randomly selected and tested in each strata. If we determined, after reviewing agency source documents, that a data element was not applicable to the sampled record, we did not factor the data element into our evaluation of completeness and accuracy. 
We determined an element to be unverifiable if no agency source records were provided or the records provided did not meet our audit standards. To test the controls over the reliability of agency data, we obtained supporting documentation to confirm that the agency provided only official agency source documents, such as a system of records notice. When such a supporting document was unavailable, we reviewed agency transparency policy documentation, data verification and validation plans or procedures, or system source code information to ensure the reliability of the data. We did not assess the accuracy of the data contained in sources provided by agencies. For the purposes of our review, we defined data quality as encompassing the concepts of timeliness, completeness, and accuracy. Therefore, our assessment of overall data quality is reflected in our specific assessments of these components. We also reviewed OMB, Treasury, and agency documents related to DATA Act implementation. We interviewed OMB and Treasury officials on their role in DATA Act implementation and interviewed officials from the agencies in our sample to discuss their test results and efforts to submit data under the DATA Act. To describe changes in data quality since our prior work, we compared the results of our review of Q4 FY2018 data to the results of our review of quarter two fiscal year 2017 (Q2 FY2017) data performed in our first assessment of data quality. For both reviews, we examined a projectable sample of budgetary and award transactions from a database that, according to Treasury, is partly used to display data on USAspending.gov. However, there were the following differences: (1) our 2017 sampling frame was confined to the 24 CFO Act agencies (which represented 99 percent of obligations in our data set at that time), while our sampling frame for this review included all agencies that submitted Q4 FY2018 data files as of February 11, 2019; (2) more agencies and their components reported data in Q4 FY2018 than in Q2 FY2017; (3) in 2017 our estimated error rate calculations included elements of certain sampled transactions that were determined to be not applicable to the transaction and were classified as consistent with agency sources in both the numerator and denominator while in this review, we excluded not applicable elements from both the numerator and denominator of the estimated rate calculations; (4) our sampling frame for this review included more data elements and subelements than were in our Q2 FY2017 sampling frame; (5) in this review, since three data elements we reviewed were derived by FPDS-NG and FABS rather than provided by agencies, we compared the information in the sample to other sources rather than agency documents and therefore did not include those results in our comparisons to Q2 FY2017; (6) agencies Q4 FY2018 data were submitted under policies and procedures outlined in DAIMS v1.3 which reflects changes in validation rules and reporting requirements from the DAIMS v1.0 that was in effect in 2017; (7) OMB issued additional guidance on DATA Act reporting since we reported in 2017; and (8) changes were made to the Treasury broker since our last report. To evaluate how the current data governance structure affects data quality, we compared data quality challenges we identified during our review to key practices for data governance identified in our prior work to underscore the need for a more robust structure consistent with key practices. 
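As noted above, the government-wide results are weighted totals of the three strata. The sketch below illustrates, under hypothetical population sizes and sample outcomes, how stratum-level consistency rates can be combined with population weights into a single government-wide estimate; it is not our actual estimation code.

```python
"""Sketch of stratum weighting: weight each stratum's observed consistency
rate by that stratum's share of the population. All figures are hypothetical."""

strata = {
    # stratum: (population size, sample size, sampled records found consistent)
    "procurement": (50_000, 158, 48),
    "assistance":  (80_000, 150, 44),
    "budgetary":   (12_000, 97, 87),
}

population_total = sum(pop for pop, _, _ in strata.values())
weighted_rate = sum((pop / population_total) * (consistent / n)
                    for pop, n, consistent in strata.values())

print(f"Government-wide weighted consistency estimate: {weighted_rate:.1%}")
```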
To assess progress made to develop a data governance structure consistent with key practices, we reviewed policy and other documentation related to ongoing efforts to develop a government-wide structure for governing the standards established under the act and interviewed OMB staff about these efforts. We also reviewed agency data quality plans and guidance intended to facilitate agency efforts to establish data governance programs, and interviewed agency officials on their data governance efforts. To update the status of our recommendations related to the implementation of the DATA Act, we reviewed new guidance and other related documentation, and interviewed OMB staff and Treasury officials. See app. IV for an update on our recommendations related to DATA Act implementation. We conducted this performance audit from November 2018 to November 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix III: Estimates of Consistency Rates for Award Transactions and Budgetary Accounts/Balances
(Tables presenting, for each data element, the estimated ranges of accurate/consistent, inconsistent, and unverifiable percentages for Q4 FY2018 and Q2 FY2017.)
Legal Entity Address City Name refers to two subelements under DAIMS v.1.3 (Legal Entity Address City Name and Foreign City Name), which we combined for reporting purposes. Legal Entity Address State Name refers to three subelements under DAIMS v.1.3 (Legal Entity Address State Description for procurement awards and Legal Entity Address State Name and Foreign Province Name for financial assistance awards), which we combined for reporting purposes. Legal Entity Address Zip Code refers to four subelements under DAIMS v.1.3 (Legal Entity Address Zip+4 for procurement awards, Legal Entity Address Zip 5 and Last 4 for financial assistance awards, and Legal Entity Address Foreign Postal Code for foreign financial assistance awards), which we combined for reporting purposes. Primary Place of Performance Address Zip Code is one subelement under DAIMS v.1.3 (Primary Place of Performance Address Zip+4), which contains both the first five digits from the zip code and the last 4. However, the USAspending.gov database we obtained our sample from contained the zip code information for this element in two parts: 5-digit zip code and +4. Therefore, we present these subelements separately for reporting purposes. Element was optional for the fourth quarter of fiscal year 2018. Unverifiable includes data elements rated as inaccurate because agency records were insufficient to complete the test or because the agency did not provide supporting documentation.
In our prior Digital Accountability and Transparency Act of 2014 (DATA Act) reports, we have made recommendations to both the Department of the Treasury (Treasury) and the Office of Management and Budget (OMB) on a range of topics.
Treasury and OMB have collectively taken action that resulted in closure of nine prior recommendations on data transparency and implementation of the DATA Act. Table 7 provides a listing of open DATA Act recommendations at the time this report was issued as well as a short discussion of their status. Full and effective implementation of the open recommendations listed below will contribute to more reliable and consistent federal data to measure the cost and magnitude of federal investments as well as facilitate efforts to share data across agencies to improve transparency, accountability, decision-making, and oversight. Appendix V: Sources of Data and Process Overview on USAspending.gov The Digital Accountability and Transparency Act of 2014 (DATA Act) requires the Office of Management and Budget (OMB) and the Department of the Treasury (Treasury) to establish government-wide data standards that, to the extent reasonable and practicable, produce consistent, comparable, and searchable spending data for any federal funds made available to or expended by federal agencies. These standards specify the data elements to be reported under the DATA Act and define and describe what is to be included in each data element, with the aim of ensuring that data will be consistent and comparable. The DATA Act requires OMB and Treasury to ensure that the standards are applied to the data made available on USAspending.gov, which has many sources of data. Some data are from agency systems, while other data are pulled or derived from government-wide reporting systems. Key award systems that generate data files that are linked to agency submitted files include the Federal Procurement Data System-Next Generation (FPDS-NG), which collects information on contract actions; the Financial Assistance Broker Submission (FABS), which collects information on financial assistance awards; the System for Award Management, which is the primary database for information on entities that do business with the federal government (i.e., contractors and grantees), and in which such entities must register; and the Federal Funding Accountability and Transparency Act of 2006 (FFATA) Subaward Reporting System (FSRS), which provides data on first-tier subawards reported by prime award recipients. Agencies submit procurement award information to FPDS-NG daily and financial assistance award information (grants, loans, insurance and other financial assistance) to FABS at least twice monthly. These award data are reflected in USAspending.gov daily. As depicted in figure 7, agencies are expected to submit financial data linked to award data and certified on a quarterly basis, 45 days after the close of the quarter. They submit three data files with specific details and data elements to Treasury's DATA Act Broker (broker) from their financial management systems quarterly (Files A, B, C). In February 2019, to reduce agency burden, Treasury made updates, including an optional new broker feature that agencies can use to generate a provisional File A, which agencies can choose to upload and submit as their File A in the regular submission process. The new feature produces an agency's provisional File A based on budget and financial information reported by the agency to the Government-wide Treasury Account Symbol Adjusted Trial Balance System for the creation of the SF 133 Report on Budget Execution and Budgetary Resources.
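To illustrate the quarterly submission flow described in this appendix, the following is a minimal sketch, written in Python, of the file package an agency assembles for the broker and the files the broker builds from government-wide systems; the class, field names, and file descriptions are simplified and hypothetical rather than Treasury's actual broker schema.

```python
# Minimal sketch (hypothetical names): the quarterly DATA Act submission package.
# Files A, B, and C come from agency financial systems; Files D1, D2, E, and F
# are built by Treasury's broker from government-wide reporting systems.
from dataclasses import dataclass, field

@dataclass
class QuarterlySubmission:
    agency: str
    fiscal_year: int
    quarter: int
    file_a: list = field(default_factory=list)  # appropriations account summary data
    file_b: list = field(default_factory=list)  # object class and program activity data
    file_c: list = field(default_factory=list)  # award-level financial detail

    def broker_extracted_files(self):
        # Broker-built companion files, as generally described in this appendix
        # (illustrative mapping only).
        return {
            "D1": "procurement award information pulled from FPDS-NG",
            "D2": "financial assistance award information pulled from FABS",
            "E": "recipient/entity information from the System for Award Management",
            "F": "subaward information reported through FSRS",
        }

submission = QuarterlySubmission(agency="Example Agency", fiscal_year=2018, quarter=4)
print(submission.broker_extracted_files()["D1"])
```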
The broker then extracts award and subaward information from existing government-wide reporting systems to build four files that include procurement information, information on federal assistance awards such as grants and loans, and recipient information (Files D1, D2, E, and F). Each agency's data must pass a series of validations in the broker and then be certified by the agency's senior accountable official (SAO) before they are submitted for display on USAspending.gov. According to OMB guidance, the purpose of the SAO certification is to provide reasonable assurance that the agency's internal controls support the reliability and validity of the data submitted to Treasury for publication on the website. The SAO assurance means that, at a minimum, the data reported are based on appropriate controls and risk management strategies as described in OMB Circular A-123, Management's Responsibility for Enterprise Risk Management and Internal Control. In addition, agencies should include information about any data limitations in their SAO certification statements. Appendix VI: Agencies That Submitted Data for Quarter Four of Fiscal Year 2018 [The table of submitting agencies, which includes entries such as the Committee for Purchase from People Who Are Blind or Severely Disabled (AbilityOne Commission) and the District of Columbia Courts (DC Courts), is not fully reproduced here.] Appendix VII: Comments from the U.S. Agency for International Development Appendix VIII: Comments from the Department of the Treasury Appendix IX: GAO Contacts and Staff Acknowledgments <8. GAO Contacts> <9. Staff Acknowledgments> In addition to the above contacts, Peter Del Toro (Assistant Director), Kathleen Drennan (Assistant Director), Michael LaForge (Assistant Director), Maria C. Belaval (Auditor-in-Charge), Barbara Lancaster (Analyst-in-Charge), Diane Morris (Auditor-in-Charge), Carl Barden, Daniel Berg, Mark Canter, Jenny Chanley, Shelby Clark, Tracy Davis Ross, Tabitha Fitzgibbon, Valerie Freeman, Jamaika Hawthorne, Michael Kany, Roy Kilgore, Peter Kramer, Sera LaFache-Brazier, Krista Loose, Tonyita Muschette, Quang Nguyen, Kristine Papa, Joseph Raymond, Lisa Rowland, Susan Sato, John A. Schaefer, Sara Shore, James Skornicki, Andrew J. Stephens, James Sweetman, Jr., Silvia Symber, and Lisa Zhao made key contributions to this report. Additional members of GAO's DATA Act Internal Working Group also contributed to the development of this report. Related GAO Products DATA Act: Customer Agencies' Experiences Working with Shared Service Providers for Data Submissions. GAO-19-537. Washington, D.C.: July 18, 2019. DATA Act: Pilot Effectively Tested Approaches for Reducing Reporting Burden for Grants but Not for Contracts. GAO-19-299. Washington, D.C.: April 30, 2019. DATA Act: OMB Needs to Formalize Data Governance for Reporting Federal Spending. GAO-19-284. Washington, D.C.: March 22, 2019. Open Data: Treasury Could Better Align USAspending.gov with Key Practices and Search Requirements. GAO-19-72. Washington, D.C.: December 13, 2018. DATA Act: Reported Quality of Agencies' Spending Data Reviewed by OIGs Varied Because of Government-wide and Agency Issues. GAO-18-546. Washington, D.C.: July 23, 2018. DATA Act: OMB, Treasury, and Agencies Need to Improve Completeness and Accuracy of Spending Data and Disclose Limitations. GAO-18-138. Washington, D.C.: November 8, 2017. DATA Act: As Reporting Deadline Nears, Challenges Remain That Will Affect Data Quality. GAO-17-496. Washington, D.C.: April 28, 2017. DATA Act: Office of Inspector General Reports Help Identify Agencies' Implementation Challenges. GAO-17-460.
Washington, D.C.: April 26, 2017. DATA Act: Implementation Progresses but Challenges Remain. GAO-17-282T. Washington, D.C.: December 8, 2016. DATA Act: OMB and Treasury Have Issued Additional Guidance and Have Improved Pilot Design but Implementation Challenges Remain. GAO-17-156. Washington, D.C.: December 8, 2016. DATA Act: Initial Observations on Technical Implementation. GAO-16-824R. Washington, D.C.: August 3, 2016. DATA Act: Improvements Needed in Reviewing Agency Implementation Plans and Monitoring Progress. GAO-16-698. Washington, D.C.: July 29, 2016. DATA Act: Section 5 Pilot Design Issues Need to Be Addressed to Meet Goal of Reducing Recipient Reporting Burden. GAO-16-438. Washington, D.C.: April 19, 2016. DATA Act: Progress Made but Significant Challenges Must Be Addressed to Ensure Full and Effective Implementation. GAO-16-556T. Washington, D.C.: April 19, 2016. DATA Act: Data Standards Established, but More Complete and Timely Guidance Is Needed to Ensure Effective Implementation. GAO-16-261. Washington, D.C.: January 29, 2016. Federal Spending Accountability: Preserving Capabilities of Recovery Operations Center Could Help Sustain Oversight of Federal Expenditures. GAO-15-814. Washington, D.C.: September 14, 2015. DATA Act: Progress Made in Initial Implementation but Challenges Must be Addressed as Efforts Proceed. GAO-15-752T. Washington, D.C.: July 29, 2015. Federal Data Transparency: Effective Implementation of the DATA Act Would Help Address Government-wide Management Challenges and Improve Oversight. GAO-15-241T. Washington, D.C.: December 3, 2014. Government Efficiency and Effectiveness: Inconsistent Definitions and Information Limit the Usefulness of Federal Program Inventories. GAO-15-83. Washington, D.C.: October 31, 2014. Data Transparency: Oversight Needed to Address Underreporting and Inconsistencies on Federal Award Website. GAO-14-476. Washington, D.C.: June 30, 2014. Federal Data Transparency: Opportunities Remain to Incorporate Lessons Learned as Availability of Spending Data Increases. GAO-13-758. Washington, D.C.: September 12, 2013. Government Transparency: Efforts to Improve Information on Federal Spending. GAO-12-913T. Washington, D.C.: July 18, 2012. Electronic Government: Implementation of the Federal Funding Accountability and Transparency Act of 2006. GAO-10-365. Washington, D.C.: March 12, 2010. Why GAO Did This Study
The DATA Act requires federal agencies to disclose roughly $4 trillion in annual federal spending and link this spending information to federal program activities so that policymakers and the public can more effectively track federal spending through its life cycle. The act also requires OMB and Treasury to establish data standards to enable consistent reporting of agency spending. The DATA Act includes a provision for GAO to report on the quality of the data collected and made available through USAspending.gov.
Specifically, this report addresses: (1) the timeliness, completeness, and accuracy of the data, and the implementation and use of data standards; and (2) progress made in developing a data governance structure consistent with key practices, and how it affects data quality. GAO examined a projectable government-wide sample of Q4 FY2018 spending data from a Treasury database that populates data on USAspending.gov by comparing them to agency source records and other sources. GAO also compared the results of Q4 2018 with results from its previous review of Q2 FY2017 data.
What GAO Found
The Digital Accountability and Transparency Act of 2014 (DATA Act) requires federal agencies to report spending data to USAspending.gov, a public-facing website. A total of 96 federal agencies submitted required spending data for quarter four of fiscal year 2018 (Q4 FY2018). GAO examined the quality of these data and compared the results with the results of its prior review of quarter two of fiscal year 2017 (Q2 FY2017) data, as appropriate. GAO identified improvements in overall data quality, but challenges remain for completeness, accuracy, use of data standards, disclosure of data limitations, and overall data governance.
Completeness. The number of agencies, agency components, and programs that submitted data increased compared to Q2 FY2017. For example, 11 agencies did not submit data in Q4 FY2018, compared to 28 in Q2 FY2017. Awards for 39 financial assistance programs were omitted from the data in Q4 FY2018, compared to 160 financial assistance programs in Q2 FY2017.
Accuracy. Based on a projectable governmentwide sample, GAO found that data accuracy for Q4 FY2018—measured as consistency between reported data and agency source records or other authoritative sources and applicable laws and reporting standards—improved for both budgetary and award transactions. GAO estimates with 95 percent confidence that between 84 and 96 percent of the budgetary transactions and between 24 and 34 percent of the award transactions were fully consistent for all applicable data elements. In Q2 FY2017, GAO estimated that 56 to 75 percent of budget transactions and 0 to 1 percent of award transactions were fully consistent.
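As an illustration of how interval estimates such as the 84 to 96 percent range arise from a projectable sample, the sketch below computes a basic 95 percent confidence interval for a proportion; the sample counts are hypothetical, and GAO's actual estimates reflect its specific sample design rather than this simplified normal-approximation calculation.

```python
# Illustrative only (hypothetical counts): a simple 95 percent confidence interval
# for the proportion of sampled transactions that are fully consistent.
import math

def proportion_ci(successes, n, z=1.96):
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical: 90 of 100 sampled budgetary transactions were fully consistent.
low, high = proportion_ci(90, 100)
print(f"{low:.0%} to {high:.0%}")  # roughly 84% to 96%
```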
Use of data standards. GAO continued to identify challenges related to the implementation and use of two data elements—Award Description and Primary Place of Performance Address—that are particularly important to achieving the DATA Act's transparency goals. GAO found that agencies continue to differ in how they interpret and apply the Office of Management and Budget's (OMB) standard definitions for these data elements. As a result, data on USAspending.gov are not always comparable, and in some cases it is difficult for users to understand the purpose of an award or to identify the location where the performance of the award occurred.
USAspending.gov presentation. GAO identified known data limitations that were not fully disclosed on USAspending.gov. For example, the 90-day delay for inclusion of Department of Defense procurement data is not clearly communicated. In addition, although the website provides a total figure for unreported spending, it is unclear whether it includes the 11 agencies that did not submit data. Not knowing this information could lead users of USAspending.gov to inadvertently draw inaccurate conclusions from the data.
Data governance. OMB and the Department of the Treasury (Treasury) have established some procedures for governing the data standards established under the DATA Act, but procedures for enforcing the consistent use of established data standards have yet to be developed. Persistent challenges related to how agencies interpret and apply data standards underscore GAO's prior recommendations on establishing a governance structure that ensures the integrity of these standards.
What GAO Recommends
GAO maintains that OMB and Treasury should address prior recommendations on monitoring agency submissions, implementing data standards, disclosing data limitations, and developing a robust data governance structure. In addition, GAO makes two new recommendations to Treasury regarding disclosing on USAspending.gov specific known data limitations. Treasury agreed with GAO's recommendations.
gao_GAO-19-612 | gao_GAO-19-612_0 | <1. Background> <1.1. Indian Health Service> IHS was established within the Public Health Service in 1955 in order to meet federal treaty obligations to provide health services to members of federally recognized AI/AN tribes primarily in rural areas on or near reservations. IHS oversees its provision of health care services through a decentralized system of 12 area offices, which are led by area directors and located in 12 geographic areas. IHS s headquarters office is responsible for setting national health care policy, ensuring the delivery of quality comprehensive health services, and advocating for the health needs and concerns of AI/AN people. The area offices are responsible for monitoring federally operated IHS facilities operations and finances, and providing guidance and technical assistance. IHS s 12 area offices oversee 168 service units which provide care at the local level through a total of 742 federally operated and tribally operated hospitals, health centers, and other health facilities. The types of services offered by these facilities vary, but most commonly include primary care and emergency care, as well as some ancillary and specialty services. Table 1 displays the number of federally operated and tribally operated facilities as of February 2019. <1.2. PRC Program> If federally operated or tribally operated facilities are unable to provide needed care, they may contract for health services from private providers through the PRC program. Patients must meet certain eligibility and administrative requirements in order to qualify for this care including having exhausted all other health care resources available to them and living on a federally recognized Indian reservation or within a designated PRC delivery area. The PRC program is funded through the annual appropriations process and administered at the local level by individual PRC programs that are often affiliated with local facilities. Individual PRC programs may be federally or tribally administered, and as of fiscal year 2018, IHS administered 39 percent of PRC appropriations, and tribes administered the remaining 61 percent. PRC funding is limited and has traditionally been reserved for the most critical cases. IHS has established five medical priority levels. Funds permitting, federally administered PRC programs first pay for all of the highest priority services, and then all or some of the lower priority services. IHS s five PRC medical priority levels are 1. Emergent and acutely urgent care services, which include treatment for threats to life, limb, or senses; 2. Preventive care services, which include prenatal care and 3. Primary and secondary care services, which include scheduled ambulatory services for nonemergent conditions, and specialty consultations; 4. Chronic tertiary and extended care services, which include rehabilitation care, skilled nursing facility care, and organ transplants; and 5. Excluded services, which include cosmetic and experimental procedures. <1.3. PPACA Health Coverage Expansion Provisions for AI/AN> Beginning in 2014, PPACA allowed states to expand Medicaid eligibility to non-elderly, non-pregnant adults who are not eligible for Medicare and whose income does not exceed 133 percent of the federal poverty level. 
As of September 2018, there were 32 expansion states (those states, including the District of Columbia, that chose to expand Medicaid eligibility to this additional adult population) and 19 non-expansion states (those that had not expanded Medicaid eligibility). PPACA also required the establishment of health insurance exchanges in 2014, marketplaces where individuals may compare and select among health insurance plans offered by participating private insurers. PPACA included a number of provisions that reduced these plans' costs, including premiums and cost-sharing such as deductibles and copayments, for eligible enrollees, including certain AI/AN. <2. Health Insurance Coverage and Third-Party Collections at Federally Operated IHS Facilities Increased from 2013 to 2018; Tribal Facility Officials Also Reported Increases> <2.1. IHS Data Show Increase in Percent of Patients with Health Insurance Coverage at Federally Operated IHS Facilities from 2013 through 2018> Our analysis of IHS data shows that from fiscal year 2013 through fiscal year 2018, the percent of patients at 73 federally operated IHS hospitals and health centers who reported having health insurance coverage increased an average of 14 percentage points, from 64 percent in fiscal year 2013 to 78 percent in fiscal year 2018. The majority of coverage gains occurred in fiscal years 2014 through 2016 (see fig. 1). Patients at federally operated IHS facilities reported obtaining health insurance coverage from several sources. The largest increase in coverage occurred among those reporting Medicaid coverage. On average, 41 percent of IHS patients in fiscal year 2013 reported they had coverage through Medicaid at some point during the year; this number increased to 53 percent in fiscal year 2018. In comparison, the percent of patients at each facility who reported having Medicare and the percent who reported having private insurance at some point during the year each increased an average of two percentage points from fiscal years 2013 to 2018. (See fig. 2.) While the average percent of patients reporting health care coverage increased across all federally operated IHS facilities, our analysis of IHS data showed substantial variation in the magnitude of these increases. Specifically, from fiscal year 2013 through fiscal year 2018, increases at each of the 73 facilities ranged from a low of 2 to a high of 31 percentage points. Forty-four federally operated IHS facilities experienced an increase in the percent of patients with coverage over this time period of more than 10 percentage points (see fig. 3). Our analysis of IHS data shows that federally operated IHS facilities in states that expanded Medicaid had larger increases in health insurance coverage compared with such facilities in states that had not expanded Medicaid. Specifically, federally operated IHS facilities in Medicaid expansion states experienced an average 17 percentage point increase in patients reporting any form of health coverage, compared with an average 8 percentage point increase at federally operated IHS facilities in states that did not expand Medicaid. However, these increases in coverage were not spread evenly among the facilities. (See fig. 4.) IHS officials we interviewed also reported that a variety of factors in addition to Medicaid expansion likely affected the number of patients at federally operated IHS facilities who reported having health insurance coverage.
Specifically, officials we interviewed at all of the 11 selected federally operated IHS facilities cited efforts at their facilities that helped increase coverage, such as increasing the number of onsite patient benefits coordinators to help enroll patients in all forms of health coverage and enhancing efforts to ensure that all patients were screened for coverage. For example, one federally operated IHS facility reported renovating its office to, among other things, move the patient benefits coordinator near the waiting room, which allowed patients to be immediately screened after walking in for an appointment. Officials we interviewed at nearly all of the selected federally operated IHS facilities also noted that their outreach and education efforts about the importance of health insurance coverage may have helped to increase enrollment. Officials we interviewed at all of the selected federally operated IHS facilities said they were engaged in such activities which included broadcasting public service announcements, posting newspaper advertisements, and promoting insurance during community events. Officials from most of the 12 IHS area offices also reported collaborating with tribes to conduct outreach and education to enhance enrollment. Officials at many IHS area offices also noted that external factors may have also played a role in increasing coverage levels, such as improvements in the local economy, which officials said led to increases in the number of patients with private health insurance. Additionally, entities outside of IHS also implemented initiatives to increase coverage for patients at federally operated IHS facilities. For example, IHS officials stated that some patients obtained health insurance through the health insurance exchanges, and in some cases, the tribe paid all premiums, coinsurance, and deductibles for these plans. In addition, a number of area Indian health boards worked together to develop a train-the-trainer program to disseminate information and resources to encourage enrollment and share information on the benefits of having health coverage. <2.2. Total Third-Party Collections at Federally Operated IHS Facilities Increased 51 Percent from Fiscal Years 2013 through 2018> Third-party collections across all federally operated IHS facilities increased 51 percent from fiscal year 2013 through fiscal year 2018, according to our analysis of IHS data. Specifically, total third-party collections increased from $708 million in fiscal year 2013 to about $1.07 billion in fiscal year 2018 while the number of patients seeking care remained constant. Medicaid collections accounted for 65 percent of the total $360 million increase, though collections from Medicare, private insurance, and Veterans Affairs also increased during this period. For example, Medicaid collections grew 47 percent, from $496 million in fiscal year 2013 to $729 million in fiscal year 2018. (See fig. 5.) While third-party collections at federally operated IHS facilities collectively increased from fiscal year 2013 through 2018, there was significant variation in changes for individual facilities. 
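As an arithmetic check on the collections figures cited above, the short sketch below recomputes the percent change and Medicaid's approximate share of the increase from the rounded dollar amounts in this section; small differences from the reported percentages reflect that rounding.

```python
# Recomputing the reported growth in third-party collections from the figures cited above.
total_fy2013 = 708_000_000      # total third-party collections, fiscal year 2013
total_fy2018 = 1_070_000_000    # total third-party collections, fiscal year 2018
medicaid_fy2013 = 496_000_000
medicaid_fy2018 = 729_000_000

total_increase = total_fy2018 - total_fy2013
medicaid_increase = medicaid_fy2018 - medicaid_fy2013

print(f"Total increase: ${total_increase/1e6:.0f} million")        # about $360 million as reported
print(f"Percent change: {total_increase/total_fy2013:.0%}")        # about 51 percent
print(f"Medicaid share of increase: {medicaid_increase/total_increase:.0%}")  # about 64-65 percent, given rounding
print(f"Medicaid percent change: {medicaid_increase/medicaid_fy2013:.0%}")    # about 47 percent
```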
IHS officials we interviewed noted several reasons why third-party collections may vary over time and by location, including the size of the facility and any changes in the number of providers, patients, or business office staff that process billing and collections; the ability to collect payment from certain tribal health insurance, which may opt to not pay for services provided to enrolled members; and the number of patients enrolled in Medicaid managed care plans, which may identify IHS facilities as out-of-network providers and not pay for covered services. IHS and federally operated facility officials we interviewed noted that gains in health insurance coverage during this time period contributed to increases in collections. In addition, officials we interviewed from most of the 12 area offices and 11 selected federally operated IHS facilities described steps they took to enhance collections. More specifically, officials from seven area offices discussed initiating steps to improve billing and collections functions for federally operated IHS facilities in their area; at one area office this involved creating a new area-level position focused on revenue enhancement at federally operated IHS facilities. Additionally, officials we interviewed at six federally operated IHS facilities identified steps they took to enhance the accuracy and efficiency of facilities collections, noting efforts such as improving training related to coding and billing. For example, officials at one of these facilities described convening a team to review why all claims related to a specific service were being rejected. The team then instituted changes to their billing procedures that resulted in the facility collecting payments for these services. <2.3. Officials from Selected Tribally Operated Facilities and Tribal Organizations Described Increases in Health Insurance Coverage and Third-Party Collections at Some Tribal Facilities> Officials we interviewed at selected tribally operated facilities and tribal organizations including national tribal organizations and area Indian health boards described increases in health insurance coverage and collections at some tribally operated facilities that occurred from 2013 through 2018. Specifically, some tribal organization officials reported increases in coverage at facilities located in states that had expanded their Medicaid programs, compared with those that had not. For example, officials at one tribally operated facility noted that the percent of their patients with health coverage increased by 10 percentage points from 2013 to 2018. Similar to federally operated IHS facilities, officials we interviewed from some tribally operated facilities said they focused on screening patients for coverage at the time of service, including by increasing the number of patient benefits coordinators and always having staff available to help enroll patients in coverage. These officials also noted that they conducted outreach and enrollment activities to inform patients of the importance of having coverage and benefitting from outreach and education activities conducted directly by local tribes, including through messages that emphasized the importance of coverage for the tribe and tribally operated facility. 
Officials from a national tribal organization told us that one tribally operated facility placed stickers on all equipment purchased with third- party collections as a way to educate patients about the benefits of having health insurance coverage and to encourage further enrollment in coverage. Officials we interviewed at selected tribally operated facilities and national tribal organizations also described increases in third-party collections that occurred from 2013 through 2018 at many tribally operated facilities particularly those located in Medicaid expansion states. For example, officials from one tribally operated facility told us that they anticipated that their third-party collections for 2018 would be more than twice the amount they collected for 2013. Similar to federally operated IHS facilities, officials we interviewed from some tribally operated facilities noted that their facilities had enhanced collections by making improvements to their billing processes and taking steps to increase patient volume. For example, officials at one tribally operated facility said they recently began allowing non-tribal members to receive care at their facility an option available to tribally operated facilities but not to federally operated IHS facilities as a way to increase third-party collections and bolster the facility s long-term sustainability. Some officials also noted that not all tribally operated facilities experienced increases in collections, in part because of decreases or limitations in the number of providers, patients, or business office staff that process billing and collections. Similar to federally operated IHS facilities, officials from tribally operated facilities noted that the enrollment of patients in Medicaid managed care plans also reduced their ability to collect payment for covered services because these plans often identify the facilities as out-of-network providers and therefore do not pay for covered services provided onsite. <3. Increases in Coverage and Collections Reportedly Helped Selected Federally Operated and Tribally Operated Facilities to Continue Operations and Expand Services> Officials we interviewed from selected federally operated and tribally operated facilities stated that increases in coverage and third-party collections helped them to (1) continue their facilities operations, (2) expand the services they offer onsite at their facilities, and (3) expand the services they cover offsite through IHS s PRC program. <3.1. Continued Operations> Officials we interviewed from all 17 selected federally operated and tribally operated facilities noted that they used increased third-party collections to fund their continued operations. Even as officials we interviewed from nearly all of the 11 selected federally operated IHS facilities reported that their facilities third-party collections had grown from fiscal years 2013 to 2018, officials from most of these facilities also said they relied more heavily on these collections to support their continued operations. Officials we interviewed from all of the IHS area offices told us that third-party collections provide a vital source of funding for federally operated IHS facilities in their area. These collections allowed them to maintain a level of operations that would otherwise be challenging, for reasons such as increasing costs of payroll and of maintaining an aging infrastructure. 
In addition, officials we interviewed from most of the selected federally operated IHS facilities reported using third-party collections to fund a substantial and increasing portion of their payroll costs. Officials at many of the IHS area offices and most of the selected federally operated IHS facilities we interviewed also reported using third-party collections to ensure that their facility met all required standards, including those required for ongoing accreditation, or to undertake any needed maintenance such as by repairing roofs and heating systems. Some of these officials also reported using third-party collections to repair or replace medical equipment that was broken or had exceeded its intended lifespan. Table 2 displays examples of how selected federally operated and tribally operated facilities reported using third-party collections. <3.2. Expanded Services Onsite> Officials we interviewed from most of the 17 selected federally operated and tribally operated facilities told us they used increased third-party collections to expand the volume or scope of services they offered onsite as a way to better meet patients medical needs. With respect to increasing the volume of services provided, officials at most of these facilities said they added providers and medical equipment to provide patients with more timely access to services. In one example, officials from a federally operated IHS hospital said they added about 30 additional nurses from 2013 to 2018 as a result of increased third-party collections. As a result of increases in the number of providers at their facilities, officials we interviewed from several federally operated IHS facilities said they were able to schedule appointments for patients more quickly, which reduced wait times for an appointment including two facilities that reported being able to newly offer same-day appointments. Officials from facilities that expanded the scope of services provided said they did so by adding new specialties, such as behavioral health and dentistry, purchasing new medical equipment such as hospital beds, dental chairs, and magnetic resonance imaging machines, and funding health promotion and education activities such as those related to diabetes education. (See fig. 6.) To support efforts to expand services and bolster their sustainability, officials from most of the 17 federally operated and tribally operated facilities said they used third-party collections to offer more competitive salaries and bonuses for providers. In addition, officials from a few of the 12 IHS area offices told us that federally operated facilities in their area used third-party collections to fund projects to construct nearby housing for providers. In another example, officials from a national tribal organization noted that the use of third-party collections to enhance provider salaries at one facility led to a decrease in provider turnover from about 40 percent prior to 2014 to 14 percent in 2018. In addition, officials from many of the IHS area offices told us that some federally operated facilities in their area reported using third-party collections accumulated over multiple years to make investments in expanding their facilities to provide the space necessary to support these additional services. 
For example, according to IHS officials, one federally operated IHS facility reported using $7 million in third- party collections to fund an over 11,000 square foot expansion to house an expanded emergency room and a new urgent care clinic; two federally operated IHS facilities reported using third-party collections to purchase modular buildings to provide medical services such as audiology, behavioral health, and dental services; and one federally operated IHS facility reported saving third-party collections for six years to fund the construction of a new 23,000 square foot building to provide additional space for an increased volume of services, including dental, optometry and physical therapy services, and to pay for the new medical equipment to support these services (see fig. 7). Officials from some IHS area offices stated that the extent to which federally operated IHS facilities in their area invested in expanding onsite services largely depended on the level of facilities third-party collections. Specifically, facilities experiencing larger increases in collections, such as larger facilities or those located in Medicaid expansion states, were able to invest more heavily in an expansion of onsite services compared to those that had lower increases in collections, according to these officials. To identify their facilities needs, officials from federally operated and tribally operated facilities reported using a variety of approaches. For example, officials from three IHS area offices and one tribally operated facility said they analyzed PRC data to identify the services that patients were obtaining through that program, and worked to bring those services onsite. Officials from two federally operated IHS facilities also noted that they incorporated local tribal input as they identified local needs and projects to fund. For example, these officials told us that their facilities were in the process of adding new specialty services onsite, including acupuncture, chiropractor, and eye clinic services, at the request of their local tribes. The recent growth in third-party collections has made it possible for many federally operated IHS facilities to consider funding a range of projects, and IHS officials said they relied on established procedures to fund these projects. According to IHS officials, local facility officials draft annual spending proposals to identify the resources, including third-party collections, that they would like to use to address their facilities needs. These proposals are provided to each facility s governing board for review; the governing board is comprised of area office and facility officials whose top priority is maintaining accreditation and ensuring patient safety at each facility, according to IHS officials. Once these basic needs are met, IHS officials told us that facilities may begin to identify and fund projects to expand access to health services. <3.3. Expanding Services Offsite> Officials from IHS, as well as some of the 17 selected federally operated and tribally operated facilities, told us that increased coverage and collections allowed for an expansion in the complexity of services provided offsite through the PRC program. Specifically, officials reported that an increase in the percent of patients with health insurance, coupled with facilities enhanced onsite services, has led PRC programs to be able to expand the level of care that they can offer. 
For example, they stated that increases in the health insurance coverage of patients have led to a smaller percent of patients needing to access PRC, since patients may use their coverage to obtain needed services directly from other private providers. In addition, an expansion of available services onsite at federally operated and tribally operated facilities resulting from increased collections reduced the need for some patients to use PRC. From 2013 through 2018, most IHS-administered PRC programs moved from covering only the most acute and emergent cases to funding nearly all types of care covered through the PRC program, according to our analysis of IHS data and interviews with agency officials. Specifically, IHS officials we interviewed told us that prior to 2014, most PRC programs administered by the agency were only able to fund care for the most acute and emergent cases referred to as priority level 1. Our analysis of IHS data showed that these PRC programs were increasingly able to fund additional medical priority levels of care each year from fiscal year 2015 the first year that such data were available through fiscal year 2018, with most IHS-administered programs funding care through priority level 4 in fiscal year 2018. (See fig. 8.) Officials we interviewed at some of the 17 selected federally operated and tribally operated facilities that had been able to both expand services onsite and offsite through PRC funds told us that these changes have made a large impact on patients health and quality of life. For example, officials at some federally operated IHS facilities reported that having more providers onsite has allowed them to offer patients more rapid access to care, and officials from some tribally operated facilities reported that an expansion of onsite services has allowed them to serve more patients. Officials at some of the selected federally operated and tribally operated facilities reported that an expansion of onsite services has also reduced the need for some patients to travel long distances to obtain diagnostic services and specialty care through the PRC program. In addition, officials from two IHS area offices noted that PRC has been able to pay for services such as patients long-awaited knee and hip replacements, which have enabled patients to return to normal activities of life and reduce their need for pain management. <4. Agency Comments> We provided a draft of this report for review and comment to the Secretary of Health and Human Services. The Department did not have any comments on the draft report. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of the Department of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or farbj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. 
Appendix I: Estimated Health Insurance Coverage of the American Indian and Alaska Native Population, 2013 through 2017 In the years since the Patient Protection and Affordable Care Act (PPACA) authorized states to expand access to Medicaid and offer health insurance through the exchanges in 2014, the percent of American Indian and Alaska Native (AI/AN) in the general population with health insurance has increased. Specifically, according to an analysis of U.S. Census Bureau s American Community Survey data, the percent of nonelderly AI/ANs with health insurance coverage increased from 70 percent in 2013 to 78 percent in 2017. (See fig. 9.) While the estimated percent of AI/AN nationwide reporting health insurance coverage increased from 2013 to 2017, these increases in coverage were not evenly distributed among the states, according to an analysis of U.S. Census Bureau s American Community Survey data. The estimated percent of AI/AN reporting health insurance increased more in states that expanded Medicaid compared to those that did not. (See fig. 10.) Appendix II: GAO Contact and Staff Acknowledgments <5. GAO Contact> <6. Staff Acknowledgments> In addition to the contact named above, Kristi Peterson, Assistant Director; Patricia Roy, Analyst-in-Charge; Michelle Duren; and Lisa Rogers made key contributions to this report. Also contributing were Todd Anderson, Krister Friday, Ethiene Salgado-Rodriguez, and Emily Wilson Schwark. Related GAO Products Tribal Consultation: Additional Federal Actions Needed for Infrastructure Projects. GAO-19-22. Washington, D.C.: March 20, 2019. Indian Health Service: Spending Levels and Characteristics of IHS and Three Other Federal Health Care Programs. GAO-19-74R. Washington, D.C.: December 10, 2018. Indian Health Service: Considerations Related to Providing Advance Appropriation Authority. GAO-18-652. Washington, D.C.: September 13, 2018. Indian Health Service: Agency Faces Ongoing Challenges Filling Provider Vacancies. GAO-18-580. Washington, D.C.: August 15, 2018. High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017. Indian Health Service: Actions Needed to Improve Oversight of Quality of Care. GAO-17-181. Washington, D.C.: January 9, 2017. Indian Health Service: Actions Needed to Improve Oversight of Patient Wait Times. GAO-16-333. Washington, D.C.: March 29, 2016. Indian Health Service: Opportunities May Exist to Improve the Contract Health Services Program. GAO-14-57. Washington, D.C.: December 11, 2013. Indian Health Service: Most American Indians and Alaska Natives Potentially Eligible for Expanded Health Coverage, but Action Needed to Increase Enrollment. GAO-13-553. Washington, D.C.: September 5, 2013. Indian Health Service: Increased Oversight Needed to Ensure Accuracy of Data Used for Estimating Contract Health Service Need. GAO-11-767. Washington, D.C.: September 23, 2011. Indian Health Service: Updated Policies and Procedures and Increased Oversight Needed for Billings and Collections from Private Insurers. GAO-10-42R. Washington, D.C.: October 22, 2009. Indian Health Service: Health Care Services Are Not Always Available to Native Americans. GAO-05-789. Washington, D.C.: August 31, 2005. | Why GAO Did This Study
IHS provides care to American Indians and Alaska Natives through a system of health care facilities. The Patient Protection and Affordable Care Act (PPACA) provided states with the option to expand their Medicaid programs, and created new coverage options beginning in 2014, including for American Indians and Alaska Natives. GAO was asked to review how PPACA has affected health care coverage and services for American Indians and Alaska Natives. In this report, GAO describes (1) trends in health insurance coverage and third-party collections at federally operated and tribally operated facilities from fiscal years 2013 through 2018, and (2) the effects of any changes in coverage and collections on these facilities.
To address these objectives, GAO analyzed IHS data on coverage, third-party collections, and PRC. GAO interviewed IHS officials from headquarters and all 12 area offices, as well as from 17 facilities selected to include a mix of federally operated and tribally operated hospitals and health centers in states that both had and had not expanded their Medicaid programs as of September 2018. GAO interviewed officials from 11 federally operated IHS facilities and 6 tribally operated facilities.
GAO provided a draft of this report to the Secretary of Health and Human Services for comment. The Department did not have any comments on the draft report.
What GAO Found
GAO's analysis of Indian Health Service (IHS) data shows that from fiscal years 2013 through 2018, the percent of patients at federally operated IHS hospitals and health centers that reported having health insurance coverage increased an average of 14 percentage points. While all federally operated IHS facilities reported coverage increases, the magnitude of these changes differed by facility, with those located in states that expanded access to Medicaid experiencing the largest increases. Federally operated IHS facilities' third-party collections—that is, payments for enrollees' medical care from public programs such as Medicaid and Medicare, or from private insurers—totaled $1.07 billion in fiscal year 2018, increasing 51 percent from fiscal year 2013. Although exact figures were not available, tribally operated facilities, which include hospitals and health centers not run by IHS, also experienced increases in coverage and collections over this period, according to officials from selected facilities and national tribal organizations.
Increases in health insurance coverage and third-party collections helped federally operated and tribally operated facilities continue their operations and expand the services offered, according to officials from 17 selected facilities. These officials told GAO that their facilities have been increasingly relying on third-party collections to pay for ongoing operations including staff payroll and facility maintenance. Officials at most facilities with increases in third-party collections also stated that they expanded their onsite services, including increasing the volume or scope of services offered by, for example, adding new providers or purchasing medical equipment. Increased coverage and collections also allowed for an expansion in the complexity of services provided offsite through the Purchased/Referred Care (PRC) program, which enables patients to obtain needed care from private providers if the patients meet certain requirements and funding is available. According to IHS and facility officials, increases in coverage have allowed some patients to access care offsite using their coverage, and an expansion of onsite services has reduced the need for some patients to access PRC. Officials GAO interviewed from federally operated and tribally operated facilities stated that facilities' expansion of onsite and offsite services has led to enhancements in patients' access to care in some instances. |
gao_GAO-20-234T | gao_GAO-20-234T_0 | <1. DOD Faces Substantial Supply Chain Challenges> First, DOD is facing substantial supply chain challenges that are hindering the readiness of the F-35 fleet. Specifically, spare parts shortages throughout the F-35 supply chain are contributing to F-35 aircraft being unable to perform as many missions or to fly as often as the warfighter requires. The F-35 s unique supply chain is central to DOD s strategy to sustain the growing fleet. Rather than owning the spare parts for their aircraft, the Air Force, Navy, and Marine Corps, along with international partners and foreign military sales customers, share a common, global pool of parts. This construct for the F-35 supply chain was intended to ease the logistical burden and provide economies of scale for the military services and international partners; however, the global pool does not have enough spare parts. Specifically, from May through November 2018, F-35 aircraft across the fleet were unable to fly about 30 percent of the time due to parts shortages, as compared with a program target of 10 percent. Below is pictured an F-35B aircraft conducting training aboard a ship. Our work found that several factors contribute to these parts shortages, including F-35 parts that are breaking more often than expected, and DOD s limited capability to repair parts when they break. Specifically, as of April 2019, the F-35 program was failing to meet four of its eight reliability and maintainability targets which determine the likelihood that the aircraft will be in maintenance rather than available for operations including metrics related to part removals and part failures. For instance, we reported at that time that the special coating on the F-35 canopy that enables the aircraft to maintain its stealth had failed more frequently than expected, and the manufacturer was unable to produce enough canopies to meet demands. These reliability challenges are exacerbated by DOD s limited capability to repair broken parts at the military depots. The capabilities to repair parts are currently 8 years behind schedule. DOD originally planned to have repair capabilities at the depots ready by 2016, but as we reported in April 2019, the depots will not have the capability to repair all parts at expected demand rates until 2024. As a result, the average time taken to repair an F-35 part was more than 6 months, or about 188 days, for repairs completed between September and November 2018 more than twice as long as planned. At that time, there was a backlog of about 4,300 spare parts awaiting repair at depots or manufacturers. We have also reported on other challenges that DOD faces related to its supply chain, including challenges in supporting deployed F-35 aircraft around the world, in clarifying how scarce parts will be distributed, in establishing a plan for a global supply chain network, and in maintaining accountability for spare parts. Figure 2 depicts many of these and other challenges that DOD faces related to the F-35 supply chain. DOD has not fully implemented seven of our recommendations related to its supply chain challenges: Revise sustainment plans: In October 2017, we reported that DOD s reactive approach to planning for and funding the capabilities needed to sustain the F-35 resulted in significant readiness challenges including delays in the establishment of part repair capabilities at the depots and placed DOD at risk of being unable to leverage the capabilities of the aircraft it had purchased. 
We recommended that DOD revise its sustainment plans to ensure that they include the key requirements and funding needed to fully implement the F-35 sustainment strategy. Conduct a comprehensive review of the F-35 supply chain: While DOD had ongoing efforts to increase the availability of spare parts, we found in April 2019 that DOD would likely continue to face challenges because the program was not planning for the quantity of parts necessary in its spare parts projections to meet warfighter requirements. Simply purchasing more F-35 parts may not be a viable solution for DOD, given the affordability concerns the program faces. These complex problems necessitate a comprehensive approach by DOD, or it is at risk that the F-35 will not be able to conduct the full range of intended missions. We recommended that DOD conduct a comprehensive review of the F-35 supply chain to determine what additional actions are needed to close the gap between warfighter requirements for aircraft performance and the capabilities that the F- 35 supply chain can deliver, in light of the U.S. services affordability constraints. Develop a process to modify the afloat and deployment spare parts packages: DOD purchases certain packages of F-35 parts years in advance to support aircraft on deployments, including on ships called afloat and deployment spare parts packages. In April 2019, we reported that continued modifications to parts and aircraft can make such packages out-of-date by the time F-35 units deploy, and that the F-35 program did not have a process and funding in place to change out mismatched parts. This could put the military services at risk of not having the parts they need to support future deployments. We recommended that DOD develop a process to modify afloat and deployment spare parts packages, to include reviewing the parts within the packages to ensure that they match deploying aircraft and account for updated parts demand, and aligning any necessary funding needed for the parts updates. Mitigate risks related to operating and sustaining the F-35 in the Pacific: In March 2018, we issued a classified report on DOD s initial transfer of F-35s to a Marine Corps base in Japan that, among other things, described the warfighting capabilities the F-35 brought to the Pacific and assessed operational challenges the Marine Corps faced. In April 2018, we publicly reported on the recommendations from this classified report, including our recommendation that the Marine Corps assess the risks associated with key supply chain-related challenges related to operating and sustaining the F-35 in the Pacific, and that it determine how to address those risks. Revise the business rules for prioritizing scarce F-35 parts: In April 2019, we reported that there was uncertainty about how the program will prioritize scarce F-35 parts among global participants. While the F-35 program had developed a set of business rules, those rules lacked clarity and detail. Absent comprehensive business rules, the F-35 program could face challenges in transparently allocating parts to support competing U.S. and international requirements. We recommended that DOD revise the business rules for the prioritization of scarce F-35 parts across all program participants so as to clearly define the roles and responsibilities of all stakeholders, the process for assigning force activity designations, and the way in which deviations from the business rules will be conducted. 
Complete a detailed plan for the establishment of the global network for moving F-35 parts: In April 2019, we reported that DOD s networks to move F-35 parts around the world to the United States and international participants were immature. Because the F- 35 program did not fully recognize the complexity of establishing a global network for moving F-35 parts, this network is now several years behind schedule and there is risk that it will not be fully capable to support an expanding fleet. We recommended that DOD complete a detailed plan for the establishment of the global network for moving F-35 parts that outlines clear requirements and milestones to reach full operational capability, and that includes mechanisms to identify and mitigate risks to the F-35 global spares pool. Clearly establish how DOD will maintain accountability for F-35 parts: In April 2019, we reported that in its rush to field aircraft and its heavy reliance on the prime contractor, DOD had not consistently followed DOD guidance for property accountability. Simply put, DOD did not have records of all the F-35 spare parts it had purchased; where those parts were located; and how much the military services had paid for them. We recommended that DOD issue a policy consistent with DOD guidance that clearly establishes how DOD will maintain accountability for F-35 parts within the supply chain, and identify the steps needed to implement the policy retrospectively and prospectively. DOD concurred with these recommendations and has made some progress in addressing them, including issuing a revised life cycle sustainment plan in January 2019. In addition, DOD has taken actions to increase the availability of spare parts, such as efforts to improve the reliability of parts and incentivize manufacturers to repair parts. <2. Autonomic Logistics Information System Remains Immature> Second, DOD continues to face challenges with the F-35 s Autonomic Logistics Information System (ALIS). ALIS is a complex information technology system supporting operations, mission planning, supply-chain management, maintenance, and other processes. It is intended to provide the necessary logistics tools to F-35 users as they operate and sustain the aircraft. For supply chain management, for example, ALIS is supposed to automate a range of supply functions including updating the status of parts, generating supply work orders, and communicating critical data about parts. However, we reported in April 2019 that these capabilities were immature, resulting in numerous challenges and the need for maintainers and supply personnel at military installations to perform time-consuming, manual workarounds in order to manage and track parts. We reported that one Air Force unit estimated that it spent the equivalent of more than 45,000 hours per year performing additional tasks and manual workarounds because ALIS was not functioning as needed. In our prior work we identified several challenges associated with ALIS, including the following examples (see table 1). We have made six recommendations since 2014 to help DOD address ALIS-related challenges. DOD generally concurred with these recommendations. It addressed two by developing a plan that prioritizes ALIS risks and creating a training plan for ALIS. However, DOD has not taken action on four of our recommendations. 
These are: Establish a performance-measurement process: In September 2014, we reported that ALIS had experienced recurring problems, including user issues and schedule delays, and was a risk that could adversely affect DOD s sustainment strategy. But we found that DOD did not have a process to determine and address the most significant performance issues with ALIS based on user requirements, which could limit its ability to effectively and efficiently address performance issues and identify root causes of those issues. We recommended that DOD establish a performance-measurement process for ALIS that includes, but is not limited to, performance metrics and targets that (1) are based on intended behavior of the system in actual operations and (2) tie system performance to user requirements. Incorporate cost-estimating best practices: In April 2016, we reported that DOD s $16.7 billion life cycle cost estimate for ALIS was not fully credible since DOD had not performed key analyses as part of the cost-estimating process. We recommended that DOD conduct uncertainty and sensitivity analyses consistent with cost-estimating best practices. Ensure that future cost estimates use historical data: In April 2016, we also reported that DOD s ALIS cost estimate was not fully accurate because DOD did not use historical cost data, including actual cost data from ALIS and data from other comparable programs. We recommended that DOD ensure that future estimates of ALIS costs use historical data as available and reflect significant program changes consistent with cost-estimating best practices. Test the operation of the F-35 when disconnected from ALIS: In March 2018, we issued a classified report on DOD s initial transfer of F-35s to a Marine Corps base in Japan that, among other things, described the warfighting capabilities the F-35 brought to the Pacific and assessed any operational challenges the Marine Corps faced. In April 2018, we publicly reported on the recommendations from this classified report, including our recommendation that the F-35 program test operating the F-35 disconnected from ALIS for extended periods of time in a variety of scenarios, to assess the risks related to operating and sustaining the aircraft, and determine how to mitigate any identified risks. We are currently conducting a review of ALIS, assessing how DOD is managing current and future issues related to the system. We plan to complete this review in early 2020. <3. DOD Lacks Critical Information to Effectively Plan for Long-term F-35 Sustainment> Third, at the core, DOD s current sustainment challenges have largely resulted from insufficient planning. We have found that DOD lacks information about the technical characteristics and costs of the F-35, which will impair its ability to plan for the long-term sustainment of the F- 35 fleet. The current F-35 sustainment strategy states that the primary contractor will provide logistics support for the aircraft. In October 2017, we reported that while DOD planned to enter into 5-year, fixed-price, performance-based contracts with the prime contractor in the next few years, DOD did not have full information on F-35 technical characteristics or costs to enable it to effectively negotiate those contracts. Specifically, certain technical aspects of the aircraft remained immature or uncertain, including reliability measures that are lagging behind operational requirements. 
As previously discussed, in April 2019 we reported that the F-35 program was still not on track to meet its targets for four out of eight reliability and maintainability metrics, and that the program had not taken adequate steps to ensure that those targets would be met. DOD officials told us that there would be inherent risk in signing a long-term, performance-based contract before reliability and maintainability data were more fully known, as those data would influence how much aircraft performance should cost. In addition, DOD did not have full visibility into the actual costs of some key sustainment requirements that are considered cost-drivers within the program, such as the actual costs of parts and repairs. Thus, DOD had relied on projected parts reliability and pricing to formulate cost estimates. Actual costs of sustainment requirements can change significantly from initial projections. For instance, we reported that, between the program s 2014 and its 2015 estimates, the costs of initial spare parts over the life cycle increased by $447 million. The lack of cost information continues to be a challenge for DOD, as we reported in April 2019. DOD officials have stated that they need to know actual costs in order to improve both their confidence in the estimates and their understanding of how cost is related to performance. Below is pictured an F-35A aircraft being refueled. Further, DOD lacks the technical data from the prime contractor needed to fully understand the technical characteristics of the F-35 aircraft and enable potential competition of future sustainment contracts. Technical data include the blueprints, drawings, photographs, plans, instructions, and other documentation required to adequately produce, operate, and sustain weapon systems. Technical data are critical for weapon systems such as F-35 aircraft, as they provide DOD with the information necessary to support the fleet. In April 2019, we found that challenges related to readiness and costs were driving DOD to begin to develop an option for DOD-led supply chain management as a potential alternative to the performance-based contracts through which the prime contractor would provide logistics support. The DOD-led option would require the department to obtain significant amounts of technical data on F-35 parts from the manufacturers of those parts; however, at that time DOD was facing challenges in obtaining the needed data. DOD has not fully implemented 10 of our recommendations related to these issues: Develop a long-term Intellectual Property strategy: In September 2014, we reported that DOD had not identified all of the technical data it needs from the contractor, and at what cost, to enable competition of future sustainment contracts, which put the program at risk of not having the flexibility to make changes to its sustainment strategy. We recommended that DOD develop a long-term Intellectual Property strategy to include, but not be limited to, the identification of current levels of technical data rights ownership by the federal government and all critical technical data needs and their associated costs. Assess whether the program reliability and maintainability targets are still feasible: In April 2019, we reported that the F-35 program continued to fall short of meeting performance targets for half of its reliability and maintainability metrics. Program officials said that those targets need to be reevaluated to determine more realistic performance targets, but they had not taken action to do so. 
We recommended that DOD assess whether the program s reliability and maintainability targets are still feasible, and revise accordingly. Identify specific and measurable reliability and maintainability objectives: In April 2019, we reported that the F-35 program s plan for improving reliability and maintainability did not address the four under-performing metrics. Specifically, the guidance the program has used to implement this plan does not define specific, measurable objectives for what the desired goals for F-35 reliability and maintainability performance should be. As long as these metrics continue to fall short, the military services may have to settle for aircraft that are less reliable and more costly to maintain than originally planned. We recommended that DOD identify specific and measurable reliability and maintainability objectives in its guidance. Link reliability and maintainability improvement projects to the associated objectives: In April 2019, we reported that the F-35 program had not aligned its planned reliability and maintainability improvement projects with reliability and maintainability goals, which could put the program at risk of not meeting those goals. We recommended that DOD identify and document in guidance which reliability and maintainability improvement projects will achieve the identified objectives. Prioritize funding for reliability and maintainability improvement: In April 2019, we reported that the F-35 program office had estimated potential life-cycle cost savings of more than $9.2 billion from implementing the reliability and maintainability improvement projects in its plan, but had not prioritized or dedicated funding in its budget necessary to carry out the projects. As a result, projects had been prematurely suspended or delayed. We recommended that the F-35 program office prioritize funding for the reliability and maintainability improvement plan. Re-examine the metrics DOD will use to hold the contractor accountable: In October 2017, we reported that DOD might not be using the appropriate performance metrics under trial performance- based agreements to achieve desired outcomes or hold the contractor accountable for performance. We recommended that DOD re- examine the metrics that it will use to hold the contractor accountable under the fixed-price, performance-based contracts, to ensure that such metrics are objectively measurable, are fully reflective of processes over which the contractor has control, and drive desired behaviors by all stakeholders. Delay entering into multi-year, fixed-price, performance-based contracts: In October 2017, we reported that DOD was moving quickly toward negotiating longer-term performance-based contracts without a sufficient understanding of the actual costs and technical characteristics of the aircraft, which put DOD at risk of overpaying for sustainment support that is not sufficient to meet warfighter requirements. We recommended that, before DOD enters into multi- year, fixed-price, performance-based contracts, it ensure that it has sufficient knowledge of the actual costs of sustainment and technical characteristics of the aircraft at system maturity. Obtain comprehensive cost information for F-35 spare parts: In April 2019, we reported that DOD did not have comprehensive cost information for individual F-35 spare parts, and that it faced challenges in obtaining this information from the prime contractor. 
This lack of cost information impedes DOD s ability to develop a complete understanding of the costs for the F-35 system and to effectively negotiate with the prime contractor for sustainment support. We recommended that DOD develop a methodical approach to consistently obtain comprehensive cost information from the prime contractor for F-35 spare parts within the supply chain. Formalize a methodology for recording military service funds spent on F-35 parts: In April 2019, we reported that the military services could not track the funds that they had spent for the purchase of F-35 spare parts to the actual parts on their financial statements, thereby hindering DOD s financial improvement and audit readiness efforts. We recommended that DOD complete and formalize a methodology for the U.S. services to use in recording on their financial statements the funds spent on F-35 parts within the global spares pool. Clearly define the F-35 supply chain management strategy: In April 2019, we reported that DOD was caught between two distinct sustainment concepts the program s official contractor-provided logistics support construct and DOD s effort to develop options for DOD-led supply chain management. Until DOD clearly defines its strategy for managing the F-35 supply chain in the future, the F-35 program will lack the certainty and unity of effort necessary to meaningfully improve supply chain performance and reduce costs. We recommended that DOD clearly define the strategy by which it will manage the F-35 supply chain in the future and update key strategy documents accordingly, to include any additional actions and investments necessary to support that strategy. DOD concurred with all of these recommendations. Seven of the preceding recommendations were made earlier this year, and we recognize that it will take time for DOD to implement them. However, DOD s attention to each of these recommendations is important to improving its long-term sustainment planning. In summary, DOD s costs to purchase the F-35 are expected to exceed $406 billion, and the department expects to spend more than $1 trillion to sustain its F-35 fleet. Thus, DOD must continue to grapple with affordability as it takes actions to increase the readiness of the F-35 fleet and improve its sustainment efforts to deliver an aircraft that the military services and partner nations can successfully operate and maintain over the long term within their budgetary realities. DOD s continued attention to our recommendations will be important as it balances these goals. We will continue to monitor DOD s efforts to implement our recommendations. Chairmen Garamendi and Norcross, Ranking Members Lamborn and Hartzler, and Members of the Subcommittees, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. <4. GAO Contact and Staff Acknowledgments> If you or your staff have questions about this testimony, please contact Diana Maurer, Director, Defense Capabilities and Management, at (202) 512-9627 or maurerd@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Alissa Czyz and Kasea Hamar (Assistant Directors); Jon Ludwigson, Vincent Buquicchio, Tracy Burney, Desiree Cunningham, Jeff Hubbard, Justin Jaynes, Amie Lesser, Sean Manzano, Jillena Roberts, Michael Silver, Maria Staunton, Tristan T. To, Cheryl Weissman, and Elisa Yoshiara. 
Related GAO Products F-35 Joint Strike Fighter: Action Needed to Improve Reliability and Prepare for Modernization Efforts. GAO-19-341. Washington, D.C.: April 29, 2019. F-35 Aircraft Sustainment: DOD Needs to Address Substantial Supply Chain Challenges. GAO-19-321. Washington, D.C.: April 25, 2019. F-35 Joint Strike Fighter: Development Is Nearly Complete, but Deficiencies Found in Testing Need to Be Resolved. GAO-18-321. Washington, D.C.: June 5, 2018. Warfighter Support: DOD Needs to Share F-35 Operational Lessons Across the Military Services. GAO-18-464R. Washington, D.C.: April 25, 2018. Military Aircraft: F-35 Brings Increased Capabilities, but the Marine Corps Needs to Assess Challenges Associated with Operating in the Pacific. GAO-18-79C. Washington, D.C.: March 28, 2018. F-35 Aircraft Sustainment: DOD Needs to Address Challenges Affecting Readiness and Cost Transparency. GAO-18-75. Washington, D.C.: October 26, 2017. F-35 Joint Strike Fighter: DOD s Proposed Follow-on Modernization Acquisition Strategy Reflects an Incremental Approach Although Plans Are Not Yet Finalized. GAO-17-690R. Washington, D.C.: August 8, 2017. F-35 Joint Strike Fighter: DOD Needs to Complete Developmental Testing Before Making Significant New Investments. GAO-17-351. Washington, D.C.: April 24, 2017. F-35 Joint Strike Fighter: Continued Oversight Needed as Program Plans to Begin Development of New Capabilities. GAO-16-390. Washington, D.C.: April 14, 2016. F-35 Sustainment: DOD Needs a Plan to Address Risks Related to Its Central Logistics System. GAO-16-439. Washington, D.C.: April 14, 2016. F-35 Joint Strike Fighter: Preliminary Observations on Program Progress. GAO-16-489T. Washington, D.C.: March 23, 2016. F-35 Joint Strike Fighter: Assessment Needed to Address Affordability Challenges. GAO-15-364. Washington, D.C.: April 14, 2015. F-35 Sustainment: Need for Affordable Strategy, Greater Attention to Risks, and Improved Cost Estimates. GAO-14-778. Washington, D.C.: September 23, 2014. F-35 Joint Strike Fighter: Slower Than Expected Progress in Software Testing May Limit Initial Warfighting Capabilities. GAO-14-468T. Washington, D.C.: March 26, 2014. F-35 Joint Strike Fighter: Problems Completing Software Testing May Hinder Delivery of Expected Warfighting Capabilities. GAO-14-322. Washington, D.C.: March 24, 2014. F-35 Joint Strike Fighter: Restructuring Has Improved the Program, but Affordability Challenges and Other Risks Remain. GAO-13-690T. Washington, D.C.: June 19, 2013. F-35 Joint Strike Fighter: Program Has Improved in Some Areas, but Affordability Challenges and Other Risks Remain. GAO-13-500T. Washington, D.C.: April 17, 2013. F-35 Joint Strike Fighter: Current Outlook Is Improved, but Long-Term Affordability Is a Major Concern. GAO-13-309. Washington, D.C.: March 11, 2013. Joint Strike Fighter: DOD Actions Needed to Further Enhance Restructuring and Address Affordability Risks. GAO-12-437. Washington, D.C.: June 14, 2012. Joint Strike Fighter: Restructuring Added Resources and Reduced Risk, but Concurrency Is Still a Major Concern. GAO-12-525T. Washington, D.C.: March 20, 2012. Joint Strike Fighter: Implications of Program Restructuring and Other Recent Developments on Key Aspects of DOD s Prior Alternate Engine Analyses. GAO-11-903R. Washington, D.C.: September 14, 2011. Joint Strike Fighter: Restructuring Places Program on Firmer Footing, but Progress Is Still Lagging. GAO-11-677T. Washington, D.C.: May 19, 2011. 
Joint Strike Fighter: Restructuring Places Program on Firmer Footing, but Progress Still Lags. GAO-11-325. Washington, D.C.: April 7, 2011. Joint Strike Fighter: Restructuring Should Improve Outcomes, but Progress Is Still Lagging Overall. GAO-11-450T. Washington, D.C.: March 15, 2011. Why GAO Did This Study
DOD's F-35 Lightning II fighter aircraft provides key aviation capabilities to support the U.S. National Defense Strategy. The F-35 is also DOD's most costly weapon system, with U.S. sustainment costs estimated at more than $1 trillion over its life cycle. As of October 2019, there were more than 435 U.S. and international F-35 aircraft in operation, with more than 3,300 aircraft expected to be fielded throughout the life of the program. While there is little doubt that the F-35 brings unique capabilities to the U.S. military, DOD faces significant challenges in sustaining a growing fleet.
This statement discusses F-35 sustainment challenges. It also summarizes GAO's open recommendations related to these challenges.
This statement is based on previously published work since 2014 related to F-35 acquisition, sustainment, affordability, ALIS, operations, and the global supply chain.
What GAO Found
The Department of Defense (DOD) faces challenges in sustaining a growing F-35 fleet. This statement highlights three challenges DOD has encountered related to F-35 sustainment, based on prior GAO work (see figure).
As a result of these challenges, F-35 performance has not met warfighter requirements. While DOD works to address these issues, it must also grapple with affordability. DOD has determined that it will need to significantly reduce F-35 sustainment costs—by 43 percent per aircraft, per year in the case of the Air Force—in order for the military services to operate the F-35 as planned.
Continued attention to GAO's recommendations in these areas will be important as DOD takes actions to improve F-35 sustainment and aircraft performance for the warfighter.
What GAO Recommends
GAO has 21 recommendations related to the challenges described in this statement that DOD has not fully implemented. DOD generally concurred with all 21 recommendations. Continued attention to these recommendations is needed by DOD to successfully operate and sustain the F-35 fleet over the long term within budgetary realities.
<1. Background> State had 22,806 full-time, permanent, career employees at the end of fiscal year 2018, an increase of more than 38 percent from fiscal year 2002. Over this period, the number of full-time, permanent, career employees in State's Civil Service rose by nearly 40 percent, from 6,831 in fiscal year 2002 to 9,546 in fiscal year 2018. Over the same period, the number of full-time, permanent, career employees in State's Foreign Service increased by 36 percent, from 9,739 to 13,260. To increase diversity in its workforce, State carries out a variety of efforts focused on recruiting and retention. For example, the Thomas R. Pickering Foreign Affairs Fellowship Program and Charles B. Rangel International Affairs Program recruit diverse candidates for the Foreign Service by providing graduate fellowships to college seniors and college graduates. Additionally, according to State officials, recruiters for the department participate in career fairs and discussion panels and host information sessions at conferences with a focus on diversity and inclusion, such as those held by the Hispanic Association of Colleges and Universities and the Congressional Black Caucus Foundation. Some regional and functional bureaus also undertake efforts to increase diversity. According to State's Senior Advisor for Diversity, Inclusion, and Outreach, bureau leaders set the tone and provide support for bureau-level initiatives. The Equal Employment Opportunity Commission's (EEOC) Management Directive 715 (MD-715) provides policy guidance and standards for establishing and maintaining effective affirmative programs of equal employment opportunity. Through MD-715, EEOC directs federal agencies to regularly evaluate their employment practices to identify barriers to equal opportunity in the workplace, take measures to eliminate identified barriers, and report annually on these efforts to EEOC. <2. Overall Proportion of Racial or Ethnic Minorities at State Has Grown, but Proportions of African Americans and Women Have Fallen Proportion of Racial or Ethnic Minorities at State Increased, While Proportion of African Americans Decreased> Among State's full-time, permanent, career employees, the proportion of racial or ethnic minorities grew from 28 percent in fiscal year 2002 to 32 percent in fiscal year 2018. During this period, as figure 1 shows, the proportion of racial or ethnic minorities in the Civil Service decreased slightly, from 44 to 43 percent, and the proportion of racial or ethnic minorities in the Foreign Service increased from 17 to 24 percent. Although the overall proportion of racial or ethnic minorities at State increased from fiscal year 2002 to fiscal year 2018, the direction of change for specific racial or ethnic minority groups varied, as shown in figure 1. The proportion of African Americans at State overall declined from 17 percent in fiscal year 2002 to 15 percent in fiscal year 2018. The proportion of African Americans in State's Civil Service decreased from 34 to 26 percent, while the proportion of African Americans in State's Foreign Service increased from 6 to 7 percent. The proportions of Hispanics, Asians, and other racial or ethnic minorities at State overall and in both the Civil and Foreign Services increased by varying percentages from fiscal year 2002 to fiscal year 2018. As figure 2 shows, the proportions of racial or ethnic minorities in the Civil and Foreign Services were generally much smaller in higher ranks in fiscal year 2018.
The proportion of racial or ethnic minorities in fiscal year 2018 was lower than the proportion of whites at GS-11, GS-13, and higher ranks in the Civil Service and at all ranks in the Foreign Service. The proportion of racial or ethnic minorities in fiscal year 2018 was progressively lower in each rank above GS-12 in the Civil Service and above Class 5 in the Foreign Service. <2.1. Proportion of Women at State Decreased> Among State s full-time, permanent, career employees, the overall proportion of women at State decreased slightly, from 44 percent in fiscal year 2002 to 43 percent in fiscal year 2018. During this period, as figure 3 shows, the proportion of women in State s Civil Service decreased from 61 to 54 percent and the proportion of women in State s Foreign Service increased from 33 to 35 percent. In addition, the proportion of women at State was generally lower than that of men in the higher ranks of both the Civil and Foreign Services in fiscal year 2018, as figure 4 shows. The proportion of women was lower than the proportion of men at GS- 14 and higher ranks in the Civil Service and at Class 4 and higher ranks in the Foreign Service in fiscal year 2018. For example, the proportion of women at Class 4 was 36 percent, while the proportion of men was 64 percent. The proportion of women in the Civil and Foreign Services in fiscal year 2018 was generally progressively smaller from the lower to the higher ranks. <3. Promotion Outcomes Were Generally Lower for Racial or Ethnic Minorities Than for Whites and Differed for Women Relative to Men> Our analyses of State data for fiscal years 2002 through 2018 found differences between promotion outcomes for racial or ethnic minorities relative to whites and for women relative to men. We found these differences when conducting descriptive analyses, which calculated simple averages, as well as adjusted analyses, which controlled for certain individual and occupational factors other than racial or ethnic minority status and gender that could influence promotion. Our analyses do not completely explain the reasons for differences in promotion outcomes, which may result from various unobservable factors. Thus, our analyses do not establish a causal relationship between demographic characteristics and promotion outcomes. The following are some highlights of our analysis. Promotion outcomes in State s Civil Service were generally lower for racial or ethnic minorities than for whites. Our descriptive analysis of State data for fiscal years 2002 through 2018 found that rates of promotion from GS-11 through the executive rank were 16.1 to 42.0 percent lower for racial or ethnic minorities in the Civil Service than for their white counterparts, depending on the GS level. Our adjusted analysis, controlling for factors other than race or ethnicity that could influence promotion, found that racial or ethnic minorities in the Civil Service were 4.3 to 29.3 percent less likely to be promoted from GS-11 through the executive rank than their white counterparts. Promotion rates in State s Foreign Service were generally lower for racial or ethnic minorities than for whites, but the differences in promotion odds were generally not statistically significant. Our descriptive analysis of State data for fiscal years 2002 through 2018 found that, relative to whites, the rate of promotion for racial or ethnic minorities in the Foreign Service was 5.0 to 15.8 percent lower for promotions from Class 4 through Class 1. 
Controlling for factors other than race or ethnicity that could influence promotion, our adjusted analysis found that differences in the odds of promotion for racial or ethnic minorities and whites were generally not statistically significant. However, the odds of promotion from Class 4 to Class 3 were statistically significantly lower for racial or ethnic minorities than for their white counterparts. Promotion rates were generally lower for women than men in State s Civil Service, but differences in the odds of promotion were not statistically significant. Our descriptive analysis of State data for fiscal years 2002 through 2018 found that the rate of promotion in the Civil Service was generally lower for women than for men. Specifically, for promotions from GS-11 through the executive rank, promotion rates for women were generally 0.7 to 11.6 percent lower than the promotion rates for men, depending on the GS level. However, our adjusted analysis, controlling for factors other than gender that could influence promotion, did not find any statistically significant differences in the odds of promotion for women and men in the Civil Service. Our adjusted analysis found that the odds of promotion were generally higher for women than men in State s Foreign Service. Our descriptive analysis of State data for fiscal years 2002 through 2018 found that women in the Foreign Service experienced a higher rate of promotion than men from Class 3 to Class 2 and from Class 2 to Class 1. Our adjusted analysis, controlling for factors other than gender that could influence promotion, found that women in the Foreign Service had higher odds of promotion than men in early to mid career. For example, the odds of promotion from Class 4 to Class 3 were 9.4 percent higher for women than for men. <4. State Has Identified Some Diversity Issues but Should Consider Other Issues That Could Indicate Potential Barriers> State has identified some diversity issues in its reports to EEOC. As table 1 shows, in fiscal years 2009 through 2018, State s annual MD-715 reports identified and analyzed a total of 11 diversity issues related to participation of racial or ethnic minorities and women. State identified most of these issues in multiple years. However, State employee groups and our analysis have identified additional diversity issues, such as differences in promotion outcomes for racial or ethnic minorities relative to whites in early to mid career. For example, during our structured interviews with 11 employee groups, representatives of the groups discussed a variety of issues related to diversity at State. Examples include the following: Employee group representatives expressed concern about representation of minorities in the higher ranks of both the Civil and Foreign Services. For example, representatives told us that for some minority groups, it is difficult to be promoted above the GS-13 level. Employee group representatives voiced perceptions that it takes longer for women and racial or ethnic minorities to be promoted. For example, representatives of one group told us that it takes longer for employees with diverse backgrounds to reach GS-13 in the Civil Service and Class 2 in the Foreign Service and that very few of these employees are promoted beyond those levels. We recommended that the Secretary of State take additional steps to identify diversity issues that could indicate potential barriers to equal opportunity in its workforce. 
For example, State could conduct additional analyses of workforce data and of employee groups' feedback. State concurred with the recommendation and noted that the agency will continue to work on initiatives to recruit, retain, develop, and empower a diverse, capable workforce. In conclusion, although State has implemented several plans, activities, and initiatives to improve diversity and representation throughout the ranks of its workforce, longstanding diversity issues (for example, underrepresentation of racial or ethnic minorities and women in the senior ranks) persist at the agency. Until State takes steps to explore such issues, it could be missing opportunities to investigate, identify, and remove barriers that impede members of some demographic groups from realizing their full potential. Chairman Castro, Ranking Member Zeldin, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time. <5. GAO Contact and Staff Acknowledgments> If you or your staff have any questions about this testimony, please contact Jason Bair, Director, International Affairs and Trade, at (202) 512-6881 or bairj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Emil Friberg (Assistant Director), Julia Jebo Grant (Analyst-in-Charge), Nisha Rai, Moon Parks, Justin Fisher, Melinda Cordero, Courtney Lafountain, Kathleen McQueeney, Dae Park, K. Nicole Willems, Reid Lowe, and Christopher Keblitis. Why GAO Did This Study
State has expressed a commitment to maintaining a diverse workforce and has undertaken efforts to increase diversity in its Civil and Foreign Services. EEOC directs federal agencies to regularly evaluate their employment practices to identify barriers to equal opportunity, take measures to eliminate any barriers, and report annually on these efforts. This testimony examines (1) the demographic composition of State's workforce in fiscal years 2002 through 2018; (2) any differences in promotion outcomes for various demographic groups in State's workforce; and (3) the extent to which State has identified any barriers to diversity in its workforce. For the January 2020 report on which this testimony is based (GAO-20-237), GAO analyzed State's data for its full-time, permanent, career workforce in fiscal years 2002 through 2018. GAO also analyzed the number of years until promotion from early career ranks to the executive rank in both the Civil and Foreign Services. (GAO's analyses do not completely explain the reasons for differences in promotion outcomes, which may result from various unobservable factors. Thus, GAO's analyses do not establish a causal relationship between demographic characteristics and promotion outcomes.) In addition, GAO reviewed State documents and interviewed State officials and employee group representatives.
What GAO Found
The overall proportion of racial or ethnic minorities in the Department of State's (State) full-time, permanent, career workforce grew from 28 to 32 percent from fiscal year 2002 to fiscal year 2018. The direction of change for specific groups varied. For instance, the proportion of African Americans fell from 17 to 15 percent, while the proportions of Hispanics, Asians, and other racial or ethnic minorities rose by varying percentages. The proportion of racial or ethnic minorities and women was lowest in the higher ranks of State's workforce.
GAO's analyses of State data for fiscal years 2002 through 2018 found differences in promotion outcomes for racial or ethnic minorities and whites and for men and women. GAO found these differences in both descriptive analyses (calculating simple averages) and adjusted analyses (controlling for certain individual and occupational factors that could influence promotion). For example, GAO's descriptive analysis of data for State's Civil Service found that rates of promotion for racial or ethnic minorities were 16 to 42 percent lower than for whites, depending on the rank. Similarly, after controlling for certain additional factors, GAO's adjusted analysis of these data found that promotion for racial or ethnic minorities was 4 to 29 percent less likely than for whites. Also, both types of analysis generally found that promotion outcomes for women relative to men were lower in the Civil Service and higher in the Foreign Service. For example, women in the Foreign Service were more likely than men to be promoted in early to mid career.
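To make the distinction between the descriptive and adjusted analyses concrete, the sketch below shows one common way an adjusted analysis of this kind is carried out: a logistic regression in which exponentiated coefficients are odds ratios for promotion, holding other observed factors fixed. This is an illustrative sketch only, not GAO's actual analysis code; the data are simulated and the variable names (for example, years_in_grade and occupation) are hypothetical stand-ins for the individual and occupational factors described above.

```python
# Illustrative sketch only -- not GAO's analysis code. Simulated records and
# hypothetical variable names are used to contrast a descriptive comparison
# (simple promotion rates) with an adjusted analysis (logistic regression).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=1)
n = 5000
df = pd.DataFrame({
    "minority": rng.integers(0, 2, n),          # 1 = racial or ethnic minority
    "female": rng.integers(0, 2, n),            # 1 = female
    "years_in_grade": rng.integers(1, 11, n),   # assumed control variable
    "occupation": rng.choice(["analyst", "attorney", "specialist"], n),
})

# Simulate promotion outcomes with a modest assumed gap, for illustration only.
log_odds = -1.5 - 0.25 * df["minority"] + 0.05 * df["years_in_grade"]
df["promoted"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-log_odds)))

# Descriptive analysis: simple average promotion rates by group.
print(df.groupby("minority")["promoted"].mean())

# Adjusted analysis: odds of promotion controlling for other observed factors.
fit = smf.logit(
    "promoted ~ minority + female + years_in_grade + C(occupation)", data=df
).fit(disp=False)
print(np.exp(fit.params))  # exp(coefficient) is an odds ratio
```

In this framing, an odds ratio below 1 on the minority indicator corresponds to lower adjusted odds of promotion for that group, which parallels how results such as "4 to 29 percent less likely" are typically expressed.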
State has identified some diversity issues, but it should consider other issues that could indicate potential barriers to diversity in its workforce. State's annual reports to the Equal Employment Opportunity Commission (EEOC) for fiscal years 2009 through 2018 identified issues such as underrepresentation of Hispanic employees and underrepresentation of minorities in the senior ranks. However, GAO's analysis and GAO's interviews with State employee groups highlighted additional issues that could indicate barriers to diversity. For example, State's reports have not identified lower promotion outcomes for racial or ethnic minorities relative to whites, which GAO found in its analysis. Until State takes steps to explore such issues, it could be missing opportunities to investigate and remove barriers that impede members of some demographic groups from realizing their full potential.
What GAO Recommends
In its January 2020 report, GAO recommended that State take additional steps to identify diversity issues that could indicate potential barriers to equal opportunity in its workforce. State concurred with this recommendation.
<1. Background> The Dodd-Frank Act was enacted to promote the financial stability of the United States by improving accountability and transparency in the financial system and protecting consumers from abusive financial services practices, among other purposes. To help detect and prevent securities misconduct, section 961 of the Dodd-Frank Act promotes complete and consistent performance of SEC staff examinations, investigations and reviews, and appropriate supervision of these activities through internal supervisory controls. SEC has submitted eight annual reports to Congress under section 961, all of which stated that both its internal supervisory controls and its staff procedures were effective for the period under review. In addition, all such reports stated that no significant deficiencies in internal supervisory controls were identified. Section 961 does not define internal supervisory control. SEC has defined internal supervisory controls as the processes established by management to monitor that the procedures applicable to staff (that is, established day-to-day procedures to be followed by the employees within the applicable programs) are consistently being performed according to policy and procedures, and also remain reasonable, adequate, and current. SEC is the primary regulator of the U.S. securities markets and is responsible for protecting investors, maintaining fair, orderly, and efficient markets, and facilitating capital formation. To fulfill this mission, SEC requires public companies to disclose meaningful financial and other information to the public, examines firms it regulates, and investigates potential violations of the federal securities laws. SEC is organized into five divisions and 24 offices. SEC's approximately 4,400 staff are located in Washington, D.C., and in 11 regional offices. As discussed previously, four divisions and offices are subject to section 961 of the Dodd-Frank Act (see table 1). SEC formalized its Section 961 Working Group in 2017. The primary purposes of the Working Group are to enhance the efficiency and effectiveness of SEC's processes related to section 961 compliance and to enhance coordination and information sharing among the divisions and offices. The Working Group is a staff-level group comprising one or more representatives from each of the four divisions and offices subject to section 961 as well as the Office of the Chief Operating Officer. These staff are responsible for carrying out the Working Group's responsibilities, which include establishing a common understanding and consistent approach to compliance; creating a means to share information and ideas to improve the efficiency and effectiveness of section 961 compliance activities; discussing best practices to streamline procedures and documentation of internal control testing and reporting; and developing and updating guidance related to implementing section 961. <1.1. Federal Internal Control Standards> Standards for Internal Control in the Federal Government provides the overall framework for establishing and maintaining internal control in federal agencies. Agency management is responsible for adapting the framework for an agency. Furthermore, an agency may use the framework to organize its development and implementation of internal controls and implement its standards throughout the agency or at an office level.
Five interrelated components and associated principles establish requirements for developing and maintaining an effective internal control system: Control environment: The control environment is the foundation for an internal control system. It provides discipline and structure, which affect the overall quality of internal control. It influences how objectives are defined and control activities are structured. The oversight body and management establish and maintain an environment throughout the entity that sets a positive attitude toward internal control. Risk assessment: Management assesses the risks facing the entity as it seeks to achieve its objectives. This assessment provides the basis for developing appropriate risk responses. Management assesses risks the entity faces from external and internal sources. Control activities: Control activities are the actions management establishes through policies and procedures to achieve objectives and respond to risks in the internal control system, which includes the entity s information system. Information and communication: Management uses quality information to support the internal control system. Effective information and communication are vital for an entity to achieve its objectives. Entity management needs access to relevant and reliable communication related to internal and external events. Monitoring: Internal controls are dynamic and have to be adapted continually to risks and changes an entity faces. Monitoring the internal control system is essential in helping internal control remain aligned with changing objectives, environment, laws, resources, and risks. Internal control monitoring assesses the quality of performance over time and promptly resolves the findings of audits and other reviews. Corrective actions are a necessary complement to control activities to achieve objectives. To be effective, an agency s internal control system must incorporate the five components of internal control in an integrated manner throughout its operations and on an ongoing basis. Once in place, internal control provides reasonable, not absolute, assurance of meeting agency objectives. When evaluating the design of internal control, management determines if controls individually and in combination are capable of achieving an objective and addressing related risks. To the extent a control does not fully achieve an objective or address related risks, it is deficient, and such deficiencies may be associated with a control s design or operation. A deficiency in design exists when a control necessary to meet a control objective is missing, or an existing control is not properly designed so that even if the control operated as designed, the control objective would not be met. A deficiency in operation exists when a properly designed control does not operate as designed or the person performing the control does not possess the necessary authority or competence to perform the control effectively. <1.2. Federal Managers Financial Integrity Act and SEC Assurance Statement> In addition to the requirements under section 961 of the Dodd-Frank Act, SEC must establish and maintain effective internal control and financial management systems that meet the objectives of the Federal Managers Financial Integrity Act of 1982 (FMFIA). FMFIA requires agencies to annually assess and report on the internal controls that protect the integrity of their programs and whether financial management systems conform to related requirements. 
In addition, FMFIA requires agencies to provide an assurance statement regarding the effectiveness of the agency s internal controls. SEC s internal controls for financial management systems are not included in this report because they are reported in our annual financial audit of SEC. In addition, all of SEC s internal controls including those which constitute internal supervisory controls are in scope for FMFIA. <2. SEC s Framework for Assessing the Effectiveness of Internal Supervisory Controls Reflected Internal Control Standards> In response to section 961 of the Dodd-Frank Act, the Working Group put in place a framework that provides guidance for division and office staff responsible for assessing the effectiveness of internal supervisory controls (control framework). The control framework draws on external sources such as federal internal control standards as well as internal documents such as SEC s Reference Guide for Compliance with Section 961 of the Dodd-Frank Act, the Risk Management and Internal Control Review Reference Guide from the Office of the Chief Operating Officer, and the charter for the Working Group. These internal documents include definitions, criteria, and other guidance and together compose SEC s control framework. For example, the control framework includes time frames for when divisions and offices should assess their internal supervisory controls and report findings to Congress (see fig. 1). SEC s control framework consists of three phases risk assessment, internal supervisory control testing, and communication of results during which division and office staff conduct activities to systematically assess and report on the effectiveness of their internal supervisory controls (see fig. 2 for examples). <2.1. Changes to SEC s Control Framework Included Refining Guidance and Classification of Internal Supervisory Controls> Changes to SEC s control framework since our last review (which focused on fiscal years 2013 2015) include revisions to key guidance documentation and reclassification of some controls (as nonsupervisory controls). The Working Group revised elements of its control framework documentation since our last review. First, the Working Group streamlined the Reference Guide for Compliance with Section 961 by removing direct guidance for example, steps staff should take to assess the design and operation of internal supervisory controls and replaced it with references to the Risk Management and Internal Control Review Reference Guide. Second, the Working Group also updated other information such as the agency s definition for internal supervisory control. Third, some divisions and offices changed which controls they considered to be internal supervisory controls subject to section 961 assessments. As stated previously, SEC defines internal supervisory controls as the processes established by management to monitor that procedures applicable to staff (the established day-to-day procedures to be followed by the employees within the applicable program) are consistently being performed according to policy and procedures, and also remain reasonable, adequate, and current. Division and Office officials elaborated further, stating they only consider controls that are supervisory in nature and directly related to the consistent and complete execution of examinations of registered entities, enforcement investigation, or reviews of corporate financial securities filings to be internal supervisory controls relevant to section 961. 
More specifically, OCIE reduced the number of controls it classified as internal supervisory controls from 40 in fiscal year 2015 to 14 in fiscal year 2018 by reclassifying some controls as nonsupervisory controls and by consolidating others (see table 2). For example, OCIE no longer classifies examination program strategy and selection controls (such as development and dissemination of examination program goals) as internal supervisory controls. Therefore, the controls are no longer assessed under section 961. OCIE officials explained that the strategy and selection of controls are performed by management and related to the selection of registrants for examinations, and not to staff conducting examinations consistently with professional competence and integrity. Similarly, the number of internal supervisory controls Corporation Finance maintained decreased from 25 in fiscal year 2015 to eight in fiscal year 2018. Corporation Finance officials told us that they determined that certain controls previously considered relevant to section 961 did not represent processes that fall within the core function of reviewing corporate financial securities filings and thus should not be considered internal supervisory controls under section 961. Enforcement maintained 25 internal supervisory controls from fiscal year 2015 to fiscal year 2018, while OCR had 13 14 internal supervisory controls during the same period. <2.2. SEC s Control Framework Reflected Internal Control Standards> As of the end of fiscal year 2018, SEC s control framework continued to reflect key components of internal control. We compared the framework against federal internal control standards. Specifically, we assessed whether the control framework reflected the five components of internal control control environment, risk assessment, control activities, information and communication, and monitoring. We determined that SEC s control framework included attributes covering each of the components. For example, the framework included oversight structures to monitor the design and operation of division and office internal supervisory controls, assigned responsibilities to division and office staff, incorporated steps for staff to follow to assess risks and test internal supervisory controls, and included mechanisms to correct deficiencies and report findings to internal and external stakeholders (such as Congress). See table 3 for additional examples that illustrate how the control framework reflected relevant standards. <3. SEC Lacks Policies and Procedures to Systematically Assess the Effectiveness of Staff Procedures> Divisions and offices have not developed written policies and procedures to ensure that they systematically assess the effectiveness of procedures applicable to staff who perform examinations of registered entities, enforcement investigations, and reviews of corporate financial securities filings. As mentioned previously, the report required under section 961 of the Dodd-Frank Act must include an assessment of the effectiveness of both internal supervisory controls and staff procedures. Division and office officials told us that they used findings and conclusions from their internal supervisory control assessments to support their conclusions that staff procedures were effective. 
As discussed earlier, SEC defines internal supervisory controls to include two types of processes used by managers: (1) those used to monitor whether staff follow existing procedures and (2) those used to monitor whether the procedures remain reasonable, adequate, and current. We found that SEC s assessments of internal supervisory controls did not directly assess the effectiveness of staff procedures for three primary reasons. First, the controls included in SEC s assessment generally consist of processes that monitor whether staff follow existing procedures, not processes that monitor whether the procedures remain reasonable, adequate, and current. Second, SEC s assessments of internal supervisory control focus on evaluating the extent to which managers executed the controls for which they are responsible. Although the controls monitor whether staff follow underlying procedures, the control assessments do not directly address whether those underlying staff procedures are designed to effectively achieve their stated objectives (for example, identifying and mitigating securities misconduct by securities market participants). Lastly, documentation of division and office internal supervisory control assessments did not speak to how division and office staff reached conclusions that procedures applicable to staff were effective. In addition to findings from internal supervisory control assessments, SEC officials also told us about policies and procedures, compliance testing, and other activities that provide information regarding the effectiveness of staff procedures. Corporation Finance officials further elaborated by stating that there is no single or discrete assessment to test the effectiveness of staff procedures. Rather, the officials explained that the division relies on activities performed throughout the year that contribute to the evaluation of the effectiveness of staff procedures. Examples of activities all or some divisions and offices referenced included the following: Enforcement, Corporation Finance, OCIE, and OCR officials told us that senior management from each division or office monitor the effectiveness of their programs throughout the year to help assess the effectiveness of staff procedures. Examples of monitoring activities include discussions with staff and subject-matter experts who perform examinations of registered entities, enforcement investigations, and reviews of corporate financial securities filings. OCR and Corporation Finance provided examples of documentation for these activities. Enforcement, Corporation Finance, OCIE, and OCR provided documentation that showed they developed review teams, task forces, projects, or initiatives that review specific policies or risks, which can result in updates to procedures. Corporation Finance, OCIE, and OCR officials told us that they have implemented reviews and redesigns of their policies and procedures through periodic reviews of their respective program manuals. See table 4 below for additional examples of activities that divisions and offices referenced as assessments of the effectiveness of staff procedures. The activities mentioned above could provide valuable information for staff who perform examinations of registered entities, enforcement investigations, and reviews of corporate financial securities filings, but they do not represent systematic assessments for the purposes of section 961. 
In particular, these activities varied between divisions and offices, mostly were implemented on an irregular basis, and were not established through written policies or procedures. In addition, none of the divisions and offices provided documentation linking the results of these, or any other, activities to the conclusions in SEC s annual reports to Congress under section 961, each of which have stated that SEC s staff procedures were effective for the period under review. Furthermore, only Corporation Finance officials told us that they discuss the effectiveness of staff procedures with their Director when they present their annual internal supervisory control assessment findings. As stated previously, the control framework includes an oversight structure, timelines, evaluation criteria, and documentation requirements, and SEC considers its control assessments under the framework to represent assessments of the effectiveness of staff procedures. However, SEC has not developed detailed policies, procedures, or guidance for assessing the effectiveness of staff procedures for the purposes of section 961. For example, the activities that divisions and offices referenced as assessing the effectiveness of staff procedures were not established through written policies for section 961-reporting purposes. And, existing guidance documents such as the Reference Guide for Compliance with Section 961 do not include steps or documentation requirements for assessing staff procedures. Federal internal control standards state the importance for agency management to establish policies and procedures to achieve objectives. Because divisions and offices lack written policies and procedures for assessing the effectiveness of staff procedures, each uses informal methods and varied processes instead of a systematic approach that document how each division and office reached its conclusions (that staff procedures were effective) in SEC s annual section 961 report to Congress. Establishing written policies and procedures for systematically assessing the effectiveness of staff procedures would provide SEC with greater assurance that the procedures were effective in the context of section 961 and would help divisions and offices meet objectives. <4. Selected Controls Were Designed Consistent with Standards, but Some Lacked Directions for Implementing Control Activities> To evaluate the extent to which SEC s internal supervisory controls met federal internal control standards and SEC guidance, we evaluated a non-generalizable sample of internal supervisory controls. We assessed whether (1) controls were designed to address objectives and respond to risks and (2) control activities were implemented through policies. We discuss below our findings related to the 39 internal supervisory controls that SEC identified as related to section 961. See appendix II for an example of the template we used to evaluate the controls. <4.1. All of the Selected Controls Addressed Identified Objectives and Risks> All 39 internal supervisory controls that we evaluated incorporated design elements to achieve SEC s control objectives and respond to risks that SEC identified. 
We assessed the overall design of selected internal supervisory controls against four design elements identified in federal internal control standards: Control activities should respond to identified objectives and risks, Appropriate types of control activities should be used, Control activities should be designed at the appropriate levels of the organization (Director, Assistant Director, Branch Chief, etc.), and Control activity duties should be segregated where practical. We found that, for the selected controls, each division and office designed control activities to respond to identified objectives and risks by identifying the risks addressed by each control and the control objective (how a control will address the associated risk) in their risk and control matrixes. In their risk and control matrixes, the divisions and offices also have established characteristics identified by relevant standards as important for designing appropriate controls, including the control frequency, control owner, and whether a control is automated or manual, preventive or detective, and key or secondary. To ensure that control activities are designed at the appropriate levels, each division and office identified control owners in their risk and control matrixes, and in the control descriptions they identified the job title of staff responsible for executing the controls. Finally, the divisions and offices segregated control duties in cases in which the need for such segregation was apparent. For example, a second review by a higher-level official was included in some controls that required approval decisions. For the results of our control design assessments, see appendix III. <4.2. Some Control Activity Descriptions Lacked Sufficient Information for Implementation and Monitoring> Ten of the 39 controls we evaluated lacked key information needed to help ensure execution of the control activities (see table 5). Federal internal control standards state that documentation is required for the effective design, implementation, and operating effectiveness of an entity's internal control system, including documentation of internal control responsibilities through policies. We assessed SEC's documented control activities against three key attributes identified in federal internal control standards: Establishment of procedures to support control execution, Assignment of responsibility for control execution, and Establishment of time frames for control execution. Two or three of the selected controls from each division and office did not incorporate key execution attributes, as seen in table 5. For the results of our control design assessments, see appendix III. Descriptions for many control activities did not specify procedures to be performed or, in some cases, include time frames, but all controls we assessed assigned responsibility for control execution (see table 6). More specifically, 10 of the 39 controls had no requirement to document execution of the control activities. For example, one Enforcement control and two Corporation Finance controls intended to monitor compliance with timeliness metrics did not include a requirement to document whether the control activities had been executed: that managers completed the review of the timeliness reports, noted if any cases were nearing the time frame threshold, or took appropriate actions in response. In addition, three of the 39 controls we reviewed did not include the control activity attribute of follow-up actions to be taken.
For example, the Corporation Finance timeliness controls discussed above also did not establish follow-up actions for cases in which a team or individual neared the timeliness threshold. Follow-up actions could include emailing or calling relevant staff when a timeliness threshold was within a certain number of days of being breached. The divisions and offices did not establish operational procedures for how the control activities would be performed in three of the 39 controls we reviewed. For example, an OCIE control intended to track enrollment and completion of new examiner training lacked underlying procedures for identifying or tracking training progress of new employees. The divisions and offices did not establish time frames for executing control activities in three of the 39 controls we reviewed. For example, while the Corporation Finance timeliness controls discussed above identified the reports to be reviewed, one of the two controls did not specify when the reports should be reviewed. By not incorporating key control attributes into their control activities, SEC may not have reasonable assurance that internal supervisory controls are effectively implemented. Some of the controls with weaknesses in one or more of the control attributes lacked documentation of the controls execution, which hindered our ability to test whether the controls operated as intended, as discussed in the next section. For example, two of the timeliness controls for Corporation Finance, described above, did not include a documentation requirement, and no documentation of control execution was created. In lieu of reviewing documentation of control execution, for SEC s assessment of the effectiveness of its internal supervisory controls, the divisions and offices asked supervisors twice a year (by email) whether they had executed this control weekly over the course of the year. Staff from some divisions and offices said the reason that control activity attributes were not included in some of the controls was because policies and procedures had been long established and orally communicated, but not written into the control activities. Based on Standards for Internal Control in the Federal Government, SEC developed a reference guide to provide guidance for identifying, documenting, and monitoring controls. The reference guide states that internal control activities should be written to describe the actual activities performed to meet the control objective, and at a minimum, identify control procedures and how they are to be executed, establish a documentation requirement for control execution, and assign responsibility and establish time frames for control execution. Following SEC guidance for developing control activities could help divisions and offices ensure evidence exists of control execution and better enable control monitoring by SEC, and oversight by external parties, such as GAO and the SEC Inspector General. In turn, better control monitoring would help ensure that SEC s internal supervisory controls are effectively implemented and that procedures necessary to achieve organizational objectives are followed. Furthermore, enhancing control activity descriptions would provide SEC greater assurance that staff have the information necessary to effectively implement the controls. <5. 
Assessed Controls Operated or Partially Operated as Intended, but Some Controls Could Not Be Assessed Because of Documentation Weaknesses> We selected 18 of 39 internal supervisory controls across the four divisions and offices to assess whether they operated as intended in fiscal year 2018. (See figure 3 for an overview of how we determined they operated as intended, partially operated as intended, or did not operate as intended.) As an example of how we conducted these assessments, we reviewed one OCIE control that called for manager approval at three points of an examination and additional assistant director approval to close the examination, as described in OCIE's control documentation. To assess whether this control operated as intended, we selected and reviewed a random, generalizable sample of examinations in OCIE's internal system to determine whether all of the control's activities (in this case, management approvals) had been executed. We could not assess some of the controls we selected because SEC did not provide sufficient documentation to allow us to determine whether the control operated as intended. For example, two of four Corporation Finance controls did not include a documentation requirement for weekly monitoring of staff compliance with internal policy. As a result, documentation did not exist for us to assess whether supervisors executed these control activities throughout the year. For more information on how we determined whether controls were operating as intended, see appendix I. <5.1. All of the Controls That Could Be Assessed Operated or Partially Operated as Intended> Of the 15 controls we could assess, 13 operated as intended and two partially operated as intended (see table 7). We could not assess three controls because sufficient documentation was not provided. More specifically, a control documentation requirement was not established for the three controls, as identified through our assessment of the controls' design, described earlier. We determined that two OCIE controls partially operated as intended. For example, while we found that 20 percent of sampled OCIE examinations were not approved within the designated deadline, all examinations were closed and included all required elements (see table 8). <5.2. Some Selected Controls Could Not Be Assessed Because Documentation of Control Execution Did Not Exist> We were unable to assess three of 18 selected controls because the divisions and offices did not provide sufficient documentation on the execution of control activities (see table 9). We found these controls lacked a documentation requirement for control execution in their control activity descriptions and did not produce sufficient documentation, which prevented us from determining whether these controls operated as intended. For example, two of four Corporation Finance controls did not include a requirement to document execution of the control activity (weekly monitoring of staff compliance with internal policy). Because these controls did not produce documentation of weekly monitoring throughout the year as prescribed in the control activity frequency, we did not receive documentation to allow us to assess whether supervisors executed these control activities on a weekly basis or, in some cases, at all. Additionally, we could not assess one selected OCIE control involving tracking of new employee training. <6.
Conclusions> To help detect and prevent securities misconduct, section 961 of the Dodd-Frank Act requires SEC to assess the effectiveness of both its internal supervisory controls and the procedures applicable to staff who perform examinations of registered entities, enforcement investigations, and reviews of corporate financial securities filings. While SEC has established a framework for systematically assessing the effectiveness of its internal supervisory controls, it has not established a framework for systematically assessing the effectiveness of staff procedures or documenting how SEC reached related conclusions about the procedures in its annual reports to Congress under section 961. Creating written policies and procedures to systematically assess the effectiveness of staff procedures and documenting the results of such assessments would provide SEC with greater assurance that the staff procedures are effective, a key objective of section 961. Every control we reviewed incorporated design elements to achieve SEC s control objectives and respond to risks that it identified. However, nine of the 39 controls did not incorporate one or more key attributes that would help ensure execution of the control, including documentation requirements, detailed procedures, identification of follow-up actions, assignment of responsibility for control execution, and time frames for control execution. Following SEC guidance for developing detailed control activities could help divisions and offices ensure evidence of control execution and better enable control monitoring by SEC and external parties, such as GAO and the SEC Inspector General. In turn, better control monitoring would help ensure that SEC s internal supervisory controls are effective and that procedures necessary to achieve organizational objectives are followed. Furthermore, enhancing control activity descriptions would provide SEC greater assurance that staff have the information necessary to effectively implement the controls. <7. Recommendations for Executive Action> We are making the following five recommendations to SEC. The SEC Chair should direct the Directors of the Division of Corporation Finance, Division of Enforcement, Office of Compliance Inspections and Examinations, and Office of Credit Ratings to develop written policies and processes to systematically assess the effectiveness of staff procedures (procedures applicable to staff who perform examinations of registered entities, enforcement investigations, and reviews of corporate financial securities filings). Examples of elements SEC could include in the policies and processes are the steps necessary to conduct such assessments, including time frames in which the assessments should be performed and reviewed; assignment of responsibilities related to the assessments; requirements for documenting assessments; and steps for staff to take to mitigate and report deficiencies identified as a result of the assessments. (Recommendation 1) The Director of the Division of Corporation Finance should ensure that all internal supervisory controls include documentation requirements, detailed procedures, identified follow-up actions, implementation time frames, and assignment of control execution responsibility, in accordance with SEC guidance and federal internal control standards for implementing control activities through documented policies. 
(Recommendation 2) The Director of the Division of Enforcement should ensure that all internal supervisory controls include documentation requirements, detailed procedures, identified follow-up actions, implementation time frames, and assignment of control execution responsibility, in accordance with SEC guidance and federal internal control standards for implementing control activities through documented policies. (Recommendation 3) The Director of the Office of Compliance Inspections and Examinations should ensure that all internal supervisory controls include documentation requirements, detailed procedures, identified follow-up actions, implementation time frames, and assignment of control execution responsibility, in accordance with SEC guidance and federal internal control standards for implementing control activities through documented policies. (Recommendation 4) The Director of the Office of Credit Ratings should ensure that all internal supervisory controls include documentation requirements, detailed procedures, identified follow-up actions, implementation time frames, and assignment of control execution responsibility, in accordance with SEC guidance and federal internal control standards for implementing control activities through documented policies. (Recommendation 5) <8. Agency Comments> We provided a draft of this report to SEC for review and comment. In written comments (reproduced in appendix VI), SEC agreed with our findings and concurred with our recommendations. In addition, SEC provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Chair of SEC, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-8678 or clementsm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. Appendix I: Objectives, Scope, and Methodology This report focuses on activities that fall within the purview of the Division of Corporation Finance (Corporation Finance), Division of Enforcement (Enforcement), Office of Compliance Inspections and Examinations (OCIE), and Office of Credit Ratings (OCR) at the Securities and Exchange Commission (SEC) to which we refer collectively as the divisions and offices. We examined (1) the extent to which SEC s internal supervisory control framework during fiscal years 2016 2018 reflected federal internal control standards; (2) how SEC evaluated the effectiveness of staff procedures in fiscal year 2018; (3) the extent to which selected controls in fiscal year 2018 were designed consistent with relevant standards; and (4) the extent to which selected controls operated as intended in fiscal year 2018. For our first objective, we obtained and reviewed relevant documentation on SEC s internal supervisory control framework for fiscal years 2016 2018 and interviewed division and office staff responsible for developing and updating the framework. We then assessed this framework against Standards for Internal Control in the Federal Government and determined the extent to which the framework reflected these standards. 
Specifically, we assessed the framework against the five components of internal control (control environment, risk assessment, control activities, information and communication, and monitoring) and the 17 principles associated with these components. We compared information on changes SEC made to its internal supervisory control framework with information from our previous review and federal internal control standards to determine the extent to which the framework continued to reflect internal control standards. For our second objective, we reviewed policies, procedures, and guidance documents (for fiscal year 2018) relating to SEC assessments of the effectiveness of procedures applicable to staff who perform examinations of registered entities, enforcement investigations, and reviews of corporate financial securities filings. We also interviewed SEC staff to obtain an understanding of the steps and activities that divisions and offices take to assess the effectiveness of staff procedures. We intended to assess how SEC assessed staff procedures to determine the extent to which SEC's assessments reflected federal internal control standards. However, as discussed in the report, we found SEC did not have a framework for assessing the effectiveness of staff procedures. We therefore examined policies, procedures, and guidance, but did not assess them against the components and principles associated with the federal standards for internal control. For our third objective, we used the policies, procedures, and control objectives to determine if the design of selected division and office internal supervisory controls in place during fiscal year 2018 was consistent with federal internal control standards and SEC guidance for designing internal controls. We developed an evaluation template and used it to assess selected controls from each division and office by having multiple analysts conduct independent reviews and then reached a final consensus by conducting a joint review with the same analysts. We used Standards for Internal Control in the Federal Government and The Committee of Sponsoring Organizations of the Treadway Commission's Internal Control Integrated Framework to develop our template. We also reviewed documents and interviewed staff to obtain a thorough understanding of the internal supervisory controls used to oversee the processes for conducting examinations of registered entities, enforcement investigations, and reviews of corporate financial securities filings. We selected for our review a non-generalizable sample of 53 controls in place during fiscal year 2018: 13 controls in Corporation Finance, 15 controls in Enforcement, 11 in OCIE, and 14 in OCR. We grouped these controls into sets because some underlying staff processes had multiple associated controls. In cases in which we selected a control that was part of a set, we would review every control in the associated set. We selected controls and control sets that SEC designated as being associated with processes that have the highest risk or potential impact on achieving stated objectives until we reached our target of 10 to 15 controls per division or office. Some control sets also contained controls that were not related to section 961. Therefore, to fully assess complete control sets associated with underlying processes, our selection contained some controls that were not related to section 961.
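To make the set-based selection described above concrete, the following minimal sketch illustrates one way it could be expressed in Python. GAO's selection was a judgmental, manual process, not an automated one; the data structure, field names, and stopping rule here are illustrative assumptions only.

```python
# Illustrative sketch only: control sets are assumed to be ordered from
# highest to lowest risk/impact, per SEC's designations.

TARGET_MIN = 10  # target of 10 to 15 controls per division or office

def select_controls(control_sets_by_risk):
    """Select whole control sets, highest risk first, until the target is met."""
    selected = []
    for control_set in control_sets_by_risk:
        # A set is reviewed in full, even when some of its controls fall
        # outside section 961, so the entire set is added at once.
        selected.extend(control_set)
        if len(selected) >= TARGET_MIN:
            break
    return selected

# Example with three hypothetical control sets for one office.
sets_for_office = [["C-01", "C-02", "C-03", "C-04"],
                   ["C-05", "C-06", "C-07"],
                   ["C-08", "C-09", "C-10", "C-11"]]
print(select_controls(sets_for_office))  # 11 controls selected
```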
However, in this report we only discuss and include analysis for those controls that SEC identified as related to section 961, which comprises 39 controls eight in Corporation Finance, 10 in Enforcement, eight in OCIE, and 13 in OCR. For our fourth objective, we developed an evaluation template for each control and conducted independent primary and secondary reviews to reach a final consensus on the operation of each control. The template was created using SEC s control activities and related policy and procedural documents we received as part of our design assessment. We used the template to determine the extent to which the execution of controls met the design criteria. Depending on the extent to which they met criteria established from control design documents, the selected controls were grouped under one of the following categories: (1) operated as intended, (2) partially operated as intended, (3) did not operate as intended, and (4) could not be assessed because control documentation did not exist due to design weaknesses, was not received, or was not relevant. Because the nature of controls varied, we evaluated controls by applying the factors below in conjunction with professional judgment. We focused on whether deficiencies would affect the implementation and operation of controls. For controls that operated as intended, we determined that the divisions and offices provided documentation demonstrating that all control activities were executed for the instances of control implementation we reviewed. We considered controls to have partially operated as intended if the documentation provided supported that only some control activities were executed or if at least one control activity did not operate as intended, but the overall control was executed for most instances. We did not identify any controls that did not operate as intended. This determination would have applied to controls for which we received sufficient documentation to assess the control s operation and for which the divisions and offices did not execute all control activities in most instances. For controls that we could not assess, we did not receive sufficient documentation that would enable us to make a determination of whether the control was executed or operated as intended. For these controls, we also used the results of our design assessments to determine whether the controls included a documentation requirement that would enable us to assess whether they operated as intended. We judgmentally selected a non-generalizable sample of 18 controls across all four divisions and offices from the population of 39 internal supervisory controls we reviewed in the third objective. We selected these controls based on factors such as whether they were classified as key to achieving objectives, high-risk, or having high potential impact on achieving stated objectives or likelihood of failure. We then created a generalizable, random sample of cases to review for eight controls, and we reviewed all instances for the remaining controls because they occurred annually or had few instances. In some cases, we conducted on-site testing in which we assessed samples of cases for controls by demonstrations of the divisions and offices internal systems. We conducted this performance audit from October 2018 to December 2019 in accordance with generally accepted government auditing standards. 
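The four-way categorization described above can be read as a simple decision rule. The sketch below is a hypothetical Python rendering of that rule; the field names, data structure, and the "most instances" threshold are illustrative assumptions rather than GAO's actual evaluation template.

```python
# Hypothetical rendering of the categorization logic; not GAO's actual template.

def categorize_control(instances):
    """Classify a control from the sampled instances of its execution.

    Each instance is a dict such as:
    {"documented": True,
     "required": ["manager approval", "closeout approval"],
     "executed": ["manager approval", "closeout approval"]}
    """
    documented = [i for i in instances if i["documented"]]
    if not documented:
        return "could not be assessed"  # no evidence of control execution

    fully_executed = [set(i["required"]) <= set(i["executed"]) for i in documented]

    if all(fully_executed):
        return "operated as intended"
    # "Most instances" threshold is an illustrative assumption.
    if sum(fully_executed) / len(fully_executed) >= 0.5:
        return "partially operated as intended"
    return "did not operate as intended"

sample = [
    {"documented": True, "required": ["manager approval", "closeout approval"],
     "executed": ["manager approval", "closeout approval"]},
    {"documented": True, "required": ["manager approval", "closeout approval"],
     "executed": ["manager approval"]},
]
print(categorize_control(sample))  # partially operated as intended
```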
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Template for GAO s Assessment of the Internal Supervisory Control Design of the Securities and Exchange Commission This appendix illustrates the template we used to assess the design of selected controls from each division and office that we reviewed at the Securities and Exchange Commission (SEC). For each control, we reviewed policies, procedures, and control objectives to determine if the design of the selected internal supervisory controls was consistent with federal internal control standards and SEC guidance for designing internal controls. Appendix III: GAO Testing Results for the Design of Selected Securities and Exchange Commission Controls, Fiscal Year 2018 To assess the extent to which design of the Securities and Exchange Commission s (SEC) internal supervisory controls was consistent with federal internal control standards and SEC guidance for designing internal controls, we reviewed 39 internal supervisory controls across the four divisions and offices in place during fiscal year 2018. We used the policies, procedures, and control objectives to determine if the controls designs were consistent with the standards and guidance. Appendix IV: Template for GAO s Assessment of the Operation of Internal Supervisory Controls by the Securities and Exchange Commission This appendix illustrates the template we used to assess the operation of selected Securities and Exchange Commission internal supervisory controls. For each control, we compared control activity descriptions, including policy and procedure documents to determine whether selected controls operated as intended. Appendix V: GAO Testing Results for Selected Securities and Exchange Commission Controls, Fiscal Year 2018 As part of our review, we tested 18 internal supervisory controls across four divisions and offices at the Securities and Exchange Commission (SEC) to determine whether they operated as intended. Controls were assessed using SEC s control activity descriptions, including related policy and procedure documents. For controls that operated as intended, SEC provided documentation demonstrating that all control activities were executed. We considered controls to have partially operated as intended if the documentation supported that only some control activities were executed or if at least one control activity did not operate as intended, but the overall control was executed. We did not identify any controls that did not operate as intended, but this would have applied to controls for which we received sufficient documentation and the divisions and offices did not execute all control activities. Controls that we could not assess lacked sufficient documentation that would have enabled us to determine whether they operated as intended. Appendix VI: Comments from the Securities and Exchange Commission Appendix VII: GAO Contact and Staff Acknowledgments <9. GAO Contact> <10. Staff Acknowledgments> In addition to the contact named above, Kevin Averyt (Assistant Director), Christopher Ross (Analyst in Charge), Aaron A. Colsher, Justin Fisher, Efrain Magallan, Marc Molino, Kirsten Noethen, Barbara Roesmann, and Farrah Stone made key contributions to this report. | Why GAO Did This Study
Section 961 of the Dodd-Frank Wall Street Reform and Consumer Protection Act directs SEC to assess and report annually on internal supervisory controls and procedures applicable to staff performing examinations, investigations, and securities filing reviews. The act also contains a provision for GAO to report on SEC's internal supervisory control structure and staff procedures. GAO's last report was in 2016 (GAO-17-16).
This report examines SEC's internal supervisory control framework and assessment of staff procedures, the design of selected controls, and the operation of selected controls.
GAO analyzed SEC's internal supervisory control framework and related policies and guidance and evaluated the design and execution of a non-generalizable sample of controls selected because they addressed high-risk processes.
What GAO Found
As of fiscal year 2018, the Securities and Exchange Commission's (SEC) internal supervisory control framework—which provides guidance for division and office staff responsible for assessing the effectiveness of internal supervisory controls—reflected federal internal control standards. GAO determined that SEC's framework included elements covering each of the five components of internal control—control environment, risk assessment, control activities, information and communication, and monitoring. However, SEC does not have written policies or guidance to ensure that relevant SEC divisions and offices systematically assess the effectiveness of procedures applicable to staff who perform examinations of registered entities, enforcement investigations, and reviews of corporate securities filings. Establishing such policies would provide SEC greater assurance that these procedures are effective at achieving their objectives.
All the SEC controls GAO evaluated were designed consistent with standards, and a majority operated as intended. SEC guidance and federal internal control standards state that (1) controls should be designed to address objectives and respond to risks and (2) control activities should be implemented through policies, including documentation requirements, and include detail to enable management to monitor control execution.
Control design. All 39 controls GAO evaluated included design elements to achieve SEC's control objectives and respond to risks it identified. However, 10 of these 39 controls did not include key attributes, such as requirements to document, and set time frames for, control execution (see fig.).
Control operation. GAO could not assess the operation of three of 18 selected controls because documentation of control execution did not exist. Of the remaining controls, 12 operated as intended and three partially operated as intended. Examples of controls that operated as intended include SEC's approval of examinations and tracking of investigations.
By more consistently following SEC guidance and federal internal control standards for developing control activities, including documentation requirements, relevant SEC divisions and offices would enhance their ability to monitor and ensure the effectiveness of their internal supervisory controls.
Legend: Corporation Finance = Division of Corporation Finance; Enforcement = Division of Enforcement; OCIE = Office of Compliance Inspections and Examinations; and OCR = Office of Credit Ratings.
Source: GAO analysis of Securities and Exchange Commission (SEC) documents. | GAO-20-115
What GAO Recommends
GAO is making five recommendations to SEC related to developing policies to assess the effectiveness of staff procedures and ensuring that all relevant divisions and offices follow SEC guidance and federal internal control standards for implementing control activities through documented policies. SEC agreed with the recommendations. |
gao_GAO-20-349T | gao_GAO-20-349T_0 | <1. DHS Has Taken Steps to Improve Its Employee Engagement Scores but Still Falls below the Government-Wide Average> In connection with the Strengthening DHS Management Functions high- risk area, we monitor DHS s progress in the area of employee morale and engagement. In 2010, we identified, and DHS agreed, that achieving 30 specific outcomes would be critical to addressing the challenges within the department s high-risk management areas. These 30 outcomes are the criteria by which we gauge DHS s demonstrated progress. We rate each outcome on a scale of not-initiated, initiated, partially addressed, mostly addressed, or fully addressed. Several of these outcome criteria relate to human capital actions needed to improve employee morale. Specifically, we monitor DHS s progress to: seek employees input on a periodic basis and demonstrate measurable progress in implementing strategies to adjust human capital approaches; base hiring decisions, management selections, promotions, and performance evaluations on human capital competencies and individual performance; enhance information technology security through improved workforce planning of the DHS cybersecurity workforce; and improve DHS s FEVS scores related to employee engagement. Since we began monitoring DHS s progress on these outcomes, DHS has worked to strengthen employee engagement through several efforts both at DHS headquarters and within its component agencies. In this statement, we discuss nine recommendations related to DHS employee engagement and workforce planning, eight of which have been implemented by the department. Within DHS, the Office of the Chief Human Capital Officer (OCHCO) is responsible for implementing policies and programs to recruit, hire, train, and retain DHS s workforce. As the department-wide unit responsible for human capital issues within DHS, OCHCO also provides guidance and oversight related to morale issues to the DHS components. Seeking employees input and demonstrating progress to adjust human capital approaches. DHS, OCHCO, and the components have taken action to use employees input from the FEVS to inform and implement initiatives targeted at improving employee engagement. For example, in 2017 and 2018 DHS implemented our two recommendations for OCHCO and DHS components to establish metrics of success within their action plans for addressing employee satisfaction problems and to better use these plans to examine the root causes of morale challenges. DHS components have continued to develop these employee engagement action plans and several components report implementing initiatives to enhance employee engagement. For example, the U.S. Secret Service s action plan details a sponsorship program for all newly hired and recently relocated employees. In addition, one division of U.S. Immigration and Customs Enforcement (ICE) used FEVS survey data to identify a need for increased engagement between employees and component leadership. ICE s employee action plan includes goals with milestones, timelines, and metrics to improve this engagement through efforts such as leadership town halls and leadership site visits. At the headquarters level, DHS and OCHCO have also established employee engagement initiatives across the department. For example, DHS established initiatives for employees and their families that aim to increase awareness and access to support programs, benefits, and resources. Through another initiative Human Resources (H.R.) 
Academy, DHS provides education, training, and career development opportunities to human resource professionals within the department. DHS uses an Employee Engagement Steering Committee to guide and monitor implementation of these DHS-wide employee engagement initiatives. As a result of these steps, among other actions, we have considered this human capital outcome area fully addressed since 2018. Basing hiring decisions and promotions on competencies and performance. OCHCO has conducted audits to better ensure components are basing hiring decisions and promotions on human capital competencies and individual performance, and we have considered this outcome fully addressed since 2017. Our past work has highlighted the importance of selecting candidates based on qualifications, as doing otherwise can negatively affect morale. Working to ensure that components' human capital decisions are based on performance and established competencies helps create a connection between individual performance and the agency's success. Enhancing information technology security through improved workforce planning for cybersecurity positions. In February 2018, we made six recommendations to DHS to take steps to identify its position and critical skill requirements among its cybersecurity workforce. Since then, DHS has implemented all six recommendations. For example, in fiscal year 2019, regarding its cybersecurity position identification and coding efforts, we verified that DHS had identified individuals in each component who are responsible for leading those efforts, developed procedures, established a process to review each component's procedures, and developed plans for reporting critical needs. However, DHS has not yet implemented a recommendation we made in March 2019 to review and correct its coding of cybersecurity positions and assess the accuracy of position descriptions. Specifically, we stated that DHS had not correctly categorized its information technology/cybersecurity/cyber-related positions. We noted that having inaccurate information about the type of work performed by 28 percent of the department's information technology/cybersecurity/cyber-related positions is a significant impediment to effectively examining the department's cybersecurity workforce, identifying work roles of critical need, and improving workforce planning. DHS officials stated that they plan to implement this recommendation by March 2020. As a result, this outcome remains mostly addressed. Until DHS accurately categorizes its positions, its ability to effectively identify critical staffing needs will be impaired. Improving FEVS scores on employee engagement. Since our last High-Risk report in March 2019, DHS has demonstrated additional progress in its employee engagement scores, as measured by the FEVS Employee Engagement Index (EEI). The EEI is one of three indices OPM calculates to synthesize FEVS data. The EEI measures conditions that lead to engaged employees and is comprised of three sub-indices related to employees' views on leadership, supervisors, and intrinsic work experience. As a result of continued improvement on DHS's EEI score, we have moved this outcome rating from partially addressed to mostly addressed based on DHS's 2019 score. As shown in figure 1, DHS increased its EEI score across 4 consecutive years, from a low of 53 percent in 2015 to 62 percent in 2019. In particular, DHS improved its score by two points between 2018 and 2019 while the government average remained constant over the same period.
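To make the trend concrete, the short sketch below works only from the figures cited in this statement; the 2018 DHS score and the 2019 government-wide score are implied values derived from those figures, not separately reported here.

```python
# Figures taken from the text above, in percentage points on OPM's
# Employee Engagement Index (EEI); derived values are back-of-the-envelope.
dhs_eei_2015 = 53
dhs_eei_2019 = 62
gain_2018_to_2019 = 2          # stated two-point improvement
gap_below_government_2019 = 6  # DHS remained six points below the average

dhs_eei_2018 = dhs_eei_2019 - gain_2018_to_2019                  # 60
government_eei_2019 = dhs_eei_2019 + gap_below_government_2019   # 68
total_gain_2015_to_2019 = dhs_eei_2019 - dhs_eei_2015            # 9 points

print(dhs_eei_2018, government_eei_2019, total_gain_2015_to_2019)
```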
With its 2019 score, DHS also regained the ground that it lost during an 8-point drop between 2010 and 2015. While DHS has made progress in improving its scores, including moving toward the government average, it remains below the government average on the EEI and on other measures of employee morale. For example, in 2019 DHS remained six points below the government-wide average for the EEI. In addition to the EEI and other indices OPM calculates, the Partnership for Public Service uses FEVS data to produce an index of the Best Places to Work in the Federal Government®. The Partnership for Public Service's analysis of FEVS data indicates low levels of employee satisfaction and commitment for DHS employees relative to other large federal agencies. In 2019, the Partnership for Public Service ranked DHS 17th out of 17 large federal agencies for employee satisfaction and commitment. Across the department, employee satisfaction scores vary by component. Some DHS components have EEI scores above the government average and rank highly on the Partnership for Public Service's index. For example, the U.S. Coast Guard and U.S. Citizenship and Immigration Services have EEI scores of 76 and 74, respectively, and rank 85th and 90th, respectively, out of 420 subcomponent agencies on the Partnership for Public Service's index. Further, some DHS component agencies have improved their scores in recent years. The U.S. Secret Service raised its EEI score 7 points between 2018 and 2019, and it moved from the last place among all subcomponent agencies on the Partnership for Public Service's ranking in 2016 to 360th out of 420 subcomponent agencies in 2019. However, other DHS component agencies continue to rank among the lowest across the federal government in the Partnership for Public Service rankings of employee satisfaction and commitment. For example, in 2019 out of 420 subcomponent agencies across the federal government, the DHS Countering Weapons of Mass Destruction office ranked 420th, the DHS Office of Intelligence and Analysis ranked 406th, and the Transportation Security Administration ranked 398th for employee satisfaction and commitment. As a result, continuing to increase employee engagement and morale remains important to strengthening DHS's management functions and ability to implement its missions. DHS employee concerns about senior leadership, among other things, are one area that negatively affects DHS's overall employee morale scores. In 2015, we identified effective management practices agencies can use to improve employee engagement across the government. One of these practices is the direct involvement of top leadership in organizational improvement efforts. When top leadership clearly and personally leads organizational improvement efforts, it provides an identifiable source for employees to rally around and helps processes stay on course. A DHS analysis of its 2012 FEVS scores indicated DHS's low morale issues may persist because of employee concerns about senior leadership and supervisors, among other things, such as whether their talents were being well-used. Within the 2019 FEVS results for both DHS and government-wide, leadership remains the lowest of the three sub-indices of the EEI. In addition, for several years DHS components have identified several root causes of engagement scores.
For example, in 2019, the Transportation Security Administration identified the performance of managers, time constraints and understaffing, and lack of manager and leadership accountability for change as root causes of the component s engagement scores in recent years. Another component, U.S. Citizenship and Immigration Services, identified in 2019 that the areas of leadership performance, accountability, transparency, and training and development opportunities were 2018 engagement score root causes. We have previously reported that DHS s top leadership, including the Secretary and Deputy Secretary, have demonstrated commitment and support for addressing the department s management challenges. Continuing to identify and address the root causes of employee engagement scores and addressing the human capital management challenges we have identified in relation to the DHS management high- risk area could help DHS maintain progress in improving employee morale. Implementing our recommendation to review and correct DHS coding of cybersecurity positions and assess the accuracy of position descriptions will assist the department in identifying critical staffing needs. In addition, as we reported in May 2019, vacancies in top leadership positions could pose a challenge to addressing aspects of DHS s high- risk area, such as employee morale. There are currently acting officials serving in ten positions requiring Senate confirmation. Filling vacancies including top DHS leadership positions and the heads of operational components with confirmed appointees, as applicable, could help ensure continued leadership commitment across DHS s mission areas. We will continue to monitor DHS s progress in strengthening management functions, and may identify additional actions DHS leadership could take to improve employee morale and engagement. In conclusion, DHS has made notable progress in the area of human capital management, specifically in improving employee engagement and morale, but still falls behind other federal agencies. It is essential for DHS to continue improving employee morale and engagement given their impact on agency performance and the importance of DHS s missions. Continued senior leadership commitment to employee engagement efforts and filling critical vacancies could assist DHS in these efforts. Madam Chairwoman Torres Small, Ranking Member Crenshaw, and Members of the Subcommittee, this completes my prepared statement, I would be happy to respond to any questions that you may have at this time. <2. GAO Contact and Staff Acknowledgments> If you or your staff have any questions concerning this statement, please contact Christopher P. Currie at (404) 679-1875 or curriec@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this statement were Alana Finley (Assistant Director), Mara McMillen (Analyst-in-Charge), Nina Daoud, Michele Fejfar, Andrew Howard, and Tom Lombardi. In addition, Colette Alexander, Richard Cederholm, Ben Crossley, Eric Essig, Laura Ann Holland, Tammi Kalugdan, Neelaxi Lakhmani, Shannin O Neill, Kevin Reeves, John Sawyer, and Julia Vieweg made contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Why GAO Did This Study
DHS is the third-largest cabinet-level department in the federal government, employing more than 240,000 staff in a broad range of jobs, including countering terrorism and homeland security threats, providing aviation and border security, emergency response, cybersecurity, and critical infrastructure protection. Since it began operations in 2003, DHS has faced challenges with low employee morale and engagement. Federal surveys have consistently found that DHS employees are less satisfied with their jobs compared to the average federal employee. For example, DHS's scores on the FEVS and the Partnership for Public Service's rankings of the Best Places to Work in the Federal Government® are consistently among the lowest for similarly-sized federal agencies.
This statement addresses our past and ongoing work monitoring human capital management and employee morale at DHS and select work on employee engagement across the government. This statement is based on products GAO issued from September 2012 through May 2019 as well as GAO's ongoing efforts to monitor employee morale at DHS as part of GAO's high-risk work. For these products, GAO analyzed DHS strategies and other documents related to DHS's efforts to address its high-risk areas, interviewed DHS officials, conducted analyses of FEVS data, and interviewed officials from other federal agencies that achieved high employee engagement scores, among other things.
GAO provided a copy of new information in this statement to DHS for review. DHS confirmed the accuracy of this information.
What GAO Found
The Department of Homeland Security (DHS) has undertaken initiatives to strengthen employee engagement through efforts at its component agencies and across the department. For example, at the headquarters level, DHS has instituted initiatives to improve awareness and access to support programs, benefits, and resources for DHS employees and their families.
In 2019, DHS improved its employee engagement scores, as measured by the Office of Personnel Management's Federal Employee Viewpoint Survey (FEVS)—a tool that measures employees' perceptions of whether and to what extent conditions characterizing successful organizations are present in their agency. As shown below, DHS increased its scores on a measure of employee engagement, the Employee Engagement Index (EEI), across 4 consecutive years, from a low of 53 percent in 2015 to 62 percent in 2019.
While DHS has made progress in improving its scores, in 2019 it remained six points below the government-wide average for the EEI. For several years, DHS and its component agencies have identified root causes for their engagement scores including concerns about leadership accountability and understaffing, among others. This statement discusses nine recommendations related to DHS employee engagement and workforce planning. DHS implemented all but one of these recommendations—to review and correct its coding of cybersecurity positions and assess the accuracy of position descriptions. Finally, filling vacancies could help ensure continued leadership commitment across DHS's mission areas. |
gao_GAO-19-696T | gao_GAO-19-696T_0 | <1. OPM and Agencies Need to Strengthen Efforts to Identify and Close Mission-Critical Skills Gaps> The federal government faces long-standing challenges in strategically managing its workforce. We first added federal strategic human capital management to our list of high-risk government programs and operations in 2001. Because skills gaps within individual federal agencies as well as across the federal workforce can lead to costly, less-efficient government, the issue has been identified as the focus of the Strategic Human Capital Management GAO high-risk area since February 2011. Our experience has shown that the key elements needed to make progress in high-risk areas are top-level attention by the administration and agency leaders grounded in the five criteria for removal from the High-Risk List, as well as any needed congressional action. The five criteria for removal are: (1) leadership commitment, (2) agency capacity, (3) existence of a corrective action plan, (4) program monitoring, and (5) demonstrated progress. Although Congress, OPM, and individual agencies have made improvements since 2001, federal human capital management remains a high-risk area because mission-critical skills gaps within the federal workforce pose a high risk to the nation. GAO, along with OPM and individual agencies, has identified mission critical skills gaps in numerous government-wide occupations. These skills gaps both within federal agencies and across the federal workforce impede the government from cost-effectively serving the public and achieving results. For example, the difficulties in recruiting and retaining skilled health care providers and human resource staff at Veterans Health Administration s (VHA) medical centers make it difficult to meet the health care needs of more than 9 million veterans. As a result, VHA s 168 medical centers have large staffing shortages, including physicians, registered nurses, physician assistants, psychologists, physical therapists, as well as human resource specialists and assistants. In October 2017, we reported that the VHA, within the Department of Veterans Affairs (VA), has opportunities to improve staffing, recruitment, and retention strategies for physicians that it identified as a priority for staffing, or mission-critical. For 2016, the top five physician mission- critical occupations were primary care, mental health, gastroenterology, orthopedic surgery, and emergency medicine. However, VHA was unable to accurately count the total number of physicians who provide care in its VA medical centers (VAMC). Additionally, VHA lacked data on the number of contract physicians and physician trainees. Five of the six VAMCs in our review used contract physicians or physician trainees to meet their staffing needs, but VHA had no information on the extent to which VAMCs nationwide use these arrangements. We also reported that VHA had not evaluated the effectiveness of its physician recruitment and retention strategies. One such strategy hiring physician trainees was weakened by ineffectual hiring practices, such as delaying employment offers until graduation. In February 2018, we reported that the Department of Homeland Security (DHS) had taken actions to identify, categorize, and assign employment codes to its cybersecurity positions, as required by the Homeland Security Cybersecurity Workforce Assessment Act of 2014; however, its actions were not timely and complete. 
While DHS has implemented four of our six recommendations from this report, two recommendations remain open. For example, DHS has not yet completed its efforts to identify all of the department s cybersecurity positions and accurately assign codes to all filled and vacant cybersecurity positions. Further, it has not yet fully developed guidance to assist DHS components in identifying their cybersecurity work categories and specialty areas of critical need that align to the National Initiative for Cybersecurity Education framework. Without ensuring that its progress in identifying and assigning codes to its positions is accurately reported and it has guidance to fully assist components, DHS will not be positioned to effectively examine its cybersecurity workforce, identify its critical skill gaps, or improve its workforce planning. In March 2019, we reported that 24 federal agencies generally assigned work roles to filled and vacant positions that performed information technology, cybersecurity, or cyber-related functions as required by the Federal Cybersecurity Workforce Assessment Act of 2015. However, most agencies had likely miscategorized the work roles of many IT positions. Until agencies accurately categorize their positions, the agencies may not have reliable information to form a basis for effectively examining their cybersecurity workforce, improving workforce planning, and identifying their workforce roles of critical need. Skills gaps caused by insufficient number of staff, inadequate workforce planning, and a lack of training in critical skills are contributing to our designating strategic human capital management and other areas as high risk. (See table 1.) Skills gaps affect individual agencies but also cut across the entire federal workforce in areas such as cybersecurity and acquisition management. As our 2019 analysis of federal high-risk areas shows, in addition to Strategic Human Capital Management, skills gaps played a role in 16 of the other 34 high-risk areas we have identified. Insufficient numbers of staff with critical skills can be related to staff retirements as well as to inadequate recruitment and hiring. Moreover, if not carefully managed, anticipated retirements could widen skills gaps or open new ones, adversely affecting agencies capabilities. As shown in figure 1, more than 31 percent of federal employees on board by the end of fiscal year 2017 will be eligible to retire in the next 5 years. <2. Key Strategies and Practices for Recruiting, Incentivizing and Engaging the Current and Future Federal Workforce> In March 2019, we reported on key talent management strategies that can help agencies better manage the current and future workforce. Below we focus on nine selected practices we identified related to recruiting, incentivizing, and engaging the federal workforce: Cultivate a diverse talent pipeline. In our prior work, we have noted the importance of active campus recruiting that goes beyond infrequent outreach to college campuses. Active campus recruiting includes developing long-term institutional relationships with faculty, administrators, and students. In addition, OPM guidance emphasizes that agencies should develop an inclusive approach to their talent acquisition strategies. This includes developing strategic partnerships with a diverse range of colleges and universities, trade schools, apprentice programs, and affinity organizations from across the country. Recruit continuously and start the hiring process early in the school year. 
The ability to hire students is critical to ensuring that agencies have a range of experience levels for succession planning and a talent pipeline to meet mission requirements. One of the key challenges agencies face in recruiting students is managing the timing of recruitment. The federal fiscal year begins on October 1 about when private sector firms we interviewed start recruiting on campus. Frequently, however, federal agencies have been unable to hire at this time of year because of the limitations of budget uncertainty. Yet if agencies wait to start the recruiting and hiring process until they receive funding, many graduates will have taken other job opportunities. Agencies can overcome these timing challenges by recruiting continuously and starting the hiring process early in the school year. To recruit continuously, Chief Human Capital Officers (CHCOs) from the U.S. Departments of Agriculture and Homeland Security said that they advertise funding-conditional positions throughout the year. Write user-friendly vacancy announcements. We previously reported that some federal job announcements were unclear. This can confuse applicants and delay hiring. In July 2018, OPM officials stated that agencies can develop more effective vacancy announcements when hiring managers partner with human resource (HR) staff. According to OPM, hiring managers can work with HR staff to identify the critical competencies needed in the job, develop a recruiting strategy, and ensure the job announcement accurately and clearly describes the required competencies and experience. To promote collaboration between hiring managers and HR staff, OPM is training agencies on the role of hiring managers in writing vacancy announcements, according to OPM officials. Strategically leverage available hiring and pay flexibilities. To help ensure agencies have the talent they need to meet their missions, we have found that federal agencies should have a hiring process that is simultaneously applicant friendly, sufficiently flexible to enable agencies to meet their needs, and consistent with statutory requirements, such as hiring on the basis of merit. Key to achieving this is the hiring authority used to bring applicants onboard. In March 2019, we reported that CHCOs cited the complex competitive examining process as a cause of the lengthy hiring time. This has been a long-standing concern. In our 2002 report on human capital flexibilities, we noted that for many years prior, federal managers had complained that competitive examining procedures were rigid and complex. Agencies can use a number of additional hiring authorities beyond competitive examining. These authorities can add flexibility to the process and CHCOs have expressed a desire for more. However, we previously found that agencies relied on only a small number of available authorities. In fiscal year 2014, of the 105 hiring authority codes used in total, agencies relied on 20 hiring authority codes to make around 90 percent of the new appointments. We recommended in 2016 that OPM use information from its reviews of agencies use of certain hiring authorities to determine whether opportunities exist to refine, consolidate, or expand agency-specific authorities, and implement changes where OPM is authorized, including seeking presidential authorization or developing legislative proposals if necessary. OPM agreed with our recommendation and has made progress in these areas, although more work is needed to follow through on planned actions to streamline authorities. 
For example, in December 2018, OPM said that it continues to research and examine streamlining opportunities, such as those identified in its July 2018 study on excepted service hiring authorities. However, OPM did not provide a time frame for implementation. In addition, in its March 2019 Congressional Justification for the Fiscal Year 2020 Budget Request, OPM included legislative proposals for new hiring authorities, such as authority for short-term appointments to allow agencies to appoint and compensate highly qualified experts to help agencies meet critical needs as well as a change to the criteria for granting direct hire authority. A variety of special pay authorities can help agencies compete in the labor market for top talent, but agencies only use them for a small number of employees. In fiscal year 2016, these incentives were used for less than 6 percent of employees. In December 2017, we reported that agencies can tap an array of special payments when they need to recruit or retain experts in engineering, cybersecurity, or other in-demand fields. These payments include, for example, incentives for recruitment or retention, or higher rates of pay for critical positions. We found that agencies reported that these payments were helpful, but few documented their effects, and OPM had not assessed their effectiveness. Further, in our March 2019 report, we found that less than 5 percent of employees received payments for recruitment or retention annually in the past 10 years. In December 2017, we made three recommendations to OPM, including for it to track the effectiveness of special payment authorities. OPM partially concurred with this recommendation, saying that agencies are in the best position to take this action. In December 2018, OPM stated that it established a baseline to measure changes in the use of special payment authorities over time, and that it is focused on government-wide, mission- critical occupations to help identify trends where there may be recruitment and retention difficulties. However, documents OPM provided gave no information on actions taken on this recommendation. We will continue to monitor OPM s actions to implement this recommendation. This is one of 18 priority recommendations in GAO s Priority Recommendations letter to OPM. Use relevant assessment methods and share hiring lists. In March 2019, we reported that CHCOs and OPM officials we interviewed stated that roadblocks to hiring the right skills include issues with assessment methods. Specifically, agencies may use methods that are less relevant for assessing the desired skills or agencies may experience issues incorporating multiple assessments in the hiring process. For example, one CHCO we interviewed said that her agency uses multiple-choice questions to assess candidates, but essay questions more effectively assess the skills she seeks. OPM issued guidance to agencies on how to use additional assessment methods, including how to rank applicants. Additionally, federal employee and management group representatives we spoke with said agencies could reduce the time of the assessment process by sharing hiring lists. The Competitive Service Act of 2015 allows agencies to share hiring lists, but agencies have only started to pilot the practice within departments, according to OPM officials we spoke with for our March 2019 report. OPM and agencies discussed sharing hiring certificates with the CHCO Council, and OPM is planning virtual training sessions on this topic. 
However, one federal employee group representative noted that to be consistent with merit principles, agencies may need to refresh the list every 2 to 3 months to give new candidates the opportunity to enter the application pool. Highlight agency mission and link to employees work. Agencies can help counter negative perceptions of federal work by promoting their missions and innovative work, according to experts and CHCOs we interviewed for our March 2019 report. For example, DHS s CHCO told us that DHS provides Day in the Life information on its work to promote public awareness of how its everyday tasks tie in with its mission of protecting the United States. In addition, we have previously reported that high-performing organizations create a line of sight between individual performance and organizational results by aligning employees daily activities with broader results. Agencies can motivate and retain employees by connecting them to their agency s mission, according to human capital experts and federal employee and management group representatives we interviewed. Employee responses to Federal Employee Viewpoint Survey (FEVS) indicate the federal government appears to be performing well in this area. In 2017, 84 percent of employees knew how their work related to the agency goals and priorities. Increase awareness of benefits and incentives, such as work-life programs. As shown in figure 2, the majority of federal employees were satisfied with compensation, and employees who participated in work-life programs were satisfied with those incentives. However, OPM s 2018 Federal Work-Life Survey Governmentwide Report found that one of the most commonly reported reasons employees do not participate in work- life programs is lack of program awareness among employees and supervisors. Increase support for an inclusive work environment. An increasingly diverse workforce can help provide agencies with the requisite talent and multidisciplinary knowledge to accomplish their missions. In January 2005, we reported fostering a diverse and inclusive workplace could help organizations reduce costs by reducing turnover, increasing employee retention across demographic groups, and improving morale. We also reported that top management commitment is a fundamental element in the implementation of diversity management initiatives. Encourage details, rotations, and other mobility opportunities. In March 2019, we stated that CHCOs, human capital experts, and federal management groups said upward and lateral mobility opportunities are important for retaining employees. CHCOs also said that in some cases, lateral mobility opportunities such as rotations, details, and opportunities to gain experience in other sectors can help employees gain new skills more cost-effectively than training, particularly for rapidly changing skill sets such as those related to the sciences. Further, we previously reported that effective interagency rotational assignments can develop participants collaboration skills and build interagency networks. However, according to OPM data, few employees in 2017 moved horizontally because, according to federal manager group representatives and our previous work, managers are sometimes reluctant to lose employees. (See table 2.) We have previously made recommendations that could help address these challenges. 
For example in 2015, we recommended that OPM determine if promising practices, such as providing detail opportunities or rotational assignments to managerial candidates prior to promotion, should be more widely used across government. OPM partially concurred with this recommendation and agreed to work with the CHCO Council to explore more government-wide use of rotational assignments. However, OPM noted that agencies already have authority to take these actions. In June 2019, OPM officials told us they had discussed the scalability of promising practices for supervisors specifically, details and rotational assignments and a dual career ladder with members of the CHCO Council. OPM found these practices were being used at some agencies, but has not determined if these practices may be beneficial to other agencies. In conclusion, OPM has instituted numerous efforts to assist agencies in addressing mission-critical skills gaps within their workforces. This includes providing guidance, training and on-going support for agencies on the use of comprehensive data analytic methods for identifying skills gaps and the development of strategies to address these gaps. However, as of December 2018, OPM had not fully implemented 29 of our recommendations made since 2012 relating to this high-risk area. We will continue to monitor OPM s efforts to implement our recommendations. Further, we have reported on numerous talent management strategies that can help agencies better manage the current and future workforce. Without these measures, the federal government s ability to address the complex social, economic, and security challenges facing the country may be compromised. Chairman Lankford, Ranking Member Sinema, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions you may have at this time. If you or your staff have any questions about this testimony, please contact Yvonne D. Jones, Director, Strategic Issues, at (202) 512-6806 or jonesy@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Clifton Douglas, Jr., Assistant Director; Christopher Falcone; Karin Fangman; Cindy Saunders, Alan Rozzi and Katherine Wulff. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Why GAO Did This Study
Strategic human capital management plays a critical role in maximizing the government's performance and assuring its accountability to Congress and to the nation as a whole.
GAO designated strategic human capital management as a government-wide, high-risk area in 2001. Since then, important progress has been made. However, retirements and the potential loss of leadership and institutional knowledge, coupled with fiscal pressures, underscore the importance of a strategic and efficient approach to acquiring and retaining individuals with critical skills. As a result, strategic human capital management remains on GAO's High-Risk List.
This testimony is based on a large body of GAO work issued from May 2008 through May 2019. This testimony, among other things, focuses on key human capital areas where some actions have been taken but attention is still needed by OPM and federal agencies on issues including: (1) addressing critical skills gaps and (2) recruiting and hiring talented employees.
What GAO Found
GAO, along with the Office of Personnel Management (OPM) and individual agencies, has identified skills gaps in numerous government-wide occupations. According to GAO's 2019 analysis of federal high-risk areas, skills gaps played a role in 17 of the 35 high-risk areas. Causes vary, but these skills gaps often occur due to shortfalls in one or more talent management activities, such as robust workforce planning. Staffing shortages and the lack of skills among current staff not only affect individual agencies but also cut across the entire federal workforce in areas such as cybersecurity and acquisition management. Additionally, the changing nature of federal work and the high percentage of employees eligible for retirement could produce gaps in leadership and institutional knowledge and threaten to aggravate the problems created by existing skills gaps. For example, 31.6 percent of permanent federal employees who were on board as of September 30, 2017, will be eligible to retire in the next 5 years, with some agencies having particularly high levels of employees eligible to retire.
GAO's work has identified a range of problems and challenges with federal recruitment and hiring efforts, including unclear job announcements and a lengthy hiring process. Further, the federal workforce has changed since the government's system of current employment policies and practices was designed. Strategies that can help agencies better manage the current and future workforces include:
Manage the timing of recruitment. To address issues of funding uncertainty at the beginning of the fiscal year, agencies should recruit continuously, starting the hiring process early in the school year.
Write user-friendly vacancy announcements. GAO has reported that some federal job announcements were unclear. This can confuse applicants and delay hiring. OPM stated that when hiring managers partner with human resources staff, agencies can develop more effective vacancy announcements.
Leverage available hiring and pay flexibilities. To help ensure agencies have the talent they need, they should explore and use all existing hiring authorities. A variety of special pay authorities can help agencies compete in the labor market for top talent, but GAO has found that agencies only use them for a small number of employees.
Increase support for an inclusive work environment. An increasingly diverse workforce can help provide agencies with the requisite talent and multidisciplinary knowledge to accomplish their missions.
Encourage rotations and other mobility opportunities. Upward and lateral mobility opportunities are important for retaining employees, but few employees move horizontally because managers are sometimes reluctant to lose employees.
Without these measures, the federal government's ability to address the complex social, economic, and security challenges facing the country may be compromised.
What GAO Recommends
Over the years, GAO has made numerous recommendations to agencies and OPM to improve their strategic human capital management efforts. Agencies have taken actions to implement some of these recommendations, but many remain open. GAO encourages OPM and the agencies to fully implement the recommendations. |
gao_GAO-19-712T | gao_GAO-19-712T_0 | <1. Background> The Cannon Building, completed in 1908, is the oldest congressional office building and occupied by Members and their staffs. (See fig. 1.) The Cannon Building houses 142 office suites, five conference rooms, four hearing rooms, and the Caucus Room, which can accommodate large meetings. The building also includes a library, food servery, and a health unit. AOC began developing the scope for the Cannon project in approximately 2004 when its consultant conducted a facility condition assessment that identified the building s deficiencies. This condition assessment identified, for example, that the hot water heating and air-handling systems had components dating back to the 1930s that are in need of replacement. In addition, the assessment identified deficiencies such as an outdated fire alarm system for which repair parts were difficult to obtain, worn and damaged marble tile in corridors, and original windows that were damaged and often nonfunctional. AOC continued its planning and design work through 2014 to establish the final scope of the Cannon project, which entailed correcting most of the identified deficiencies and addressing current requirements such as for energy conservation, physical security, hazardous materials abatement, and historic preservation. Key components of the project, among other things, include: substantial reconfiguration of member suites and the reconstruction of the building s top floor to convert storage space into new suites, refurbishment of windows and installation of a new roof, preservation of the building s stone exterior, replacement of all plumbing, heating and cooling, fire protection, electrical, and alarm systems, and refurbishment of restrooms to make them more accessible to people with disabilities. As part of the development process for the Cannon project, AOC established a budget of approximately $753 million. Key components of the budget include costs for the construction contract; architect and engineering (A/E) design services; construction management support; security; furniture and fixtures; swing space design and construction; contractor incentive bonuses; and contingency. AOC is using the Construction Manager as Constructor (CMc) delivery method to implement the Cannon project. Under this approach, AOC: contracted with a construction contractor that consulted on the project s design, and negotiated with the construction contractor to set a guaranteed maximum price for the construction work based on the completed design. AOC also contracted with an A/E firm, which produced the design for the project and is providing consultation during construction, and with a Construction Manager as Agent (CMa), that provides administrative and technical support to AOC in managing the construction work. AOC scheduled the Cannon project s construction in five sequential phases with an initial phase (Phase 0) for utility work and four subsequent phases (Phases 1 through 4) to renovate the north-, south-, east-, and west-facing sides of the building. Each phase is scheduled around a 2- year congressional session. As the project progresses, tenants displaced during construction (Phases 1 through 4) are to move to temporary offices while other occupants are to remain in the building sections not affected by construction. <2. 
AOC Has Completed Two of Five Phases of the 10-Year Cannon Project> Currently, AOC has substantially completed Phase 0 and Phase 1 of the five phases planned for the Cannon project and is progressing with work on Phase 2, which it expects to complete in November 2020. (See fig. 2.) AOC completed Phase 0, as planned and under its budget estimate, from January 2015 through December 2016. This work primarily included the construction contractor s replacement of the utility infrastructure and distribution systems in the basement, garage, and courtyard. During this time, AOC also managed the work of its Construction Division to build 31 additional Member Suites to offset the suites that would be inaccessible when sections of the building were under construction. From January 2017 through December 2018, AOC managed the renovation of the first of four building sections, consisting of the building s west side (facing New Jersey Avenue) and Rotunda (Phase 1). AOC substantially completed Phase 1 to enable occupancy of the building section, as planned, on January 3, 2019, at the start of the 116th Congress. However, it is continuing to address punch-list items of incomplete or corrective work from Phase 1. AOC expects to complete the punch-list items by December 2019. Further, AOC encountered several issues during the Phase 1 renovation that have prevented it from settling the costs for this phase and that will affect the cost of the project s later phases. According to AOC s most current (July 2019) Executive Summary, unforeseen conditions, design issues, and scope changes have increased both the estimated cost for Phase 1 and the project s three remaining phases. For example, AOC found that more extensive exterior stone restoration was needed than planned and encountered some unforeseen asbestos-containing materials in the roof that it needed to mitigate. Further, AOC needed to provide additional security features to address U.S. Capitol Police requests. Collectively, these issues are creating cost pressures that have caused AOC to reassess the cost to complete the project. We discuss the project s costs in greater detail later in this testimony. AOC is currently progressing, as planned, in renovating the north side of the building (facing Independence Avenue), which is the second of the four building sections to be renovated (Phase 2). Because the work in this phase and the Cannon project s remaining phases is similar to work completed in Phase 1, AOC expects to benefit from its application of lessons learned. For example, AOC reported that its construction contractor experienced challenges installing the temporary roof enclosure that it used in Phase 1. Based on this experience, AOC officials told us that the contractor developed a new design for the temporary roof enclosure that the contractor expects to install more rapidly in the project s remaining phases than in Phase 1. Further, because the materials in Phases 2 through 4 are the same as in Phase 1, AOC officials expect that the process of approving the construction contractor s use of these materials should proceed faster in these later phases and enable construction to progress more efficiently. <3. AOC Had Consistently Estimated the Cannon Project Cost to be $753 Million, But Recently Increased Its Estimate> In 2009, we reported that AOC expected to request approximately $753 million for the Cannon project. At the time, AOC expected the project to be in five phases over 5 years. 
Because the project was in an early development stage at that time, we said that AOC's estimate should not be considered sufficiently accurate for funding purposes, that the cost and scope were likely to change, and that it would be important for AOC to continue to refine the project's scope and cost estimate to provide Congress with the information it needed to make decisions about the project. When we next reported on the Cannon project in 2014, AOC had completed most of the planning and design and was preparing to award the contract for construction, which was to begin in January 2015. As part of our 2014 review of AOC's cost estimating policies and guidance, we compared AOC's cost estimate for the Cannon project, still $753 million, to our leading practices for developing high-quality, reliable cost estimates. We found that AOC's cost estimate reflected several, but not all, of our leading practices. In particular, we found that AOC's estimate included ground rules and assumptions; provided a reasonable explanation of the basic estimation methodologies; and integrated separately produced estimates from AOC's architect, construction manager, and construction contractor to enable a reasonably accurate assessment of estimated costs. Further, we found AOC had conducted a cost risk and uncertainty analysis in accordance with a key leading practice. This analysis concluded that, based on AOC's inputs and assumptions, there was a high probability (over 90 percent) that actual costs would be equal to or less than AOC's $753 million estimate. This estimate included contingency factors to account for risks and uncertainties. However, our review of AOC's guidance for developing cost estimates found that the guidance did not provide documented reasons explaining how the actual contingency amounts were developed. In addition, we found that the method AOC used to model the project's risks in its cost risk and uncertainty analysis (1) resulted in an unusually narrow range of estimated costs and (2) provided managers limited ability to understand the effects of individual risks. We recommended that AOC improve its cost-estimating process, such as by incorporating the leading practices we identified as lacking into its cost-estimating guidance and policies. AOC has since implemented our recommendations. In January 2018, while Phase 1 of the Cannon project was in progress, AOC updated its analysis of risks by undertaking a study (termed an integrated cost-schedule risk analysis) to determine the potential effects of these risks on the project's cost and schedule. Updating risk analyses and their effect on project cost estimates is consistent with leading practices for developing both a high-quality, reliable cost estimate and schedule. AOC's 2018 analysis arrived at the same conclusion as its 2014 analysis: that the estimated $753 million total project cost was adequate and that there was a high probability (over 80 percent) that actual costs would be equal to or less than the $753 million estimate. However, this analysis was qualified by the assumption that AOC and project stakeholders are able to adequately mitigate risks identified through the analysis.
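To make the mechanics of such an analysis concrete, the sketch below is a minimal cost risk and uncertainty (Monte Carlo) simulation of the kind described above. The cost elements, ranges, and dollar figures are hypothetical placeholders used only to illustrate the method; they are not AOC's actual risk model or budget breakdown.

```python
# Minimal sketch of a cost risk and uncertainty (Monte Carlo) analysis.
# The cost elements, (low, most likely, high) ranges, and point estimate
# below are hypothetical; they illustrate the method, not AOC's model.
import random

random.seed(1)

cost_elements = {                        # values in millions of dollars
    "construction_contract":   (430, 470, 540),
    "design_and_engineering":  (85, 95, 115),
    "swing_space_and_other":   (100, 115, 140),
    "security_and_furnishings": (20, 30, 45),
}
POINT_ESTIMATE = 753                     # budget being tested, in millions

def simulate_total_cost():
    # random.triangular takes (low, high, mode)
    return sum(random.triangular(low, high, mode)
               for (low, mode, high) in cost_elements.values())

trials = [simulate_total_cost() for _ in range(50_000)]
confidence = sum(t <= POINT_ESTIMATE for t in trials) / len(trials)
print(f"Probability that cost is at or below ${POINT_ESTIMATE}M: {confidence:.0%}")
```

A credible model of this kind should produce a reasonably wide spread of simulated totals; an unusually narrow range, as we found in AOC's 2014 analysis, suggests that individual risks and their effects are not modeled in enough detail.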
As noted previously, the project is experiencing cost pressures from the greater-than-anticipated risks and ineffective mitigations stemming from unforeseen conditions, design issues, and scope changes. In June 2019, AOC reported that it expects that the cost to complete the Cannon project will increase by 10 to 15 percent over its initial estimate of $753 million, resulting in a final cost between approximately $828 million and $866 million. AOC reported that the following key factors affect the project s cost: Phase 1 completion costs. While Phase 1 work has been substantially completed, AOC has yet to settle all outstanding change proposals. AOC reported that the cost to complete Phase 1 is greater than it initially planned and that it will not know the final cost for this phase until it completes negotiations of the cost of unsettled change proposals. Phase 2 modifications. While Phase 2 work has begun, AOC is awaiting the contractor s proposal on the costs to address the requirements outlined in four design bulletins issued by AOC that, in part, describe changes to the project s scope based on lessons learned in Phase 1. AOC estimates that the contract modifications described by the design bulletins will increase the cost of Phase 2. Phase 3 and 4 modifications. AOC expects that it will award these future phases of the project at higher amounts than it initially planned based, in part, on the estimated cost of incorporating the additional work described in the design bulletins. In August 2019, AOC began updating its integrated cost-schedule risk analysis, with the aim of more accurately determining the extent to which the project s costs are increasing and its estimated cost at completion. By updating the analysis, AOC should be better able to make informed decisions as construction progresses. Further, updating the analysis should enable AOC to more precisely estimate the Cannon project s cost at completion and better position AOC to make a more accurate budget request to Congress for remaining costs. Chairperson Lofgren, Ranking Member Davis, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time. <4. GAO Contacts and Staff Acknowledgments> If you or your staff has any questions concerning this testimony, please contact Terrell Dorn at (202) 512-6923 or dornt@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contacts named above: Michael Armes (Assistant Director); George Depaoli (Analyst-in-Charge); Geoffrey Hamilton; Malika Rice; Kelly Rubin; Steve Schluth; and Amelia Michelle Weathers made key contributions to the testimony. Other staff who made contributions to the reports cited in the testimony are identified in the source products. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Why GAO Did This Study
The Cannon project intends to preserve the historic character while improving the functionality of the 111 year-old Cannon Building—the oldest congressional office building—as well as address deterioration to the building and its components. The project—nearing the mid-point of its planned 10-year duration—is being implemented in five sequential phases with an initial phase (Phase 0) for utility work and four subsequent phases (Phases 1 through 4) to renovate the north-, south-, east-, and west-facing sides of the building. Each phase is scheduled around a 2-year congressional session. This statement describes: (1) the status of the Cannon project and (2) changes to the project's estimated cost at completion. This statement is based on GAO's prior reports in 2009 and 2014 and ongoing monitoring of the project. To monitor the project, GAO has been observing the ongoing construction, attending project meetings, and analyzing AOC documents.
What GAO Found
The Architect of the Capitol (AOC) has substantially completed two of five planned phases to renovate the Cannon House Office Building (Cannon project). AOC completed Phase 0 utility work; has almost finished the Phase 1 work to renovate the building's west side, as planned; and is progressing with Phase 2 work to renovate the building's north side.
From 2009 to 2018, AOC consistently estimated the project cost at $753 million, but AOC reported in June 2019 that it expects costs to increase by 10 to 15 percent, resulting in a total cost of approximately $828 million to $866 million. In 2014, GAO found that AOC's cost estimate of $753 million reflected several, but not all, of GAO's leading practices for high-quality, reliable cost estimates, including that AOC had conducted a risk and uncertainty analysis.
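As a quick arithmetic check on the range cited above, applying 10 and 15 percent growth to the $753 million estimate reproduces the approximate figures AOC reported:

```python
# Check of the reported 10-15 percent cost growth range (values in millions).
baseline = 753
low_growth, high_growth = baseline * 1.10, baseline * 1.15
print(f"approximately ${low_growth:.0f} million to ${high_growth:.0f} million")
```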
In January 2018, AOC updated its analysis of risks by undertaking an integrated cost-schedule risk analysis. AOC's 2018 analysis arrived at the same conclusion as its earlier analysis—that the project's estimated $753 million total cost was adequate to complete the project. However, AOC's 2018 analysis indicated that inaccurate estimates of costs for risk mitigations, unknown risks, and optimistic assumptions about the effect of risk mitigations on the project's cost and schedule could affect its total cost. In June 2019, AOC reported that greater-than-expected risks, such as from unforeseen conditions that led to more extensive exterior stone restoration than anticipated and the unplanned mitigation of asbestos in roof materials, would increase the project's cost. AOC is currently determining the effect of these and other changes on Phase 1, where work has been substantially completed, but costs have not been settled. AOC is also determining how the costs of the project's remaining phases will be affected by scope changes stemming from lessons learned in Phase 1. Toward this end, in August 2019, AOC began updating its integrated cost-schedule risk analysis, with the aim of more accurately determining the extent to which the project's costs are increasing and its estimated cost at completion.
What GAO Recommends
In 2014, GAO made recommendations pertaining to AOC's cost-estimating guidance and policies. AOC has implemented these recommendations. |
gao_GAO-19-250 | gao_GAO-19-250_0 | <1. Background> Collectively, the ongoing GPS acquisition efforts aim to (1) modernize and sustain the existing GPS capability and (2) enhance the current GPS system by adding a more cybersecure ground system that enables M- code. M-code is a stronger, encrypted, military-specific GPS signal designed to meet military positioning, navigation, and timing needs. It will help military users overcome GPS signal jamming by using a more powerful signal and protect against false GPS signals, known as spoofing, by encrypting the signal. Figure 1 below shows how GPS satellites, ground control, and user equipment in the form of receiver cards embedded in systems function together as an operational system. The Air Force s OCX program is primarily a software development effort to replace the current ground system, the operational control system (OCS), with a modernized and more cybersecure system. OCS lacks modern cybersecurity protections and cannot currently control or enable modernized features of the three latest generations of GPS satellites now in orbit, including M-code and three new civilian signals. Because existing military receivers were not designed to work with the new M-code signal, military users will have to make investments in new receiver development and procurement timed to when the new signal will be available before they can use it. Raytheon is the prime contractor working to deliver OCX in a series of blocks that enable additional capabilities. Block 0, which is a subset of block 1 broken out after development started, was delivered in September 2017. It helped to successfully enable the launch and initial testing of the first GPS III satellite, which was launched in December 2018, and will continue to support subsequent GPS III satellite launches. Blocks 1 and 2, originally planned as separate deliveries, have been combined into a single delivery and are currently in development. This combined delivery enables OCX to command and control each satellite and begin using the full M-code signal, as well as control new civilian signals, among other capabilities. Because of significant delays to OCX, the Air Force initiated two additional programs to modify OCS to deliver some of the planned capabilities before OCX is operational. The first program is Contingency Operations (COps) which will enable the control of GPS III satellites to operate with the same capabilities as current GPS satellites without the additional military and civilian signals. The second program is M-code Early Use (MCEU) which will permit some functions of M-code to be used before OCX is delivered. Neither COps nor MCEU will enable the additional civilian signals or the full M-code functionality that is expected with OCX. <1.1. Acquisition Cost and Schedule Baselines> DOD is required by statute to establish and approve cost and schedule baselines for major defense acquisition programs before those programs enter system development, also known as the engineering and manufacturing development phase. As part of program planning, including for major defense acquisition programs, DOD policy requires program managers to establish program goals for cost, schedule, and performance parameters. Approved program baseline parameters are reported in the program s acquisition program baseline as objective and threshold values. The objective values represent goals in terms of what the user in the case of GPS, the Air Force desires and expects. 
The threshold values represent the limit of what is acceptable, meaning that cost or schedule growth above threshold values is outside of the approved cost or schedule limits. For OCX, the cost and schedule objective and threshold dates in the baseline are tied to an event called ready to transition to operations, which will be the completion of the OCX acquisition program schedule. For the OCX program, this is a decision within the Air Force to switch control of the GPS constellation from the current GPS ground system, OCS (which at this future point will already include the COps and MCEU modifications), to OCX. The delivery date of the system by Raytheon and the acceptance date by the Air Force will both come before the ready to transition to operations decision. These two dates are important because their timing may influence when OCX operates. What is a critical Nunn-McCurdy unit cost breach? For major defense acquisition programs, a critical Nunn-McCurdy unit cost breach is triggered by unit cost increases of at least 25 percent over a program's current cost baseline or at least 50 percent over a program's original cost baseline. As an acquisition program works to achieve its objective and threshold values, the original baseline goals may become unachievable. When this occurs, a revised baseline, or rebaseline, is created so the program's cost and schedule goals are updated to more realistically reflect the program's current status. If the increase from the cost baseline meets certain thresholds, DOD is required to notify Congress in writing. This is known as a Nunn-McCurdy breach. This notification assists Congress with monitoring program progress, especially on troubled programs. A critical Nunn-McCurdy unit cost breach is the most serious type of breach and requires a program to be terminated unless the Secretary of Defense submits a written certification to Congress that certain criteria have been met, including that the new estimate of the program's cost has been determined to be reasonable by the Director of DOD's Office of Cost Assessment and Program Evaluation, and takes other actions, including restructuring the program. <1.2. History of Increasing OCX Cost and Schedule Baselines> As we have previously reported, the Air Force has had significant difficulties developing OCX. The program's cost and schedule baselines have been unstable and unexecutable since the first baseline was established in 2012. In total, there have been three OCX program baselines: (1) the November 2012 original baseline at development start, (2) an October 2015 rebaseline due to a schedule breach, and (3) a September 2018 rebaseline prompted by a critical Nunn-McCurdy unit cost breach. Since 2012, reflecting the newest baseline and additional cost and schedule growth since the Nunn-McCurdy breach, the schedule has more than doubled and the costs have grown by approximately 68 percent. Figure 2 shows the three OCX baselines with their schedule and cost growth since the start of development. The National Defense Authorization Act for Fiscal Year 2017 required an independent assessment of OCX. The Act required an assessment of the Air Force's ability to complete blocks 0 through 2 on a schedule necessary to transition OCX to full operation and an estimate of the cost, among other issues. The MITRE Corporation conducted the study, and DOD provided it to congressional defense committees in December 2017.
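As a worked example of the Nunn-McCurdy arithmetic described above, the short function below applies the 25 percent (current baseline) and 50 percent (original baseline) critical-breach tests. The unit cost figures in the example are hypothetical and are not OCX values.

```python
# Hypothetical illustration of the critical Nunn-McCurdy unit cost breach test:
# growth of at least 25% over the current baseline or at least 50% over the
# original baseline triggers a critical breach.
def is_critical_breach(original_baseline: float, current_baseline: float,
                       new_estimate: float) -> bool:
    growth_vs_current = (new_estimate - current_baseline) / current_baseline
    growth_vs_original = (new_estimate - original_baseline) / original_baseline
    return growth_vs_current >= 0.25 or growth_vs_original >= 0.50

# Example with made-up unit costs (in millions): 20% over the current baseline
# but 56% over the original baseline, so the breach is still critical.
print(is_critical_breach(original_baseline=100.0, current_baseline=130.0,
                         new_estimate=156.0))  # True
```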
As a result of the 2016 Nunn-McCurdy unit cost breach, the program repeated the milestone associated with system development start and established new cost and schedule objectives and thresholds, conducted a baseline review of the schedule to verify the work necessary to complete the program, and received approval of the acquisition program baseline by the milestone decision authority, the Under Secretary of Defense for Acquisition and Sustainment. To support certification of OCX's new baseline, in May 2017 the Air Force produced an $8.7 billion OCX service cost position for development, sustainment, and disposal. The Air Force service cost position was subsequently reaffirmed in 2018 by the Air Force and supported by an additional independent cost estimate from DOD's Office of Cost Assessment and Program Evaluation in June 2018, which was approximately 3 percent higher for the development portion. The Under Secretary of Defense for Acquisition and Sustainment selected the Air Force service cost position for the OCX baseline. <1.3. Root Causes of Schedule Delays> In 2014, the Air Force identified root causes for OCX cost and schedule growth and concluded that the problems were driven by (1) incomplete systems engineering, (2) inadequate process discipline, and (3) difficulties implementing cybersecurity due to its complexity. We reported in 2015 that the program office paused development in late 2013 to fix what it believed were the root causes of development issues, and significantly increased the program's cost and schedule estimates. Despite the pause to address root causes, problems persisted, and in the same report we questioned whether all root causes, such as a persistently high software development defect rate, had been adequately identified, let alone addressed, and whether realistic cost and schedule estimates had been developed. We also found that the program was not following various acquisition best practices, such as the completion of a preliminary design review prior to development start. In 2015, we recommended that DOD assemble a task force to assess the OCX program and provide concrete guidance for addressing program problems, determine root causes for OCX defects, and establish a high confidence schedule and cost estimate, among other recommendations. DOD concurred with our four OCX-related recommendations and has taken some steps to implement some of them. However, to date, none have been fully implemented and they remain open. In 2016, DOD's Director of Performance Assessments and Root Cause Analyses concluded that the root causes for OCX's Nunn-McCurdy unit cost breach were (1) an unrealistic schedule driven by the need to sustain the GPS constellation, (2) an underestimation of the cost to fully implement information assurance, or cybersecurity, and (3) poor performance by both the government, caused by a lack of requisite software expertise, and Raytheon, caused by poor systems engineering that led to significant rework. We found, and DOD's 2016 root cause analysis has shown, that a significant and recurring cause of delays on the OCX program has been a lack of mutual understanding of the work between the Air Force and Raytheon.
In December 2017, we found risks to the latest proposed (but not yet then approved) OCX schedule, noting that the schedule to which the program was working at that time (1) was built on certain unproven assumptions regarding planned coding and testing improvements, (2) had not yet undergone a baseline review to verify that the schedule incorporated all of the work required for program completion, and (3) did not yet include a number of changes that the Air Force needed to incorporate into the contract with Raytheon as modifications, which may lead to additional schedule slips. In 2017, we did not make additional recommendations for OCX because the Air Force had undertaken the COps and MCEU programs to provide interim capabilities to mitigate OCX delays. <1.4. Changes in Software Development Methodology During the Program> In 2016, Defense Digital Service a DOD office established by the Secretary of Defense engaged with the OCX program to suggest improvements to Raytheon s software development practices. The office recommended that Raytheon change its software development approach to use an incremental development approach. This approach uses a continuous integration and testing process, where the software code is frequently integrated and tested so that defects are detected and addressed sooner. This is done through automation of the software development process, version control tools, and coordination between different teams building software. Traditional software development methods entail a more linear approach whereby each process is completed before proceeding to the next process in the sequence. By such an approach, the software development processes are completed prior to the testing of a full product before the product s release to the end user. In 2016, DOD told the Air Force and Raytheon to utilize the new approach, which Raytheon began implementing in a series of seven phases. The first phase began in late 2016 and the last phase is scheduled to be in place by the end of 2019. According to the Air Force and Raytheon, through this new approach, the program aims to introduce efficiencies building software in several ways: 1. discovering defects in software code earlier; 2. reducing the number of defects; 3. reducing the amount of time it takes to repair defects; and 4. reducing the overall time to code, integrate, and test OCX software through automation for some aspects of the software development. <2. OCX Schedule at Risk for Additional Delays to Delivery and Operation> OCX delivery, acceptance, and the ready to transition to operations decision will likely be delayed, potentially exceeding the April 2023 threshold date for completing the program. Actual development progress has been mixed, with some improvement to the pace of software development. However, the majority of the schedule reserve has been consumed and defect repairs are taking longer than assumed with significant work remaining. In addition, a number of new cost and schedule risks to OCX delivery have arisen since the program schedule was established. GAO s schedule and cost estimating best practices recommend that the schedule assessment be periodically updated to reflect actual progress and new risks. To mitigate program optimism, GAO s cost estimating best practices also state it is important to have an independent view of cost estimates and schedules. 
While the Air Force and the contractor periodically update their schedule estimates, no plans currently exist for further independent analysis of the full program schedule within DOD, and there is no requirement to do so. <2.1. Significant Development and Testing Remains Before OCX Is Operational> The OCX program has significant work remaining before OCX is operational, including years of integration and testing. Achieving the full program schedule requires two interrelated steps. First, in order to meet the program schedule there must be timely delivery by Raytheon and acceptance of the system by the Air Force. Second, there must be timely completion of government-run post-acceptance developmental testing. Once the Air Force determines that the developmental testing is completed, OCX will be ready to transition to operations, which ends the full program schedule. GPS operations will then be transferred from OCS to OCX. Figure 3 shows the major activities until the ready to transition to operations decision. OCX development is expected to continue for approximately 2 more years, after which Raytheon will submit a Material Inspection and Receiving Report (Form DD 250) at delivery. The Air Force will then evaluate OCX for acceptance. Air Force acceptance will be informed by numerous contractor-run developmental tests conducted to help the Air Force understand the maturity of the system. Air Force officials will use information from these contractor tests to inform their approval and complete acceptance. For example, the Air Force will review data and demonstrated system capabilities from the tests to determine whether OCX is ready for integration into the overall GPS. These tests have formal entrance criteria to demonstrate the system is ready for testing and exit criteria to ensure tests are successful before proceeding to the next activity. At the conclusion of contractor testing and delivery to the Air Force, the Air Force will inspect OCX over approximately 2 months before OCX is officially accepted. The Air Force will indicate acceptance by signing the Form DD 250. Currently, the period of performance under the contract ends June 30, 2021. Consequently, acceptance of the delivered OCX would need to occur prior to that date. <2.1.1. Air Force Developmental Testing and Rehearsals> After acceptance, Air Force program officials said OCX will go through government-run developmental testing currently scheduled to last 7 months that includes operator transition exercises and rehearsals of the system. According to OCX program officials, Raytheon will provide interim contractor support to address any defects or incomplete work as well as address any additional issues found during the planned 7 month post- acceptance developmental testing. According to program officials, the ground control operators who have already been working and providing feedback and training and readiness oversight personnel will continue to work with the new ground system to assess the system s readiness through hands-on engagement with the installed system. <2.1.2. Ready to Transition to Operations> At the end of this 7 month period, the Air Force will determine whether the system is ready to transition to operations. To make the ready to transition to operations decision, Air Force officials said the system must receive approval from different groups, including senior leadership within the Air Force. 
Once the decision has been made, the Air Force will transition ground control of the GPS satellite constellation from OCS to OCX. Additionally, after this transition, which completes the program schedule, OCX will undergo an operational test and evaluation period, which will support the Air Force's separate operational acceptance decision for OCX. <2.2. Delivery: Contractor's Date Remains Optimistic Compared to Other Estimates> The OCX contractor's delivery date is optimistic and much earlier than Air Force and independent projections. All government and independent analyses project that OCX delivery will exceed June 2021 by at least 6 months, but still occur in time to support the April 2023 threshold (latest acceptable) date for the full program schedule. However, meeting the ready to transition to operations threshold date depends on acceptance of OCX by September 2022, at the latest. This will allow for a planned 7 months of government-run developmental testing that must occur before April 2023. Numerous OCX schedule estimates were produced between December 2017 and January 2019. Table 1 indicates the estimator, the date of the estimate, and the reason the estimate was completed. Figure 4 shows the results of the varying estimates for the start of OCX operations in months as measured from the beginning of calendar year 2019. The most recent independent OCX assessment of the delivery date is from the Defense Contract Management Agency in January 2019. That assessment estimates that Raytheon's projected delivery and the cost at completion are both unrealistic based on staffing profiles, task movement, completion rates, baseline execution, and schedule performance metrics. The Defense Contract Management Agency projects that there are not enough cost and schedule reserves left to cover its own estimate to complete the work plus all of the identified risks. In fact, the Defense Contract Management Agency estimates Raytheon will need over $400 million more in cost reserves and that OCX will likely be delivered 11 months after June 2021. <2.3. Delivery: Actual Development Progress Is Mixed> Actual development progress has been mixed. While the pace of software development has improved, implementing the new software development approach has been slower than expected. As a result, Raytheon has used the majority of its schedule reserve and delayed planned staff reductions, indicating that work is not being completed as quickly as planned. In addition, the improvements to software defect discovery that the schedule assumes have not all come to fruition, and assumed defect repair rates have not been achieved. <2.3.1. Pace of Software Development Has Improved> Under its new software development approach, Raytheon is building and testing OCX software more quickly than under its previous approach. In 2018, the time to build and test software was reduced from 4 weeks or more to less than 7 hours on average, better than planned. Defense Contract Management Agency officials said that software development has improved compared to block 0 because a better software development process is in place. These officials cited in particular the improvement that has occurred with the introduction of software testing automation in some areas. The pace of software development is one of many areas that must improve for the program to achieve overall performance and the delivery schedule. <2.3.2.
Implementing the New Software Development Approach Took Longer than Planned> OCX program officials told us that the full implementation of the new software approach is foundational to the success of the program; failure to successfully implement the new approach on time would lead to cost growth and schedule delays. However, implementation of the new software approach has taken longer than planned, using a majority of the available schedule reserve. Defense Contract Management Agency officials found that since the current baseline was established, Raytheon consistently takes 5 months to perform 4 months of planned work. This has not yet delayed the delivery schedule because the program has been able to use cost and schedule reserves to cover the delays. Between April 2017, when the current schedule baseline was established, and January 2019, Raytheon used 4 of the 6 months of total schedule reserve. As of April 2019, Raytheon had approximately 26 months of work remaining until June 2021, but only 2 months of schedule reserve. As a result, there will not be enough time to complete OCX development and have it accepted by June 2021 unless the contractor significantly reduces its use of schedule reserve. Raytheon started using the new software approach on April 1, 2018 to improve software development, but implementation took longer than planned for six of the seven initial adoption phases, with two completing more than a year late. Some of the subsequent expansion phases are also experiencing delays. For example, phase 3 expansion was completed more than a year behind the planned schedule. Three other expansion phases are still in progress and scheduled to complete in mid- to late-2019. Raytheon s divergence from the baseline staffing plan indicates that work is not being completed as quickly as planned, and more staff have been needed to prevent additional delivery delays. Raytheon had planned to reduce the number of staff working on the program from approximately 1,000 to 700 between the autumn of 2017 and the end of 2018. However, to maintain schedule, Raytheon delayed those reduction plans and increased staff by approximately 10 percent from January to August 2018. Figure 5 shows the difference between the staffing baseline and actuals for OCX between January 2018 and December 2018. Our analysis shows a gap between the January 2018 baseline planned staffing reduction and actual contractor staffing levels in each month from January to December 2018, collectively indicating an increase of approximately 29 percent above the plan. According to DOD s Office of Cost Assessment and Program Evaluation officials, this increase is likely to continue through mid-2019. These officials estimated in their June 2018 independent cost estimate that contractor staffing levels will be higher than planned through May 2019 so that Raytheon can complete key software coding events. OCX program officials told us that the program has been able to afford the additional contractor staff as there are cost risks to support higher than anticipated staffing levels. They said that continued increases too far into 2019, however, will result in a breach of the cost threshold. Further, they said the increased contractor staffing is consistent with their priority on achieving the delivery schedule. The new software approach implementation will remain a cost and schedule risk until at least late 2019. 
At this time, the final expansion phase for the new software development approach is planned to be completed in order to support final testing of the entire system. <2.3.3. Assumed Earlier Defect Discovery Shows Mixed Results and Reduction of Time to Repair Defects Has Not Occurred> Progress finding software defects sooner in development is also mixed. Raytheon officials told us that cost reductions are possible if they are able to find defects earlier, as this approach would lead to earlier defect resolution and reduce any backlog of defects. Further, they said there is efficiency in having the same developers repair software code that they created instead of different developers repairing the code later. In March 2018, Raytheon reported increasing the percentage of defects found in the phase of development where the defect was created from 27 percent in block 0 to 66 percent in blocks 1 and 2. However, an independent DOD assessment contradicted this improvement. DOD s Office of Cost Assessment and Program Evaluation analyzed Raytheon s defect discovery progress a few months into 2018 and found that after showing some initial improvement, the defect discovery rate dropped from approximately 53 percent to 24 percent. In addition, assessing progress discovering defects is now more difficult to compare with earlier development since Raytheon changed how it tracks and counts defects in 2018. According to OCX program officials, Raytheon now only counts a defect if it is repaired in a later phase. Therefore, if a defect is found and repaired in the same phase, it is not counted. As of November 2018, Raytheon officials said the predictive measure they are now using estimates the total number of defects expected while measuring the actual defects discovered. From this data, Raytheon found fewer total defects than it predicted, which Raytheon officials said will result in fewer defects likely to be discovered later in subsequent phases. Further, the defect repair-rate or how many hours it takes to find and repair a defect is currently projected to be higher than planned, placing additional pressure on the delivery schedule. According to Defense Contract Management Agency officials, the delivery schedule included defect repair assumptions that were unrealistic. That schedule assumed 30 hours to repair each defect. However, as of November 2018, Raytheon projects it will need 52 hours to repair each defect on average. For example, in one area of the program, defects required 61 hours to repair on average as of December 2018. Defense Contract Management Agency officials told us that they had concerns that the complexity of the defects was driving the time needed to repair them. They said that the more mature software created under the new software approach may be creating much more complex defects to repair than originally planned. This may lead to additional schedule delays as the time to repair these more complex defects may continue to be significantly higher than the delivery schedule assumed. More complete data on defects and defect repair rates will likely be available by the end of 2019, when the final expansion phases of the new software approach and more software development are completed. <2.4. Delivery: Risks Have Changed Substantially Since Schedule Established> How do programs track risk as progress is made and risks evolve? A risk is an uncertain event that could Programs track risks to help manage and mitigate their effect on cost and schedule. 
Program completion requires knowing potential risks and identifying ways to respond to them before they happen, using risk management to identify, mitigate, and assign resources to manage risks so that their effects can be minimized. Raytheon's estimate that OCX will be accepted by the end of June 2021 is further challenged because of significant identified risks that remain in the schedule and changeover in those risks in 2018. As of January 2019, Raytheon was tracking 48 risks it has identified with cost effects, 26 of which have a moderate likelihood of occurring. For example, a moderate risk includes the possibility of finding more defects than planned, which could have both cost and schedule consequences. Other moderate risks include the possibility of software development taking longer to complete or needing to create more software code than planned. If realized, both of these risks have cost effects to pay for additional work and schedule effects to allow additional time to complete work. As of January 2019, Raytheon has no high risks that it tracks. There was also a significant amount of change in the risks themselves in 2018, as Raytheon added 27 new risks while closing 30. The majority of the risks that are currently tracked will not be realized or retired until late 2019, with at least one key risk of concern to the Air Force not realized or retired until 2020. According to OCX program officials, approval to transition OCX to operations assumes a 7-month developmental test schedule after acceptance. As currently formulated, this period will be used to prepare for the transition from OCS to OCX via (1) transition exercises to train operators, (2) transition rehearsals to practice the actual handover from OCS to OCX, and (3) a 156-day integrated system test to verify OCX's requirements, operational suitability, and readiness to enter operational testing. However, that 7-month duration may not be sufficient to conduct all of the activities that are necessary to verify OCX is ready to transition to operations. First, the head of the GPS Directorate's Lead Development Test Organization, which plans and executes the 7-month developmental testing, said that there is some schedule risk because of concurrent activities that need to be accomplished, including crew rehearsals and other test events. Second, the content of the test period has not yet been finalized. The planned testing events will be reviewed and refined about 6 months before beginning the test as it becomes clearer what can be tested and what data will be available from the system. The test director and the OCX program manager are considering combining some test events and, if possible, starting some testing prior to acceptance. Third, the test director and the OCX program manager described a number of risks that could delay completion of developmental testing, including (1) the late identification of issues requiring significant new software coding and retesting and (2) identification of new requirements that are not in the scope of the current effort. In addition, OCX program officials stated that neither they nor senior Air Force leadership would transition OCX to operations if the operators are not ready or requirements have not been verified. They also stated that there are numerous levels of review within the Air Force, and any of these decision makers can refuse to approve the transition of OCX to operations.
As a result, according to OCX program officials it could take 5 to 7 months longer than planned, or potentially 14 months total, to complete developmental testing. In addition, experience with prior upgrades to the current GPS ground system indicates that the completion of developmental testing may require more time than the 7 months assumed in the schedule. Air Force Cost Analysis Agency officials provided us with data for two upgrades that were made to OCS, the existing operational ground system. Those upgrades took 11 and 8 months, respectively. The 11-month upgrade to OCS from 2006 to 2007 was for an effort that was significantly smaller in software size in comparison to the size of OCX, but similarly brought new capabilities to OCS related to the command and control of satellites. The 8-month upgrade to OCS from 2009 to 2010 also provided command and control of a new type of GPS satellite and enhanced security for the current GPS receiver cards. Figure 6 shows the different forecasts with 7- and 14-month developmental test periods as measured from the beginning of calendar year 2019. If the time doubles for the completion of post-acceptance government-run developmental testing, most OCX schedule estimates would exceed the program schedule threshold. <2.5. Air Force and Contractor Updating Schedule Assessments in Accordance with Best Practices, but No Independent Assessments Are Planned of the Full Program Schedule> GAO s Cost Estimating and Assessment Guide (Cost Guide) and Schedule Assessment Guide (Schedule Guide) identify best practices for managing a program s cost and schedule. According to these best practices, a well-planned schedule is a fundamental management tool that can help government programs use public funds effectively by specifying when work will be performed and measuring program performance against an approved plan. Typically, schedule delays are followed by cost growth. When this occurs management tends to respond to schedule delays by adding more resources or authorizing overtime. Therefore, a reliable schedule can contribute to an understanding of the cost effect if the program does not finish on time. Moreover, an integrated and reliable schedule can show when major events are expected, as well as the completion dates for all activities leading up to them, which can help determine whether a program s parameters are realistic and achievable. Further, the Cost Guide states that, too often, programs overrun costs and schedule because estimates fail to account for the full technical definition, unexpected changes, and risks. The Cost Guide states that one of many challenges program managers face is too much optimism in the original estimate. The Cost Guide also states that because optimism is often prevalent, organizations will encourage goals that are unattainable by accentuating the positive. Because over-optimism potentially affects both cost estimates and schedules, an independent view and analysis is important to properly overcome this bias. An independent view also allows decision makers to react sooner and take steps to minimize any identified risks, like schedule delays. The following best practices recommend that the schedule estimate should be periodically updated to reflect (1) actual progress and (2) newly identified risks. Periodic Updates and Actual Progress: GAO s Schedule Guide states that updating a schedule to reflect actual progress is important when assessing the realism of the initial schedule duration assumptions. 
Programs should make adjustments, if necessary, to the forecast of the remaining effort. Periodic Updates and Risk: GAO s Schedule Guide states that prudent organizations recognize that uncertainties and risks can become better defined as the program advances and should conduct periodic reevaluations of risks. GAO s Cost Guide states that program managers often do not sufficiently account for risks because they tend to be optimistic and because they believe in the original estimates for the plan without allowing for additional changes in scope, schedule delays, or other elements of risk. Since the current schedule was approved in September 2018, Raytheon has updated its delivery schedule estimate quarterly or as needed to reflect changes, and modifies the delivery and acceptance dates accordingly. Raytheon does not update the full program schedule because the government-run developmental testing is not included in its schedule estimate. OCX program officials said they are currently updating their program schedule estimate by incorporating Raytheon s data through the end of 2018. No plans currently exist to conduct another OCX independent cost estimate which would include a full, independent program schedule assessment at the DOD-level, and currently there is no requirement to do so. An independent assessment of the schedule would normally be produced in conjunction with the statutory requirement to conduct another independent cost estimate at the next major program milestone. However, in September 2018 the milestone decision authority waived the requirement to hold the next major program milestone. DOD s Office of Cost Assessment and Program Evaluation conducts independent cost estimates which account for a full program schedule when statutorily required. In addition, according to an official in that office, they also conduct only schedule assessments, without completing a full independent cost estimate, when requested by a program s milestone decision authority. In June 2018, the Office of Cost Assessment and Program Evaluation provided the last full, independent cost estimate with a schedule assessment to the Under Secretary of Defense for Acquisition and Sustainment, the milestone decision authority, to support the decision to approve the OCX baseline. Officials from the Office of Cost Assessment and Program Evaluation said that they have not been asked by the OCX milestone decision authority to conduct another independent assessment. Without an independent schedule assessment, decision makers may lack updated information when determining whether to take new steps to avoid or mitigate additional delays. <3. Conclusions> It is still unknown when OCX will be ready to support the command and control of the next generation of GPS satellites. While Raytheon has improved the pace of building and testing software, the majority of schedule reserve has already been consumed and work is not being completed as quickly and efficiently as the delivery schedule predicted. Once software development is complete, it must go through developmental testing. The schedule for this phase may also be optimistic as risks associated with competing activities have the potential to double the amount of time needed for testing. DOD will be in a better position to assess OCX s progress and the potential for additional delays when the majority of its changes to its software development approach are completed at the end of 2019. 
At this time, however, while the program plans to continue assessing schedule progress, there are no plans in place for an independent schedule assessment. The program s history has consistently shown program and contractor estimates to be optimistic and that independent assessments have provided useful insights about risks as well as past experience with similar activities. Our best practice guidance also emphasize that independent assessments are a necessary step to counter balance schedule optimism. Decision makers in DOD and Congress could use realistic knowledge about the schedule to either request or provide additional funds to complete the acquisition of OCX or develop contingency plans for delays. <4. Recommendations for Executive Action> We are making the following recommendation to DOD: The Secretary of Defense should direct the Director, Office of Cost Assessment and Program Evaluation to conduct an independent schedule assessment of the full program schedule for the Global Positioning System s next generation operational control system based on progress made through the end of calendar year 2019. (Recommendation 1) <5. Agency Comments and Our Evaluation> We provided a draft of this report to DOD for review and comment. In its written comments (reproduced in appendix II), DOD did not concur with our recommendation to conduct an independent assessment of the full OCX program schedule based on progress made through the end of calendar year 2019. DOD said that the Office of Cost Assessment and Program Evaluation conducted an independent cost and schedule estimate supporting the OCX program s September 2018 system development milestone and that DOD subsequently funded OCX consistent with that estimate. Further, DOD said that the Office of Cost Assessment and Program Evaluation as well as the Defense Contract Management Agency continually assess the program s ability to meet cost, schedule, and performance objectives. DOD also said the OCX forecast is currently holding to the government schedule, which is ahead of the Office of Cost Assessment and Program Evaluation s independent cost estimate. Finally, DOD said senior executive reviews continue on a bi-annual basis to monitor the program s progress. We continue to believe the recommendation is necessary. As stated in our report, DOD has not conducted an assessment of the full schedule since June 2018, since which time program risks have evolved. In addition, the other potential sources for schedule oversight suggested by DOD are limited in scope. The Defense Contract Management Agency does not look at the full OCX program schedule, as it examines the schedule only until contractor delivery. Officials from the Office of Cost Assessment and Program Evaluation said they do some programmatic monitoring of OCX, including on selected metrics, to inform DOD s annual program and budget submission. But those metrics do not examine the full schedule that includes the developmental test period after delivery. The Office of Cost Assessment and Program Evaluation is in a position to independently assess the full OCX program schedule, as it has previously done, but only if DOD requests that it do so. We maintain that for complex programs, such as OCX, best practices state an independent view is necessary and that a periodic schedule assessment should be performed as progress is made and risks change. 
Given the mixed progress developing software, the number of new contractor risks discovered in 2018, the limited remaining schedule reserve held by the contractor (with at least two years of significant work remaining), and the potential for doubling the time frame for the planned 7-month post-acceptance government-run developmental testing period, we determined that the recommendation remains a prudent step. Such an assessment would help inform congressional and DOD decision makers as they consider what steps may be taken to address delays to the start of OCX operations and ensure the investments in needed new receivers are properly aligned. We are sending copies of this report to the appropriate congressional committees, the Acting Secretary of Defense, the Secretary of the Air Force, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerns this report, please contact me at (202) 512-4841 or by email at chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology To determine the extent to which schedule risks may delay the delivery, acceptance, and approval for the operation of the Global Positioning System (GPS) next generation operational control system (OCX), we reviewed information relevant to OCX from Air Force GPS quarterly reports, senior management reviews, the program acquisition baseline, software development plans, monthly program management reviews that included schedule risks and progress updates, Air Force monthly acquisition reports, Air Force service cost position documentation, independent cost estimate documentation and analysis, earned value management data, Defense Contract Management Agency performance assessment reports, and slides and information provided by Raytheon Company (Raytheon), the prime contractor, on topics of our request. We reviewed the Air Force s 2018 integrated baseline review results of the period until government acceptance and assessed the full program schedule which includes the contractor s schedule, government acceptance, and post-acceptance government-run developmental testing until OCX is ready to transition to operations. We reviewed GAO s best practice guides for cost estimating and assessment and schedule assessment to identify best practices for assessing a program s cost and schedule and applied selected best practices. We also reviewed relevant reports and assessments focused on OCX completed by the government or required by Congress. We interviewed officials from the OCX program office and GPS Directorate, Defense Contract Management Agency, DOD s Office of Cost Assessment and Program Evaluation, Air Force Cost Analysis Agency, the Lead Development Test Organization for the GPS Directorate, Defense Digital Services, Office of the Director, Operational Test and Evaluation, the MITRE Corporation, and Raytheon. We conducted this performance audit from November 2017 to May 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of Defense Appendix III: GAO Contact and Staff Acknowledgments <6. GAO Contact> <7. Staff Acknowledgments> In addition to the contact named above, the following staff members made key contributions to this report: David Best (Assistant Director), Patrick Breiding (Analyst-in-Charge), Marie P. Ahearn, Pete Anderson, Brian Bothwell, Jonathan Mulcare, Andrew Redd, Karen Richey, Roxanna Sun, and Robin Wilson.
Why GAO Did This Study
The U.S. military and the public depend daily on GPS data. OCX, the ground system that will command and control next generation GPS satellites, is one of several interdependent systems the Air Force is developing to modernize GPS. OCX has been hampered by delays and $2.5 billion in cost growth since the program started in 2012. The Air Force set a new baseline for cost and schedule in 2018 after OCX breached its cost threshold in 2016.
The National Defense Authorization Act for Fiscal Year 2016 contained a provision that the Air Force provide quarterly reports to GAO on the next generation GPS acquisition programs, and a provision that GAO brief the defense committees as needed. GAO provided numerous briefings from 2016 through 2018 and issued reports in 2016 and 2017. Continuing this body of work, this report focuses on the extent to which schedule risks may affect OCX delivery, acceptance, and approval for operation.
GAO reviewed the Air Force's baseline review results, schedule risks, and progress, and applied selected best practices for cost and schedule management. GAO also reviewed OCX monthly management briefings and quarterly assessments, and interviewed officials from the OCX program office and Raytheon (the prime contractor), among others.
What GAO Found
The Global Positioning System's (GPS) next generation operational control system's (OCX) program schedule continues to be optimistic and, with significant development remaining, more delays are likely for delivery, acceptance, and operation. See the figure below for previous delays, cost growth, and the current baseline.
Completing the full OCX program schedule requires (1) timely delivery by the contractor and acceptance by the Air Force and (2) efficient completion of a planned 7-month government-run post-acceptance developmental test period. GAO found that there is potential for significant delays on both fronts. While there has been some improvement to the pace of software development, the rollout of the new development methodology has been delayed to a point where most of the contractor's schedule reserve has been used. Assumed improvements in how long it takes to repair software defects have not occurred as planned, placing additional pressure on the contractor's delivery date. Additionally, Air Force officials have acknowledged that the government developmental test period after acceptance could double in duration and delay operations further because of concurrency, test plan uncertainty, and risks of late discovery of problems.
With approximately 2 years of work remaining before delivery, there is no plan to have the full schedule independently assessed. For complex programs, such as OCX, best practices state that an independent view is necessary and that a periodic schedule assessment should be performed as progress is made and risks change. Such an assessment would help inform congressional and DOD decision makers as they consider what steps may be taken to address delays to the start of OCX operations and ensure the investments in needed new receivers are properly aligned.
What GAO Recommends
GAO recommends that DOD conduct an independent schedule assessment of the full program schedule at the end of 2019. DOD did not concur with the recommendation. GAO believes the recommendation remains valid. |
gao_GAO-20-360 | gao_GAO-20-360_0 | <1. Background> <1.1. Overview of FEMA s Disaster Workforce> The federal disaster workforce is designed to scale up or down depending on the timing and magnitude of disasters. Specifically, FEMA has the authority to augment its permanent full-time staff with temporary personnel and deploy non-FEMA staff members when needed. FEMA has historically relied on both permanent and temporary staff members to respond to presidentially declared disasters. FEMA s disaster workforce is organized according to position categories, employee types, functions, and job titles. Every FEMA employee is assigned to one or more of four position categories. Staff assigned to incident management positions deploy to disaster sites to administer federal emergency response and recovery programs. Staff assigned to the other three position categories incident support, ancillary support, and mission essential provide support services to deployed incident management staff, as well as to FEMA more generally. For example, incident support staff assist with disaster operations at the regional or national level, while mission essential staff maintain basic agency operations, such as payroll and information technology. FEMA has several different employee types that operate out of the agency s national headquarters, regional offices, and joint field offices at specific disaster locations. Each of the different employee types hold one or more of the four position categories described above. Permanent full-time employees are steady-state federal employees that support FEMA s mission areas and operations on a daily basis. Cadre of On-Call Response/Recovery Employees (CORE) are a type of temporary full-time employee hired to directly support response and recovery efforts related to disasters for a 2- to 4-year term. These positions may be renewed if there is ongoing disaster work and funding is available. Incident Management COREs are a type of CORE employee that maintain a regular state of readiness to provide emergency-state support and can be deployed up to 300 days a year in mission areas. Incident Management Assistance Teams are rapid-response teams of CORE employees that deploy to disaster sites with little to no notice and remain at disaster sites for unspecified amounts of time, depending on mission needs. Members generally receive 4-year appointments. Reservists are on-call employees who work intermittently as required during incident management operations. Reservists must be available to deploy as needed on 24 hours notice at all times during their 24 month appointment. FEMA also has the authority to augment its disaster workforce with temporary employees. This includes local hires, Surge Capacity Force volunteers, and FEMA Corps members. FEMA further augments its workforce with technical assistance contractors, who are specialized contractors hired to perform specific responsibilities. See figure 2 for more information on FEMA s employee types. As shown in figure 3, reservists made up the largest portion of FEMA s deployed workforce during peak deployments for the 2017 and 2018 disaster seasons. In 2017, reservists made up about 32 percent of FEMA s deployed workforce, followed by COREs, permanent full-time staff, and local hires. In 2018, reservists made up about 44 percent of FEMA s deployed workforce, followed by local hires, COREs, and permanent full-time staff. <1.2. 
Organizational Structures for Incident Management Staff> FEMA s incident management workforce is organized into 23 cadres. Cadres are groups organized by operational or programmatic functions. They are composed of full-time equivalent and intermittent staff members who perform incident-related duties during disaster response. For example, the Public Assistance cadre administers financial assistance to state, local, tribal, and territorial communities for debris removal, implementation of emergency protective measures, and permanent restoration of infrastructure. FEMA s incident management workforce performs functions to support its response, recovery, and mitigation missions. Each cadre supports at least one of these missions, and some cadres perform functions across more than one. Cadres also generally deploy to an incident at varying points in the response and recovery phases, depending on their functions. For example, FEMA officials said that the Logistics cadre, which coordinates and monitors all aspects of resource planning, movement, and order tracking, typically deploys staff to an incident before the Hazard Mitigation cadre, which supports risk reduction activities later during the recovery phase. See figure 4 for an example of peak deployment by cadre during Hurricane Florence and appendix II for a description of each cadre and their primary duties. FEMA manages the staffing, training, and deployment of its cadres at the national level. FEMA employees whose primary responsibilities are incident management and disaster response, such as Incident Management CORE and reservists, are generally considered national assets and may be deployed to a disaster anywhere in the country, regardless of permanent duty station. FEMA organizes incident management positions into four tiers denoted by increasing levels of leadership managerial responsibilities and further categorizes senior leaders and officers by level of disaster complexity experience. See figure 5 for more information on FEMA s position tiers and titles. All FEMA incident management employees have a primary title, which specifies their principal roles and responsibilities, and may also hold subordinate titles for additional roles and responsibilities that the employee can perform. Incident management staff members have one primary incident management title and may have multiple subordinate titles. FEMA may also assign specialties categories used to identify a specific measured (documented or credentialed) skill, task, experience, or certification that may enhance performance of an associated position to certain staff. For example, a full-time equivalent staff member who works day-to-day in FEMA s Office of Policy and Program Analysis could hold a primary incident management title as a Facilities Manager in FEMA s Logistics cadre and a subordinate title of Logistics Specialist in the same cadre, and may be certified to operate certain types of forklifts. <2. FEMA Has Mechanisms in Place to Qualify and Deploy Staff to Disasters and Faced Staffing Shortages during the 2017 and 2018 Disaster Seasons> <2.1. FEMA Designed and Implemented a System to Ensure Standards for Disaster Workforce Qualifications and Capabilities> FEMA designed and implemented the FEMA Qualification System in 2012 to standardize and manage the agency s incident workforce capabilities through prerequisite experience, training, and demonstrated performance. 
FEMA uses the system to track requirements for incident management positions and the proficiency level of staff members in those positions. According to the 2019 FEMA Qualification System Guide, training and demonstrated performance are foundational elements of the system. Required qualification system training consists of courses designed to build competency in specific position responsibilities and is offered in a variety of settings, such as the Department of Homeland Security (DHS) Center for Domestic Preparedness or at a joint field office, and through a variety of mediums, such as in a classroom, online, or on the job. Demonstrated performance involves validation of the ability to successfully and independently perform specific tasks. According to FEMA, employees conduct required training concurrently with demonstrated performance so that training builds on previous experience and coursework. After FEMA assigns an incident management position to staff, they are issued an electronic position task book, which lists and tracks the training and demonstrated performance requirements for that position. Tasks in the position task book need to be signed off by a coach-and- evaluator an individual that is trained and designated as a subject matter expert by their cadre to evaluate one or more FEMA Qualification System positions. After staff members have worked with a coach-and- evaluator to complete the tasks and trainings included in their task book, they may submit it to cadre management as part of their qualification application package. Submitted qualification packages go through a number of rounds of review before a final decision is made. First, FEMA s Qualification System Branch conducts an initial review to validate qualification package completion and requirement waivers, among other things. The branch then forwards the qualification package to cadre management for review. Cadre management reviews employees applications for all positions, including specialists and first-line supervisors, and a Qualification Review Board also reviews employees applications for leadership, upper management, and middle management positions. See figure 6 for an overview of FEMA s qualification system process. <2.2. FEMA Has a Process to Deploy Its Workforce to Disasters> A regional or national Incident Management Assistance Team is generally among the first FEMA units to arrive on the ground at a disaster site to, among other things, set up federal facilities, establish a joint field office, and coordinate with officials from the impacted region and other relevant federal, state, tribal, territorial, or local partners. If there are staffing shortages among regional full-time equivalent staff members, FEMA leadership in the region where the disaster occurs may submit a deployment request for additional incident management staff members through the Deployment Tracking System. After the Incident Management Assistance Team stands up a joint field office, the Federal Coordinating Officer assumes authority over all emergency response and recovery efforts for the disaster, which includes filling staffing needs. To determine the number and type of incident management personnel needed in each position to meet FEMA requirements for any given disaster, the Federal Coordinating Officer coordinates with regional leadership, the joint field office s Chief of Staff, and cadre management. The basis of this determination is a variety of factors related to the nature and scope of the disaster. 
For example, Individual Assistance and Public Assistance needs are based in part on preliminary damage assessments to determine the level of program assistance that may be required. To fill identified staffing needs, field leadership uses a standard process to request specific FEMA Qualification System titles and proficiency levels. Once a standard deployment request is approved, the Deployment Tracking System identifies staff members that match the requested positions, skill sets, and qualification status using a preprogrammed, automated process. The Deployment Tracking System then notifies staff members selected in a rotational order based on length of time since their last deployment, among other things. If an employee declines a deployment request, the Deployment Tracking System automatically sends a request to the next staff member with that incident management position title on the deployment order list. Standard deployment requests are filled by deploying employee types in the following order: 1. Incident Management COREs 2. Reservists 3. Full-time equivalent employees other than Incident Management At the incident, the Federal Coordinating Officer and other field leadership staff are responsible for overseeing coordinating the responders working for FEMA. Generally, after response operations and programs are initiated, staffing needs may change. At this point, field leadership may decide to demobilize some personnel deployed in certain cadres. The decision to do so is based on workload, complexity of operations, and disaster needs. <2.3. FEMA Faced Staffing Shortages in Key Cadres during the 2017 and 2018 Disaster Seasons> According to FEMA s 2017 Hurricane Season After-Action Report, FEMA did not meet its annual staffing target for disaster personnel during the 2017 hurricane season. FEMA uses force structure and force strength targets to estimate staffing requirements for incidents and analyze the number of staff the agency has available against these targets. FEMA establishes a longer-term target for the number of incident management staff needed to meet mission needs, called force structure, and tracks the actual number of incident management staff who can deploy at a point in time, which it calls force strength. FEMA uses its force strength measure to track progress towards meeting staffing goals set out in the force structure target and also sets interim targets each fiscal year for reaching the longer-term force structure target. In 2015, FEMA s Workforce Management Division conducted a review of FEMA s workforce in coordination with the 23 cadres and adopted a force structure target of 16,305. According to FEMA, this target was established based on a number of considerations, including potential incident levels and historical staffing data for incident management staff deployed to different level events. The agency s force strength at the end of fiscal year 2017 was 11,656. On average, reservists had the largest gap between force strength and established annual targets. For example, at the end of fiscal year 2017, FEMA s force strength for reservists was 6,793, which was 15 percent short of its target of 7,982 for that year. In 2019, FEMA s Workforce Management Division completed a similar review of its incident management workforce and updated its force structure target to 17,670 incident management personnel, which it aims to achieve by 2025. 
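The gap between force strength and the force structure target can be computed directly from the figures cited above. The short Python sketch below simply reproduces that arithmetic for the end of fiscal year 2017 and is illustrative only.

```python
# Gap between available incident management staff (force strength) and the
# staffing target (force structure), using end of fiscal year 2017 figures
# cited in this report.
def shortfall(force_strength: int, target: int) -> tuple[int, float]:
    gap = target - force_strength
    return gap, 100 * gap / target

agency_gap, agency_pct = shortfall(11_656, 16_305)      # all incident management staff
reservist_gap, reservist_pct = shortfall(6_793, 7_982)  # reservist force strength vs. target

print(f"Agency-wide: {agency_gap:,} staff short ({agency_pct:.0f} percent below target)")
print(f"Reservists:  {reservist_gap:,} staff short ({reservist_pct:.0f} percent below target)")
```

The same arithmetic applies against the updated force structure target of 17,670 that FEMA adopted in 2019.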
This new target was established using an updated methodology based on information on historical disasters and deployed incident management staff, along with input from each cadre s management and other officials with expertise on staffing patterns throughout disasters. According to FEMA s 2017 Hurricane Season After-Action Report, FEMA faced shortages across over half of its cadres when disasters made landfall or began during the 2017 season, and we found that FEMA faced similar shortages during the 2018 disaster season. For instance, according to FEMA s deployment data, 18 of 23 cadres operated with 25 percent or fewer staff available to deploy when Hurricane Maria made landfall shortly after Hurricane Irma hit Florida, including the Individual Assistance, Logistics, and Hazard Mitigation cadres. See figure 7 for more information on cadre staffing levels at the start of major disasters during the 2017 and 2018 disaster seasons. In addition, many staff members who showed availability to deploy declined when requested to do so. For example, according to FEMA officials, the austere conditions in Puerto Rico and fatigue from previous deployments to hurricanes Harvey and Irma contributed to the high declination rate for Hurricane Maria deployment requests. In addition, FEMA officials stated that permanent full-time employees may not consistently update their availability or may be unavailable to deploy because of their steady-state responsibilities. Further, reservists may decline deployment requests because of the lack of employment protections, which can create difficulties with leaving their jobs to work intermittently in disasters. See table 1 for the declination rates for eight major disasters during the 2017 and 2018 disaster seasons. According to FEMA officials, the agency s staffing shortages may have originated in part from policy changes in 2012. Specifically, officials said that a large number of incident management staff left the agency after changes were made to the agency s reservist program and qualification system for disaster personnel in 2012. For instance, officials told us many reservists with years of experience and technical skills left FEMA when the reservist program increased the number of days they were required to deploy or when many were assigned to positions in the qualification system with lower levels of responsibility than what they previously held in order to meet force structure targets. FEMA took various actions to address the staffing shortages during the 2017 and 2018 disaster seasons and used new approaches to augment its workforce. For example, in 2017, FEMA reported that it coordinated the deployment of 2,740 Surge Capacity Force volunteers from eight DHS components. DHS also expanded the Surge Capacity Force to other federal agencies outside DHS for the first time in 2017, including 34 federal departments and agencies in the program, increasing the Surge Capacity Force by 1,323 employees. Additionally, FEMA used local hires to augment its workforce and expedited the local hiring process in response to hurricanes Harvey, Irma, and Maria, resulting in the onboarding of 4,095 local hires from August to November 2017. The Federal Coordinating Officer who initially managed the Puerto Rico joint field office instituted a goal of having a staff composed of 90 percent local hires for recovery efforts. 
According to the official, investing heavily in local hires and converting them to COREs will help build FEMA s disaster workforce for long-term Puerto Rico recovery efforts and any future disasters that may occur in the region. As mentioned previously, FEMA also conducted a review of its incident management workforce in 2018 to determine the force structure needed to effectively respond to disasters moving forward. FEMA officials we spoke with said the agency has taken several steps to meet its new force structure, such as establishing a program management office that is dedicated to achieving the agency s staffing targets. Cadre management officials we spoke with said that FEMA has hiring initiatives in place or planned to help meet the staffing needs established from the review and noted that it will take time for new staff to develop the skills and experience to meet mission needs in the field. <3. FEMA Did Not Provide Reliable and Complete Staffing Information to Field Officials during Disasters and Lacks Mechanisms to Assess How Effectively It Deployed Staff> FEMA s qualification and deployment processes did not provide reliable and complete information on staff skills and abilities to ensure its workforce was effectively deployed and used to meet field needs during the 2017 and 2018 disaster seasons. In addition, FEMA lacks mechanisms to assess deployment outcomes or the extent to which it deployed the right mix of staff at the right time to meet mission needs. <3.1. FEMA s Qualification and Deployment Processes Did Not Provide Reliable Staffing Information to Ensure Its Workforce Was Effectively Deployed and Used in the Field> FEMA field officials in our focus groups and interviews said they experienced a number of challenges with the reliability of information from FEMA s qualification and deployment processes and systems during the 2017 and 2018 disaster seasons. Specifically, these officials reported that qualification status was not an accurate indicator of ability to perform, which affected disaster assistance delivery and created difficulties with ensuring the right mix of staff were deployed and effectively assigning responsibilities at disaster sites. Officials also reported other challenges with identifying and leveraging staff skills, including lack of information about specialized abilities and expertise. In response to its experience with recent disaster seasons, FEMA has taken or planned some actions to improve its qualification and deployment processes. However, these actions have not been fully implemented and do not fully address the information shortcomings that field officials experienced, as discussed below. <3.1.1. Field Officials Reported Qualification Status Was Not a Reliable Indicator of Staff s Ability to Perform Their Positions in the Field> FEMA s qualification and deployment processes and systems do not provide accurate and complete information about staff members abilities to ensure field leadership and managers get staff with the right skills at the right time or to most effectively employ and leverage the staff that are deployed to support FEMA s missions. As discussed earlier in this report, field leadership use the Deployment Tracking System to request staff based on mission needs. The system uses an automated process to select who to deploy from a list of available staff by position and qualification status, and relies on the FEMA Qualification System to identify staff members who are qualified in their positions and those who are trainees. 
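The automated selection just described can be thought of as a filter, sort, and cascade routine: match available staff to the requested title and qualification status, order them by employee type and time since last deployment, and move to the next person when someone declines. The sketch below is a minimal illustration in Python and is not FEMA's Deployment Tracking System; the field names, priority values, and the accepts callback are assumptions made for illustration only.

```python
# Minimal sketch of the deployment-selection logic described in this report:
# filter by position title and qualification status, order by employee type
# (the fill order cited earlier), then rotate by time since last deployment,
# cascading past declinations. All data fields are illustrative assumptions.
from dataclasses import dataclass
from datetime import date
from typing import Callable, Optional

TYPE_PRIORITY = {"IM-CORE": 0, "Reservist": 1, "FTE": 2}  # fill order per FEMA guidance

@dataclass
class Responder:
    name: str
    employee_type: str          # "IM-CORE", "Reservist", or "FTE"
    position_title: str         # FEMA Qualification System primary title
    qualified: bool             # True = qualified, False = trainee/candidate
    available: bool
    last_deployment_end: date

def select_responder(roster: list[Responder],
                     title: str,
                     require_qualified: bool,
                     accepts: Callable[[Responder], bool]) -> Optional[Responder]:
    """Return the first matching responder who accepts the deployment request."""
    candidates = [r for r in roster
                  if r.available
                  and r.position_title == title
                  and (r.qualified or not require_qualified)]
    # Order by employee-type priority, then by longest time since last deployment.
    candidates.sort(key=lambda r: (TYPE_PRIORITY[r.employee_type], r.last_deployment_end))
    for responder in candidates:
        if accepts(responder):   # a declination cascades the request to the next person
            return responder
    return None                  # request goes unfilled, which appears in the field as a shortage
```

As the report notes, a request that is declined repeatedly or that exhausts the candidate list is what field leadership experiences as a staffing shortage, regardless of how many staff appear available in the system.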
Qualified staff members are expected to be able to successfully and independently perform the duties of their position. However, as shown in table 2, our focus groups with incident management staff and interviews with field and regional officials indicate that disaster personnel experienced significant limitations with qualification status in the FEMA Qualification System matching performance in the field. Very few found that it was a good indicator of staff abilities. For example, participants in two of 14 focus groups described positive experiences with qualification status as an indicator of staff abilities; while, in all 14 groups, participants stated that staff members who were designated as qualified in the system did not have the skills or experience to perform effectively in their positions. Officials in 15 of our 29 field and regional office interviews had similar experiences. For example, Individual Assistance managers in one of the joint field offices we visited said that they had 20 staff members who were designated as qualified but not capable of performing basic tasks, such as knowing how to access the program s information system. Hazard Mitigation managers from the same joint field office said that about half of their staff who were designated as qualified could not proficiently perform their job duties. Participants in our focus groups and field leadership and managers we interviewed cited numerous operational challenges that resulted from qualification status not being an accurate indicator of staff abilities. Specifically, they stated that (1) staff designated as qualified who lacked the skills and knowledge to perform their positions negatively affected disaster assistance delivery, staff workload, and morale and (2) the unreliability of qualification designations hindered their cadre s ability to staff disasters with the right mix of staff at the right time and effectively assign responsibilities. Table 3 provides examples of the challenges they experienced. Participants in our focus groups also cited a range of challenges with position task books and the qualification process that in their view contributed to qualification status not being an accurate indicator of staff proficiency. For example: Position task book tasks. In 12 of our 14 focus groups with FEMA incident management staff, participants said the tasks in the position task books did not fully reflect the skills or competencies needed to perform positions. For example, a participant in one focus group said that the tasks in her book did not incorporate sufficient soft skills, such as the ability to communicate with sensitivity and empathy and other interpersonal skills, which are important because staff in her cadre often interact with disaster survivors who have suffered great losses. Coach-and-evaluator process. Participants in 12 of our 14 focus groups also raised concerns with how coach-and-evaluators endorsed tasks, such as lack of consistency and objectivity with signing off on tasks. These issues included coach-and-evaluators signing off on large numbers of tasks too quickly or easily, which participants in 12 focus groups said occurred. Some participants who functioned as coach-and-evaluators said they felt pressure from staff to endorse tasks because reservists receive salary increases when they get qualified. Participants also told us that cadre management may push for staff to be qualified to meet qualification rate targets. 
A participant in one of our supervisory-level focus groups said he felt pressure from both these sources and admitted to signing off on tasks even though he did not feel the staff member could proficiently perform them. He said that the staff member was qualified in the FEMA Qualification System and later deployed to a smaller disaster, where she was the sole person responsible for her functional area and unable to do the job. Difficulties completing position task books. Participants in all 14 of our focus groups also raised various challenges with completing their task books. These challenges include a lack of available coach-and-evaluators to sign-off on tasks; lack of opportunities to deploy or perform certain tasks; and being unable to complete all the training courses in their task books because classes were unavailable, full, or conflicted with mission needs; among others. As a result, staff members who are able to perform their positions may not be designated as qualified in FEMA s qualification system. <3.1.2. Field Officials Cited Challenges with Using FEMA s Qualification and Deployment Processes to Fully Identify and Use Staff Skills and Experience> Participants in our focus groups and leadership and managers in our field and regional office interviews reported other challenges with being able to fully identify and use staff skills and experience during disasters. For example: Position titles not fully reflecting staff abilities. FEMA allows staff to have one primary position title in which they are qualified or have an open task book (trainee or candidate status). Officials in 14 of our 29 field and regional interviews and participants in eight of our 14 focus groups with incident management staff raised concerns with this policy. Specifically, officials noted that many employees have experience and expertise in multiple cadres or programs within a cadre, but their full abilities are not reflected in FEMA s qualification and deployment systems. As a result, field leadership and managers may not be able to fully identify and use the available skills and experience of FEMA s workforce during disasters, which can limit FEMA s capacity and flexibility to meet disaster needs, especially when demand is high. For example, one regional official said the Deployment Tracking System has Operations Section Chief as her position title but does not capture her ability to deploy as an Individual Assistance Branch Director, another position in which she has considerable experience. Overly broad position titles and lack of information on specialized skills. In addition, participants in our focus groups told us that some cadre position titles are overly broad (five of 14 groups) and that FEMA s qualification and deployment systems do not include information on specialized skillsets and experience that would be useful for making deployment and staffing decisions (10 of 14 groups). Officials in 14 of our 29 field and regional interviews raised one or more of these same issues. For example, Logistics managers in one of the joint field offices we visited said that the Logistics Specialist title is too general and captures the majority of warehouse personnel without specifying the actual responsibilities they are able to perform. They noted that, as a result, management needs to query staff members when they arrive to help determine their skills and, in many cases, assign responsibilities by trial and error. 
According to officials, this can create a safety hazard because some responsibilities require specific skills, such as the ability to operate a certain type of forklift. They also noted that while the Deployment Tracking System allows cadres to input specific skillsets, such as forklift certification, this field has not been consistently filled in. Limitations with fully capturing permanent full-time employee and CORE qualifications. In seven of our eight focus groups with permanent full-time employees and COREs, participants stated that it is not a priority for them to complete their task books because they have little or no incentives to be designated as qualified in the FEMA Qualification System. For example, some participants noted that unlike reservists, their pay and professional development is not directly tied to their qualification status or position. Another participant said that he has been deployed to many disasters and does not have any tasks in his task book endorsed because he is focused on meeting mission needs and does not care enough about being qualified in the system to take the time to complete his task book. Some regional and field officials also raised the same issues. For example, Response Division managers in one of the regions we selected for interviews said that some of the best talent at FEMA, such as staff members with years of experience who work full-time in positions similar to their incident management titles, have never opened or completed a task book because there is no incentive for them to do so. As a result, FEMA may not be fully capturing the qualifications and skills of permanent full-time employees and COREs. <3.1.3. FEMA Has Taken Actions to Help Improve Its Qualification and Deployment Processes, but These Actions Do Not Fully Address the Key Challenges Field Officials Identified> FEMA has taken a number of actions intended to help address some of the challenges with its qualification and deployment processes that hindered its ability to provide accurate and complete staffing information to field officials. FEMA headquarters officials acknowledged the challenges we identified with the FEMA Qualification System and noted that the system is still evolving. Key efforts to improve the reliability of qualification designations include: Qualifying staff members who could proficiently perform their positions. During the 2017 hurricane season, FEMA took steps to qualify staff members who were not designated as qualified in the FEMA Qualification System but could proficiently perform the duties of their position. For example, according to the agency s after-action report for the hurricane season, FEMA temporarily changed qualification procedures during the season to more rapidly qualify employees who had demonstrated their skills outside the traditional process. FEMA headquarters officials stated that this helped qualification designations better reflect the skills and abilities of these staff members. Other actions that FEMA has taken to help qualify staff include allowing cadre management to waive certain tasks or training, allowing specified tasks to be signed-off on during training exercises, and, as discussed later in this report, conducting a pilot on deploying staff to specifically serve as coach- and-evaluators during disasters. Revising position task books. 
FEMA headquarters officials said they began reviewing task books in spring 2017 to help ensure that tasks are measurable and better align with the knowledge, skills, and abilities needed to perform positions. Officials said this project was completed in November 2018 and the revised task books have been implemented. They noted that this will help streamline the qualification process, allow for more objective evaluation, and help ensure tasks better reflect the skills needed on the job. According to FEMA officials, they plan to continue to work with the cadres to ensure task books align with the skills and competencies required to complete disaster missions. Enhanced coach-and-evaluator training. FEMA revised its training for coach-and-evaluators to provide more guidance and tools for how to accurately evaluate staff and improve the integrity of the evaluation process. Specifically, in October 2017, FEMA updated the coach-and- evaluator training class and added material on, for example, techniques for evaluating with integrity, types of observation, and documenting task performance by including comments in the task books. All staff members must pass the class by performing a capstone exercise and taking a written exam before being able to serve as a coach-and-evaluator. Additional controls in the qualification process. Since 2017, FEMA has established additional controls to provide more oversight on the qualification process. For example, headquarters officials said that as part of the qualification review process, officials may review the qualification packages to check how many tasks were endorsed during a given period of time. If the number is unusually large, they may note it for cadre management to consider when making qualification decisions. This step was incorporated in the new FEMA Qualification System Guide that was issued in August 2019. The guide also includes changes to the Qualification Review Board process, such as requiring candidates for leadership and upper-level management positions to attend the review in person and answer questions about their deployments, training history, and task book completion. FEMA has also taken some initial actions and considered options to better identify and use staff skills and experience in the field. For example, FEMA headquarters officials said they are aware that limiting staff to one primary position or one open task book may not fully capture their experience and abilities and are exploring ways to address it. However, they stated that this is a complex issue and allowing staff to hold multiple primary positions could affect the time it takes for staff to complete task books and, on a broader level, pay scales, career progression paths, and training budgets. They also noted that this could create complications with how to deploy staff if multiple cadres need positions filled during times of scarce resources. FEMA headquarters officials told us that staff can be deployed in positions other than their FEMA Qualification System positions but acknowledged that because these other positions are not systematically recorded in the Deployment Tracking System, leadership would need to be aware of staff skills and abilities through informal means. Further, FEMA headquarters officials said that as part of its review of the incident management workforce, they have revised the position titles for certain cadres, which they noted could potentially result in the titles better reflecting staff roles and responsibilities. 
Officials added that they need to balance the enhanced staffing information that more specific titles provide with the flexibility of broader titles, particularly for entry-level positions. FEMA has also included information on assigning specialized skills to staff in the Deployment Tracking System in its new FEMA Qualification System and deployment guides. While FEMA has taken some steps to improve its qualification and deployment systems, its efforts primarily affect the qualification process moving forward and do not fully address field officials experiences with not having reliable information on staff qualifications and skills to effectively use the available workforce to meet mission needs. For example, the changes to the position task books, coach-and-evaluator program, and FEMA Qualification System guide do not affect staff members who have already been qualified in the system but cannot perform their duties, and as discussed later in this report, FEMA currently does not have an effective performance evaluation system in place to identify and address skill deficiencies. In addition, the agency has not taken actions to address the challenges with identifying staff who can serve multiple incident management positions and fully capturing the qualifications of permanent full-time employees and COREs. Also, headquarters officials stated that FEMA has revised some of its position titles, but not all the cadres that reported challenges with overly broad titles had adjustments made to their positions. Further, this initiative is in the early stages of implementation and it is too soon to assess whether the revised positions will provide better information to field officials on staff members specific responsibilities. Further, the lack of reliability of qualification status as an accurate indicator of staff skills and abilities has been a persistent issue with the FEMA Qualification System. For example, we stated in our 2015 report on FEMA workforce management that in five of 11 focus groups with permanent full-time employees and COREs, participants cited concerns about the implementation of the FEMA Qualification System, and some observed employees whose training and experience did not reflect the position and qualification level to which they were assigned. Also, in a 2016 report on the reservist workforce, the DHS Office of the Inspector General stated that in five of the seven disaster deployments included in their review, joint field office staff encountered problems obtaining capable reservists with position titles under the FEMA Qualification System. These officials said that reservists sometimes lacked the experience and training to perform their duties, and as a result, were reassigned to positions outside their system titles. One of the purposes of the FEMA Qualification System is to ensure consistency in skill identification and deployable assets for positions across the agency. In addition, FEMA s 2018-2022 Strategic Plan states that the agency should guarantee that the FEMA Qualification System maximizes the ability of FEMA staff to use their skills and talents to deliver the best outcomes possible. However, as discussed above, FEMA experienced challenges with achieving these objectives. In addition, Standards for Internal Control in the Federal Government directs management to use quality information to achieve the agency s objectives. 
It states that, as part of designing control activities for human capital management, management should continually assess the knowledge, skills, and ability needs of the agency to help achieve organizational goals. According to the standards, only when the right personnel for the job are on board and are provided the right responsibilities, among other things, is operational success possible. In addition, according to The Standard for Program Management, program monitoring, reporting, and controls include the development of plans to respond to identified issues. It also states that program management should include timeframes and milestones for achieving program benefits and obtaining feedback from stakeholders to better understand the concerns related to the program and impact of the program. Given the complexity of FEMA's workforce and the persistent issues with the reliability of qualification designations and other challenges with identifying the knowledge, skills, and abilities of its staff, FEMA would benefit from developing a comprehensive plan with timeframes and milestones to address issues with the quality of information its qualification and deployment processes and systems provide to field officials. Such a plan would also benefit from the inclusion of perspectives from field leadership who depend on the information. FEMA officials acknowledged the staffing information challenges we identified and noted that they have not developed a plan to address them because the issues are multifaceted (changes in policy can potentially affect numerous areas of the workforce) and they had been focused on other initiatives, such as revising force structure targets and streamlining the qualification process. However, they said that such a plan would be useful. Developing a plan to address the challenges that hindered FEMA's ability to provide reliable and complete information about staff skills to field leaders and managers would better enable the agency to use its disaster workforce as flexibly and effectively as possible to meet mission needs in the field. <3.2. FEMA Does Not Have Mechanisms to Assess How Effectively Its Disaster Workforce Was Deployed to Meet Field Needs> FEMA does not have mechanisms to assess the extent to which its deployment strategies met mission needs in the field during disasters. FEMA's Deployment Guide states that for the agency to fulfill its preparedness, response, recovery, and mitigation missions, it must be able to effectively and efficiently deploy its responders through a process that sends the right people to the right place at the right time with the right qualifications. FEMA has measures and collects data related to staffing levels and availability, such as comparing cadre force strength to annual targets, comparing staff qualification rates to targets, determining the percent of staff in each cadre that show availability in the Deployment Tracking System, and tracking the number of staff deployed to disasters. However, none of these measures or data directly demonstrate deployment outcomes or how effectively FEMA deployed available staff to meet mission needs. Headquarters officials said that, among other things, they generally have looked at the number of staff members that were deployed to disasters, as well as declinations, to assess the extent to which they were able to meet staffing needs. They noted that this assumed the number, type, and timing of staff deployments matched field needs.
However, our focus groups and interviews with field officials indicate that this was not generally the case. For example, in all 17 of our focus groups, participants experienced challenges with the staffing, skill, or experience levels of the deployed workforce, such as having too few staff members with the right technical skills to perform their missions efficiently and effectively. Further, in 12 of the 17 focus groups we conducted, participants said that there were challenges with the timing of deployments, such as staff from certain cadres being deployed too early or redeploying staff from key positions when the mission need was still high. In most of our interviews with field leadership and managers, officials described similar challenges with the number, skill level, or timing of staff deployments. Participants in our focus groups and field officials we interviewed said they make every effort to meet mission needs despite challenges with staff deployment, but noted that deployment outcomes that do not meet field needs can increase staff workload and delay disaster assistance, among other impacts and inefficiencies. Our work on strategic human capital management states that effective geographic and organizational deployment strategies can enable an organization to have the right people, with the right skills, doing the right jobs, in the right place, at the right time by making flexible use of its internal workforce. Additionally, Standards for Internal Control in the Federal Government states that management should establish and operate monitoring activities to continually monitor the internal control system, evaluate results, and remediate any deficiencies identified on a timely basis. As part of remediating deficiencies, the standards advise management to report and evaluate issues that were identified as a result of the monitoring and take corrective actions to address them. As discussed earlier in this report, field leadership requests staff based on cadres' anticipated needs using estimates of the severity of damage and the nature and scope of the disaster, among other factors. However, FEMA headquarters officials told us their data systems cannot determine the extent to which field deployment requests were met during disasters. In addition, these officials noted that they have not established other mechanisms to assess deployment outcomes because this is extremely complex and they are considering how best to do so. They noted that they have been working with in-house data science experts to consider what kinds of measures and metrics they could design to assess deployment outcomes, but they did not have any concrete proposals or time frames for when this might be completed. Without mechanisms to assess deployment outcomes, FEMA officials in headquarters lack critical information to monitor and evaluate the extent to which its deployment policies and strategies effectively placed staff with the right skills in the right place at the right time to meet mission needs in the field. As a result, FEMA may miss opportunities to identify when corrective actions are required to better deploy its workforce to meet field needs, such as adjusting the timing and staging of deployments, and the amount of staff deployed. <4.
FEMA Staff and Managers Experienced Challenges with Staff Development Efforts Intended to Enhance the Skills and Competencies Needed During Deployments> We found significant shortcomings in FEMA's ability to ensure staff development, which consists of courses, on-the-job learning, and coaching and mentoring, for the skills and abilities needed in the field. Specifically, although the current approach to developing staff includes efforts to provide training courses, opportunities for on-the-job training and mentoring, and a performance evaluation system, each of these elements has limitations as implemented, and they are not effectively coordinated to help ensure systematic and comprehensive staff development. Staff and managers cited certain recurrent challenges with staff development in focus groups and interviews, such as (1) limitations on the ability to take useful classroom training, (2) challenges providing or receiving on-the-job training and mentoring, (3) inconsistent use of performance evaluations, and (4) difficulty with ongoing development when not deployed to a disaster. <4.1. Some Staff and Managers Cited Challenges with the Ability to Take Useful Classroom Training> One way staff members develop skills and competencies is through completing required courses in their position task books. However, in 10 of our 17 focus groups, participants discussed barriers to taking courses through FEMA's qualification system that in their view would help them better perform their jobs, such as being unable to take courses that are not in their position task books or if they are already qualified in their positions. Officials in 11 of the 29 field and regional interviews we conducted raised the same issue. FEMA headquarters officials stated that staff are generally required to obtain cadre management approval before they can register for incident management-related courses that are not specifically listed in their position task books, but staff told us it can be difficult to receive approval because of funding limitations. For example, a Hazard Mitigation official at one joint field office we visited described a situation where a staff member wanted to take a course on mitigation and engineering techniques for coastal construction that would have benefitted the work the person was doing, but was not able to get approval. Participants in our focus groups also told us that staff deployed to a position other than their FEMA Qualification System title had been unable to take courses related to the work they were doing. Moreover, staff members said the FEMA Qualification System limits training opportunities for those already qualified in their positions. For example, some staff members said that once they had completed their position task book, they were sometimes unable to get training that included new information on updated policies or procedures specific to their work. An official in one of the FEMA regions we selected for interviews said that some staff members in the region who were qualified would have preferred to be designated as trainees in the FEMA Qualification System because it would allow them to take relevant courses. In March 2020, FEMA officials told us the agency has recently taken actions to make it easier for cadres to send staff to courses that are not required in their position task book or for positions where the person is qualified.
Finally, participants in our focus groups with permanent full-time staff members reported challenges with being able to take courses to develop their incident management competencies. These participants told us it is challenging for them to take disaster-related courses while performing their steady-state work. They said this is because there is no budget for localized disaster-related courses in their offices and it can be difficult to get approval and take time from their duties to travel for this type of training. <4.2. Some Staff and Managers Cited Challenges with Providing and Receiving On-the-Job Training and Mentoring during Disasters> Focus group participants frequently said developing skills on the job was the most useful type of training they receive. Specifically, participants in 12 of our 17 focus groups said on-the-job training was the most useful kind of training and participants in 13 of the 17 focus groups said this is how they received most of their training. In addition, headquarters officials in the Individual Assistance cadre said one of the benefits of on-the-job training during deployments is that it provides an opportunity for staff to learn and practice their craft in a setting that is difficult to simulate during training. The FEMA Qualification System Guide states that FEMA uses coach-and-evaluators as the primary mechanism for staff to learn the specific skills needed for each position. However, staff members we spoke with said they have difficulties developing their skills through the qualification process. Specifically, in seven of the 17 focus groups, participants told us they did not get feedback or coaching on the job. According to staff in our focus groups, the coach-and-evaluator aspect of the qualification system is not the ideal mechanism to support on-the-job training and development because it often emphasizes the evaluation role over the coaching role. In nine of 14 focus groups, participants told us the position task book process focuses more on completing tasks than on performance, development, or building competencies. Officials in eight of our 29 field and regional interviews reported similar experiences. Some staff who did receive coaching said it was often based on the interest level and time that an individual was willing to invest and was not done in a systematic or consistent way. Moreover, a commonly cited challenge in 11 of our 14 focus groups was the lack of coach-and-evaluators to sign off on position task books. Officials in 16 of our 29 field and regional interviews raised the same issue. Participants in our focus groups said they had difficulties finding available coach-and-evaluators at disaster sites. For example, our analysis of FEMA data found that 36 percent of FEMA's incident management workforce did not have a coach-and-evaluator at the start of their deployment during the 2017 and 2018 disaster seasons. In addition, according to staff in our focus groups and interviews, coach-and-evaluators at the disaster often do not have time to coach staff. For example, officials at one of the joint field offices we visited said mission needs always come first and coaching and evaluating responsibilities are often not anyone's priority. In addition to on-the-job training challenges related to the FEMA Qualification System, focus group participants also reported more general challenges with on-the-job training.
For instance, multiple supervisors in the Logistics cadre at one joint field office said that in addition to doing their own work, experienced staff members need to spend significant time training others, which competes with performing their mission. Furthermore, participants in seven of the 17 focus groups said providing on-the-job training was particularly challenging at the beginning of a disaster, when the disaster is often hectic and at its busiest. Recovery Division officials in a FEMA regional office said a challenge at the start of the disaster is finding staff members who know what to do and have the time to train those who do not. Staff members also described difficulties with providing and receiving on-the-job training in later phases of a disaster. In one focus group with supervisors, a participant said that once the disaster has reached a pace where they have time to train, staff members are often redeployed. Finally, in 16 of our 29 field and regional interviews, officials said there was a lack of mentoring and sustained staff development across disasters. For example, officials at one joint field office told us that once staff members complete their position task book, they generally do not receive any additional coaching or mentoring in that position. These officials stated that reservists have a more difficult time identifying mentors than other employee types because they deploy intermittently and likely have different supervisors and coach-and-evaluators each time they deploy. In addition, FEMA officials said coach-and-evaluators are not meant to serve as mentors. FEMA human capital officials said that different offices can develop their own mentoring programs but these may not be available to all employee types. As a result, not all staff members know to ask for, or expect to receive, mentoring. FEMA headquarters officials acknowledged some of these staff development challenges and described actions they have planned, or are underway, to help address some of them. Specifically, FEMA revised the coach-and-evaluator course in 2017 to place a greater emphasis on the coaching responsibilities of the coach-and-evaluator role. For example, the revised course teaches effective coaching strategies, including how to give effective, actionable feedback. Also, in summer 2019, FEMA conducted a pilot with the National Disaster Recovery Support cadre to deploy a single coach-and-evaluator solely in that position and communicated to cadre management that this individual was not to be used for other disaster-related responsibilities. FEMA officials said this pilot was a success. In evaluating the pilot, FEMA said the coach-and-evaluator was able to devote time to proper training and answering any questions presented. Finally, the agency revised the FEMA Qualification System Guide in August 2019, which included clarifying differences between coaching and evaluating. The revised guide states that, as part of the position task book process, a coach explains, demonstrates, trains, assesses, and documents an individual's task performance while an evaluator observes, assesses, documents, and endorses an employee's independent performance of specific tasks. <4.3. FEMA Officials Reported Inconsistent Use of Performance Evaluations at Disasters> Headquarters officials told us that during the 2017 and 2018 disaster seasons, disaster workforce employees inconsistently received performance evaluations when deployed.
Performance evaluations at disasters are to be completed on a paper form by a temporary duty supervisor. If the staff member has a coach-and-evaluator, the temporary supervisor may request input regarding progress toward mastering the skills covered by the position task book. The temporary supervisor is supposed to provide that evaluation to cadre management if an evaluation was completed. However, FEMA officials told us there are no mechanisms in place to ensure these steps occur or that the evaluations will be used to help develop staff competencies, and it is not something FEMA officials monitor. Further, FEMA headquarters officials stated there are no controls in place to ensure supervisors rate staff consistently from supervisor to supervisor. These officials told us they are aware of some problems with how the agency conducts performance evaluations for the disaster workforce and are developing changes to address them. For example, in the months prior to the 2017 disasters, the agency began revising its performance evaluation system, but suspended its efforts when that year s disasters occurred. In 2019, FEMA resumed this initiative and agency officials told us they expect it will be implemented by June 2020. They said the new system will include replacing the paper form with an electronic program that will be integrated into FEMA s other personnel systems, such as the Deployment Tracking System. Further, in March 2020, FEMA officials told us they are finalizing a directive intended to provide guidance to supervisors at disasters on how they are to provide deployment performance evaluations. In addition, in April 2020, FEMA issued guidance for the administration, implementation, and oversight of a performance management process that will provide reservists with annual performance appraisals. FEMA officials told us this will help ensure that reservist performance appraisals accurately reflect their job performance and assist them in maintaining and improving performance in the future. The agency s reservist performance management initiative is expected to be completed by January 2021, but officials have not provided specific interim milestones or target dates. <4.4. Staff and Managers Cited Difficulties with Receiving Staff Development When Not Deployed to a Disaster> Many disaster workforce staff members are not likely to get ongoing development directly from their cadre management when they are not deployed. According to data from FEMA, there was one cadre supervisor of record for every 128 reservists and Incident Management CORE staff as of June 1, 2019. During the 2017 and 2018 disaster seasons, this ratio was higher in certain cadres. For example, there was one supervisor of record for every 807 reservists and Incident Management CORE staff as of June 1, 2017 in the Individual Assistance cadre. FEMA headquarters officials told us they are assessing what the right mix of supervisors to reservists should be across the cadres. Further, staff members told us they have difficulties getting ongoing development through hands-on training outside of a disaster. While FEMA headquarters officials told us that cadres periodically conduct mission rehearsal trainings each year to prepare their staff for disasters, they also said not all staff can attend them because cadre management determines which staff to invite. These trainings are designed for staff members to simulate a potential disaster scenario while in a training environment. 
Finally, FEMA headquarters officials stated that receiving ongoing development for staff who do not deploy frequently, such as reservists, can be a challenge. The only instances when reservists are paid while not deployed occur when they complete 40 hours a year of mandatory training or 32 hours a year coordinating with their cadre. In addition, an individual in one of our focus groups with permanent full-time employees said reservists had difficulties accessing online mandatory training because they did not have a FEMA laptop. A recovery manager in a FEMA regional office told us that it can be challenging to provide staff development for reservists because they are generally sent to the field to do a discrete job and have limited opportunities to develop their skills and competencies when not deployed. As discussed above, FEMA's disaster workforce reported challenges receiving staff development through the agency's existing methods, which consist primarily of classroom training, on-the-job training and mentoring, and performance evaluations. While FEMA has taken actions to address some of the challenges staff experienced, opportunities remain to ensure more effective and consistent staff development. Specifically, FEMA does not have a staff development program in place to provide assurance of effective and comprehensive staff development of the skills and abilities needed during deployments. Further, FEMA headquarters officials said it is primarily the responsibility of staff members to find available coach-and-evaluators at disaster sites and the agency has not developed a mechanism to help ensure deployed staff are consistently paired with coach-and-evaluators. In addition, FEMA headquarters has not taken actions to address the challenges we identified with the lack of mentoring for staff deployed to disasters. Further, given that FEMA's performance evaluation initiatives are not yet implemented, it is too early to assess how effective they will be in enhancing staff development, including whether they will have mechanisms in place to ensure employees receive useful evaluations or the extent to which they will be coordinated with other development activities, such as coaching through on-the-job training. Standards for Internal Control in the Federal Government states that management recruits, develops, and retains competent personnel to achieve the entity's objectives. This includes enabling individuals to develop competencies appropriate for key roles, reinforcing standards of conduct, and tailoring training based on the needs of the role. It also includes mentoring to develop individual performance based on standards of conduct and expectations of competence that align the individual's skills and expertise with the entity's objectives and help personnel adapt to an evolving environment. In addition, we have previously reported that identifying where an agency's development process is lacking can help address barriers that hinder its ability to achieve meaningful results. We also reported that it is important for agencies to treat continuous learning as an investment in success as it can address employees' career development issues, skill-specific training needs, and provide managers with opportunities to identify where training and development is appropriate. Effective and consistent staff development is particularly important because FEMA has hired a large number of reservists over the past few years.
Our analysis of FEMA data shows that from June 1, 2017 to May 31, 2019, the agency hired over 3,200 reservists, which was 40 percent of the agency s entire reservist workforce as of June 1, 2019. Creating a staff development program that systematically and comprehensively addresses staff development through courses, on-the-job training and mentoring, performance evaluation, and ongoing developmental opportunities would provide better assurance that staff develop the skills and competencies needed to meet mission needs during field operations and help ensure the best results for disaster survivors. <5. Conclusions> The large-scale and concurrent disasters during the 2017 and 2018 disaster seasons highlighted the complex challenges facing FEMA s workforce. The agency deployed 14,684 and 10,328 personnel, respectively, at the peak of each of these disaster seasons, and the increased demand for its workforce is expected to continue. Without accurate and complete information on the knowledge, skills, and abilities of these staff members, field officials face challenges with efficiently providing disaster assistance, managing staff workload, and assigning responsibilities. FEMA has taken some initial actions to improve the information provided by its qualification and deployment systems, such as establishing additional controls in its qualification process. However, developing a plan to address the information challenges experienced during the 2017 and 2018 disaster seasons would be beneficial to enhance field leadership s ability to identify and leverage staff skills and, given the persistence of some of these challenges, help ensure they do not continue to affect FEMA s ability to support mission needs in future disasters. Further, in light of the staffing constraints that FEMA faces, it is important that the agency be able to assess how effectively it deploys available staff to disasters to meet field needs. Developing a mechanism to assess FEMA s deployment outcomes would provide officials in headquarters with critical information to monitor and evaluate the extent to which its deployment policies and strategies effectively place staff with the right skills in the right place at the right time to meet mission needs and take corrective actions if needed. Finally, creating a staff development program for its disaster workforce that addresses access to training, delivery of on-the-job training and mentoring, use of performance evaluations, and developmental opportunities when not deployed would help FEMA ensure more consistent and comprehensive development of the skills and abilities needed during deployments. Consistent and effective staff development is particularly important to help build the skills of staff who are qualified in the FEMA Qualification System but unable to proficiently perform their duties and develop the large number of staff that FEMA has recently hired to meet its new disaster workforce targets. <6. Recommendations for Executive Action> We are making the following three recommendations to FEMA: The FEMA Administrator should develop a plan with time frames and milestones and input from field leadership to address identified challenges that have hindered FEMA s ability to provide reliable and complete information to field leaders and managers about staff knowledge, skills, and abilities. 
(Recommendation 1) The FEMA Administrator should develop mechanisms, including collecting relevant data, to assess how effectively FEMA's disaster workforce was deployed to meet mission needs in the field. (Recommendation 2) The FEMA Administrator should create a staff development program for FEMA's disaster workforce that, at a minimum, addresses access to training, delivery of on-the-job training and mentoring, use of performance evaluations, and consistent developmental opportunities regardless of deployment status. (Recommendation 3) <7. Agency Comments and Our Evaluation> We provided a draft of this report to DHS for review and comment. DHS provided written comments, which are reprinted in appendix III and summarized below. In its comments, DHS concurred with our three recommendations and provided a number of ongoing and planned actions that it intends to leverage in addressing them. DHS also provided technical comments, which we incorporated as appropriate. With regard to our first recommendation for FEMA to develop a plan to address identified challenges with providing reliable and complete staffing information to the field, DHS reiterated some of the steps described in this report that FEMA has taken to improve the coach-and-evaluator program. DHS noted that FEMA plans to engage field leaders on these initiatives to develop a plan to address identified challenges. DHS also reported that FEMA plans to increase training offerings and align its curriculum so that FEMA Qualification System status matches workforce capability. DHS anticipates these efforts will be completed by March 31, 2021. While these are positive initial steps, they focus solely on the coach-and-evaluator program and staff training. Our report identified a number of complex and interrelated challenges with the agency's qualification and deployment processes that hindered FEMA's ability to provide reliable information to field officials about staff members' skills and abilities, including their qualifications, specialized skillsets, and experience within and across program areas. As such, in developing the plan we recommended, it will be important for FEMA to take a comprehensive approach and consider solutions that may cut across multiple systems and processes. We will monitor DHS's and FEMA's efforts in this area to assess the extent to which they fully implement our recommendation. With regard to our second recommendation for FEMA to develop mechanisms to assess how effectively FEMA's disaster workforce was deployed to meet mission needs in the field, DHS reiterated the actions described in this report that FEMA took to establish new force structure targets for its incident management workforce. DHS also reported that FEMA plans to convene subject matter experts to develop mechanisms that demonstrate how effectively FEMA's disaster workforce deploys to meet mission needs in the field, which are expected to be completed by March 31, 2021. When they are complete, we will assess the mechanisms to determine the extent to which they address our recommendation. Regarding our third recommendation for FEMA to create a staff development program, DHS reiterated some of the actions FEMA has taken to develop its disaster workforce that were described in this report. Our report identified recurrent challenges with FEMA's efforts to develop staff through training courses, on-the-job training and mentoring, and performance evaluations and noted that the agency's current and planned efforts do not fully address these challenges.
In creating the staff development program we recommended, it is important for FEMA to consider how its overall control environment and the initiatives it puts in place are coordinated to ensure staff receive comprehensive and consistent development to build the skills needed during disaster field operations. DHS anticipates that FEMA s efforts to implement our recommendation will be completed by March 31, 2021. At that time, we will assess the agency s actions to determine the extent to which they address the intent of our recommendation. We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, the FEMA Administrator, and other interested parties. If you or your staff have any questions about this report, please contact me at (404) 679-1875 or curriec@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology This report addresses (1) how the Federal Emergency Management Agency s (FEMA) disaster workforce is qualified and deployed, and workforce staffing levels during the 2017 and 2018 disaster seasons; (2) how effective FEMA s qualification and deployment processes were during the 2017 and 2018 disaster seasons in helping ensure workforce needs were met in the field; and (3) the extent to which FEMA s disaster workforce receives staff development to enhance skills and competencies to support the agency s disaster missions. Our review focused on FEMA s incident management workforce, which is composed of FEMA staff who deploy to disaster sites. We defined the 2017 and 2018 disaster seasons as the time periods from August 23, 2017 through January 31, 2018, and September 7, 2018 through November 25, 2018. The 2017 dates represent the start of the FEMA incident period for Hurricane Harvey through the end of the incident period for the California wildfire season. The 2018 dates represent the start of the FEMA incident period for Hurricane Florence through the end of the incident period for the California wildfires. To address all three objectives, we (1) analyzed documentation and data on incident management workforce qualification, deployment, staffing levels, and development; (2) conducted focus groups with members of FEMA s incident workforce across a range of employee types permanent full-time employees, Cadre of On-Call Response/Recovery Employees (CORE), Incident Management CORE, reservists and local hires; and (3) interviewed FEMA officials in headquarters and field and regional offices. We compared the results of our analysis and the information we gathered with Standards for Internal Control in the Federal Government, The Standard for Program Management, FEMA strategic documents and guidance, and our prior reports on strategic human capital management and strategic training and development. <8. Analysis of FEMA Workforce Documents and Data> We analyzed documentation on how FEMA s incident management workforce is qualified, deployed, and developed. Documentation included the agency s 2017 Incident Management Handbook, 2015 CORE Program Manual, 2017 Reservist Program Directive, 2015 and 2019 FEMA Qualification System guides, 2019 Coach-and-Evaluator Program Directive, coach-and-evaluator training materials, 2014 Incident Workforce Deployment Directive, and 2019 Deployment Guide. 
In addition, we analyzed FEMA s 2018-2022 Strategic Plan, 2017 Hurricane Season After-Action Report, and documentation on FEMA s staffing targets for its incident management workforce. We analyzed data from FEMA s Deployment Tracking System to determine incident management staffing levels, the number of staff deployed to a disaster, the number of incident management staff that had a coach-and-evaluator assigned, and the ratio of managers to incident management staff. We also analyzed data FEMA provides to the National Finance Center to determine the number of new staff the agency hired. To assess the reliability of the data, we interviewed officials at FEMA headquarters about their data quality control procedures and reviewed documentation about these data systems. For the Deployment Tracking System, we also conducted electronic testing and reviewed the data for obvious errors and omissions. We found the data to be sufficiently reliable for the purposes of this report. <9. Focus Groups with Incident Management Staff Members> As shown in table 4, to obtain perspectives on how effectively FEMA qualifies, deploys, and develops its disaster workforce, we conducted 17 focus groups with a total of 129 participants who were deployed in incident management positions during the 2017 disaster season, and in some cases, the 2018 disaster season. The focus group locations were selected based on where staff members who were deployed during the 2017 disaster season were located at the time of our review. We also selected these locations to reflect where the 2017 disasters occurred and to obtain variation in geographic location to the extent possible. Participants were selected using a stratified random sample from a universe of incident management staff members who were deployed to a federally declared disaster during the 2017 hurricane and wildfire season. For each employee type, we conducted separate focus groups with participants in supervisory and nonsupervisory positions so they could speak more freely. We also selected participants to obtain a mix of staff from different cadres and a mix of staff that were qualified and not qualified in the FEMA Qualification System. If selected staff members indicated they could not attend, we replaced them with the next individual on our randomized list who had similar attributes. There were between three to 11 participants in each focus group, with an average of eight in each. These focus group discussions were guided by a moderator who used a structured list of discussion topics. The topics focused on staff members perspectives on, and experiences with, the level of staffing and skill sets their team had, how they were trained and developed, and the FEMA Qualification System and its qualification determinations. Supervisors were also asked about their staff s skill sets, training, and qualification status. Focus group sessions were audio recorded and transcribed. We evaluated the transcripts using systematic content analysis to identify key themes on how effective FEMA s qualification and deployment processes were in helping to meet field needs and the extent to which staff members received staff development to enhance their skills and competencies. An analyst coded the transcripts and a second analyst validated the coding. Any discrepancies were resolved by both analysts agreeing on the coding of the associated statement by a participant. If needed, a third analyst adjudicated any continued disagreement between coders. 
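The stratified selection approach described above can be illustrated with a brief sketch. This example is purely hypothetical: the record fields, strata, group size, and random seed are assumptions for illustration and do not represent GAO's actual sampling tools or FEMA personnel data.
```python
import random

# Illustrative only: hypothetical deployment roster records, not actual FEMA data.
deployed_staff = [
    {"id": 1, "employee_type": "Reservist", "supervisory": False, "cadre": "Individual Assistance", "qualified": True},
    {"id": 2, "employee_type": "PFT", "supervisory": True, "cadre": "Logistics", "qualified": False},
    {"id": 3, "employee_type": "CORE", "supervisory": False, "cadre": "Public Assistance", "qualified": True},
    # ... additional staff members deployed during the 2017 disaster season ...
]

def stratified_selection(records, invitees_per_group, seed=2017):
    """Group records into strata (employee type and supervisory status), randomize
    the order within each stratum, and split each stratum into invitees and replacements."""
    rng = random.Random(seed)
    strata = {}
    for rec in records:
        strata.setdefault((rec["employee_type"], rec["supervisory"]), []).append(rec)
    selections = {}
    for stratum, members in strata.items():
        rng.shuffle(members)  # randomized list within the stratum
        selections[stratum] = {
            "invitees": members[:invitees_per_group],      # initial focus group invitees
            "replacements": members[invitees_per_group:],  # next similar individuals if invitees decline
        }
    return selections

focus_groups = stratified_selection(deployed_staff, invitees_per_group=8)
```
In practice, as described above, separate groups were formed for supervisory and nonsupervisory staff within each employee type, and invitees who declined were replaced with the next individual on the randomized list who had similar attributes.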
The results of our focus group analysis are not generalizable to all incident management staff members. However, they provided valuable first-hand experiences with staffing levels and skill sets during disasters, FEMA s deployment processes, the FEMA Qualification System and the reliability of its qualification designations, and how well staff were trained and developed. <10. Interviews with FEMA Officials in Field and Regional Offices and Headquarters> We conducted site visits to FEMA s joint field offices in Columbia, South Carolina; Durham, North Carolina; Guaynabo, Puerto Rico; and Tallahassee, Florida, to obtain officials perspectives on staffing levels and skill sets, the effectiveness of FEMA s qualification and deployment processes and systems in meeting field needs, and the extent to which FEMA s deployed staff receive coaching and development to enhance their skills and competencies. Officials we interviewed at the joint field offices included federal coordinating officers; chiefs of staff; training managers; and managers in the Individual Assistance, Public Assistance, Hazard Mitigation, and Logistics cadres, among others. We also interviewed an official who was previously a federal coordinating officer at a federally-declared wildfire in California. In addition, we interviewed leadership and managers for FEMA regions VI, VIII, and X to obtain the perspectives of regional officials on the topics above. In each of the regions, we interviewed the regional administrator and managers in both the response and recovery divisions, among others. We selected the joint field offices and regions to conduct interviews based on our focus group locations and to obtain variation in geographic location and disaster activity. We conducted systematic content analysis of this work using the same approach we used to analyze the focus groups. The results from this analysis are not generalizable to all field and regional officials, but provide important perspectives from leadership and managers on FEMA s mechanisms to qualify, deploy, and develop incident management staff. In addition, we conducted interviews with multiple senior officials in FEMA headquarters. For example, we interviewed officials in the Field Operations Directorate and management in the Individual Assistance, Public Assistance, and Hazard Mitigation cadres to obtain information about how FEMA s incident management workforce and staff in their cadres are qualified, deployed, and developed, and how the Deployment Tracking System and the FEMA Qualification System are used for these purposes. We also interviewed officials in the Office of the Chief Component Human Capital Officer to learn how FEMA trains and develops this workforce. We obtained information from these officials on the actions FEMA has taken to address the challenges we identified through our focus groups, interviews with field and regional officials, and data analysis. We conducted this performance audit from June 2018 to May 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Federal Emergency Management Agency (FEMA) Cadre List and Descriptions Appendix III: Comments from the U.S. 
Department of Homeland Security Appendix IV: GAO Contact and Staff Acknowledgments <11. GAO Contact and Staff Acknowledgments> Chris Currie, (404) 679-1875 or curriec@gao.gov In addition to the contact named above, Kathryn Godfrey (Assistant Director), Johanna Wong (Analyst-in-Charge), James Cook, Lawrence Crockett, Elizabeth Dretsch, Ricki Gaber, Eric Hauswirth, Tracey King, Ronald La Due Lake, Rebecca Mendelsohn, Amanda Miller, and Adam Vogt made key contributions to this report. Why GAO Did This Study
During the 2017 and 2018 disaster seasons, several large-scale disasters created an unprecedented demand for FEMA's workforce. FEMA deployed 14,684 and 10,328 personnel at the peak of each of these seasons and reported staffing shortages during the disasters. GAO was asked to review issues related to the federal response to the 2017 disaster season.
This report addresses (1) how FEMA's disaster workforce is qualified and deployed, (2) how effective FEMA's qualification and deployment processes were during the 2017 and 2018 disaster seasons in ensuring workforce needs were met in the field, and (3) the extent to which FEMA's disaster workforce receives staff development to enhance skills and competencies. GAO analyzed documentation and data on incident workforce qualification and deployment; conducted 17 focus groups with 129 staff members; and interviewed FEMA officials in headquarters, field, and regional offices.
What GAO Found
The Federal Emergency Management Agency (FEMA) has established mechanisms to qualify and deploy staff to disasters. For example, the FEMA Qualification System tracks training and task performance requirements for disaster workforce positions and has a process to designate staff as qualified in their positions once they have completed these requirements. FEMA's deployment process uses an automated system to deploy staff members to disasters that match field requests for positions and proficiency levels. The process depends on the agency's qualification and deployment systems to identify staff qualification status and skillsets to meet field needs.
However, FEMA's qualification and deployment processes did not provide reliable and complete staffing information to field officials to ensure its workforce was effectively deployed and used during the 2017 and 2018 disaster seasons. Specifically, GAO's focus groups with over 100 incident staff members and interviews with field and regional officials indicate that disaster personnel experienced significant limitations with qualification status matching performance in the field, due in part to challenges with how staff are evaluated through the qualification process. In all focus groups with applicable incident personnel, participants cited issues with staff members who were qualified in the FEMA Qualification System not having the skills or experience to effectively perform their positions. For example, one participant described supervising staff members who were qualified in the system but did not know the eligibility requirements for applicants to receive housing assistance, or what information needed to be included in the applicant's file. In addition, participants in the majority of the focus groups reported challenges with using FEMA's deployment processes to fully identify staff responsibilities, specialized skillsets, and experience. FEMA headquarters officials acknowledged the identified information challenges but said they have not developed a plan to address them in part because of competing priorities. Developing a plan to address identified challenges with providing reliable staffing information to field officials would enhance FEMA's ability to use staff as flexibly and effectively as possible to meet disaster needs.
Further, FEMA's disaster workforce experienced challenges with receiving staff development through the agency's existing methods to enhance the skills and competencies needed during disaster deployments, challenges that FEMA headquarters officials acknowledged. Specifically, GAO's focus groups and interviews indicate that disaster personnel encountered challenges related to the availability of courses, providing and receiving on-the-job training and mentoring, and consistently receiving performance evaluations. For example, in 10 of 17 focus groups, participants cited barriers to taking courses that in their view would help them better perform their jobs. In addition, participants in seven focus groups stated that they did not receive coaching or feedback on the job. Relatedly, FEMA data show that at the start of deployments during the 2017 and 2018 disaster seasons, 36 percent of staff did not have an official assigned to coach and evaluate task performance, the primary mechanism the agency depends on for coaching. Creating a staff development program would help better ensure FEMA's disaster workforce develops the skills and competencies needed to meet mission needs in the field.
What GAO Recommends
GAO is making three recommendations, including that FEMA develop (1) a plan to address identified challenges that have hindered its ability to provide reliable information to field officials about staff skills and abilities and (2) a staff development program for its disaster workforce that addresses training access, delivery of on-the-job training, and other development methods. The Department of Homeland Security concurred with GAO's recommendations.
<1. Background> <1.1. VAWA Self-Petition Eligibility Requirements and Confidentiality Protections> To adjudicate a self-petition filed by a foreign national claiming to have suffered domestic abuse, USCIS adjudicators determine whether the self-petitioner has established the statutory eligibility requirements. A foreign national satisfies the applicable eligibility requirements by demonstrating that he or she (1) has a qualifying relationship with a U.S. citizen or LPR, such as a marriage; (2) was battered or subjected to extreme cruelty by his or her U.S. relative during the qualifying relationship; (3) is residing or has resided with the abusive U.S. citizen or LPR during the qualifying relationship; and (4) is of good moral character. A foreign national filing a VAWA self-petition as an abused spouse is also required to demonstrate that he or she entered into or intended to enter into the marriage in good faith and not in order to evade U.S. immigration law. For a good moral character determination, the petitioner typically should submit a local or state police clearance letter or a state-issued criminal background check from each place where he or she has lived for 6 months or more in the past 3 years immediately prior to filing the VAWA petition. The burden of proof is on the self-petitioner to demonstrate, by a preponderance of the evidence, that he or she has satisfied the statutory eligibility requirements. Considered evidence may include, for example, a criminal background check to establish the good moral character of a self-petitioner or testimony in the form of an affidavit to establish abuse on the part of the U.S. citizen or LPR relative. If the self-petition is approved, the point at which the petitioner will be able to apply for and obtain LPR status will depend on whether he or she is an immediate relative of a U.S. citizen (i.e., U.S. citizen's unmarried child under age 21, spouse, or, where the citizen is at least 21, their parent), or other relative of a U.S. citizen or LPR, who, unlike immediate relatives, are subject to annual immigration limits. Under U.S. immigration law, there are confidentiality protections for VAWA self-petitioners. Any information about the self-petitioner is considered confidential and, with certain exceptions, officials from DHS are prohibited from releasing any information about the petitioner, including that the petitioner has sought immigration relief. In addition, adjudicators are prohibited from using information provided solely by the alleged abuser to make an adverse determination of admissibility or deportability against self-petitioners, unless such adverse information has been corroborated through independent sourcing consistent with departmental policy. Finally, according to DHS policy, DHS officials typically do not take enforcement actions, such as executing an order of removal, against abuse victims when they are present at certain locations, such as domestic violence shelters, victims' services programs, and community-based organizations. <1.2. Overview of the Self-Petition Process> The self-petition adjudication process begins when a foreign national submits a Form I-360, Petition for Amerasian, Widow(er), or Special Immigrant, with supporting evidence, to USCIS. USCIS's Vermont Service Center then begins the pre-adjudication phase and takes several actions.
First, the service center makes a prima facie determination, which is an initial review of self-petition filings, to determine whether the self-petitioner has submitted evidence that, on its face, is responsive to each of the eligibility requirements noted above, in order to allow qualified aliens access to certain public benefits, if needed. If the self-petitioner has not submitted evidence to address each of the eligibility requirements, USCIS policy directs the service center to issue a request for evidence to the self-petitioner to provide additional evidence for the full adjudication of the petition. In addition, the service center conducts a safe address assessment on the self-petition to identify the address to be used for future communications with the self-petitioner to protect the self-petitioner's confidentiality and safety. Finally, the service center's Background Check Unit uses the TECS database to determine whether the self-petitioner is connected to any administrative or criminal investigations, is the subject of a national security concern, or is a public safety threat. The Vermont Service Center also checks the TECS database to determine whether any derogatory information exists on the foreign national that may impact the submitted self-petition. Figure 1 provides an overview of the USCIS self-petition process. To begin the adjudication phase, an adjudicator incorporates a self-petition filing into the self-petitioner's Alien file. Adjudicators stated they review the evidence available in the self-petition filing and the Alien file and generally take one of three actions: approve, deny, or refer the petition for an administrative investigation. Adjudication may also be withheld. Approve. If a USCIS adjudicator determines that the evidence submitted by the self-petitioner satisfies the eligibility requirements noted above, the self-petition is approved. Once USCIS approves a self-petition, DHS will generally defer any removal action against the individual, as he or she goes through the process of applying for LPR status. According to USCIS data, of the 82,357 self-petitions adjudicated from fiscal year 2009 through fiscal year 2018, 72 percent were approved. Self-petitioners who obtain LPR status are not eligible for U.S. citizenship until they have been an LPR in the United States for at least 3 years. Deny. An adjudicator may deny a self-petition if the petitioner has not demonstrated that he or she is more likely than not eligible for petition approval, considering all credible evidence provided by the self-petitioner. In some circumstances, an adjudicator will issue a request for evidence to the petitioner to provide an opportunity for the petitioner to send additional information or documents. In response to this request, the petitioner has an opportunity to provide additional evidence; if that evidence does not sufficiently demonstrate that the petitioner meets the eligibility requirements, or additional evidence is not provided, USCIS may deny the self-petition. In other circumstances, an adjudicator will issue a notice of intent to deny to the self-petitioner in cases where it does not appear likely that the self-petitioner could overcome the deficiencies. This provides the self-petitioner an opportunity to respond. If the self-petitioner's response does not sufficiently demonstrate that the petitioner meets the eligibility requirements or a response is not provided, the self-petition is subsequently denied.
An adjudicator may also deny a self-petition if the petitioner abandons his or her self-petition or withdraws the self-petition by providing notice to USCIS in writing. According to USCIS data, among self-petitions adjudicated from fiscal year 2009 through fiscal year 2018, about 28 percent were denied. Of that, about 3 percent were withdrawn, revoked, or closed administratively. If a self-petition is denied and the self-petitioner has other valid immigration status, he or she may remain in the United States. Otherwise, the self-petitioner may be placed in removal proceedings. Adjudication withheld. An adjudicator may also withhold adjudication of a visa petition or other application if there is an ongoing investigation involving eligibility, in connection with a benefit request, and disclosure of information to the applicant or petitioner concerning the adjudication would prejudice the investigation. If adjudication is withheld from a self-petition, USCIS takes no further adjudicative action at that time, pending completion of the related investigation. Refer a petition for an administrative investigation. In addition to approving or denying a self-petition, an adjudicator may refer a self-petition to CFDO for an administrative investigation in cases when an adjudicator suspects fraudulent activity within the self-petition. In such cases, CFDO completes an administrative investigation and returns a Statement of Findings to the adjudicator. The Statement of Findings indicates whether fraud was found, not found, or whether the administrative investigation was inconclusive in finding fraud. After reviewing the Statement of Findings, immigration officers stated the adjudicator continues the adjudication process for the self-petition and may ultimately approve or deny the self-petition. <1.3. Self-Petition Filings> According to USCIS data, the total number of VAWA self-petitions filed by foreign nationals increased from 7,360 in fiscal year 2014 to 12,801 in fiscal year 2018, an increase of about 74 percent. The number of filings by spouses, a subset of the above petitioners, increased from 7,131 in fiscal year 2014 to 11,213 in fiscal year 2018, an increase of 57 percent. Filings by spouses represented about 93 percent of self-petition filings from fiscal year 2014 to fiscal year 2018. See table 1. <1.4. Self-Petition Fraud> Immigration benefit fraud involves the willful or knowing misrepresentation of material facts for the purpose of obtaining an immigration benefit without lawful entitlement. According to USCIS officials, self-petition fraud is a form of immigration benefit fraud which can occur in a number of ways, such as through document fraud, including submission of falsified affidavits, or making false statements material to the adjudication. For example, a self-petitioner may submit a fraudulent marriage certificate with his or her self-petition in an attempt to establish a qualifying relationship with a U.S. citizen or LPR. Or a self-petitioner may submit a fraudulent affidavit falsely attesting that he or she was battered or subjected to extreme cruelty during the qualifying relationship with the U.S. citizen or LPR. For the purposes of this report, self-petition fraud is construed broadly to include any misrepresentation of material fact(s), such as making false statements, submitting forged or falsified documents, or conspiring to do so, in support of a VAWA self-petition.
USCIS may deny, or revoke approval of, a self-petition upon determining that the self-petitioner is, or was, not eligible for petition approval by a preponderance of evidence, due to fraud material to the adjudication process. While it is unlawful to fraudulently obtain approval of an immigration benefit, U.S. immigration law does allow VAWA self- petitioners who may have committed such fraud to retain eligibility for LPR status when they or their family would otherwise suffer extreme hardship. <1.5. GAO s Fraud Risk Management Framework> GAO s A Framework for Managing Fraud Risks in Federal Programs (Fraud Risk Framework) is a comprehensive set of leading practices that serves as a guide for program managers to use when developing efforts to combat fraud in a strategic, risk-based manner. The framework describes leading practices for establishing an organizational structure and culture that are conducive to fraud risk management; assessing fraud risks; designing and implementing controls to prevent and detect potential fraud; and monitoring and evaluating to provide assurances to managers that they are effectively preventing, detecting, and responding to potential fraud. Under the Fraud Reduction and Data Analytics Act of 2015, agencies are required to establish financial and administrative controls that are aligned with the Fraud Risk Framework s leading practices. In addition, guidance from the Office of Management and Budget affirms that managers should adhere to the leading practices identified in the framework. The Fraud Risk Framework includes control activities that help agencies prevent, detect, and respond to fraud risks, as well as structures and environmental factors that influence or help managers achieve their objectives to mitigate fraud risks. The framework consists of four components for effectively managing fraud risks: commit, assess, design and implement, and evaluate and adapt. Leading practices for each of these components include the following: Commit: create an organizational culture to combat fraud at all levels of the agency, and designate an entity within the program office to lead fraud risk management activities. Assess: assess the likelihood and impact of fraud risks and determine risk tolerance and examine the suitability of existing controls and prioritize residual risks. Design and implement: develop, document, and communicate an antifraud strategy, focusing on preventive control activities. Evaluate and adapt: collect and analyze data from reporting mechanisms and instances of detected fraud for real-time monitoring of fraud trends, and use the results of monitoring, evaluations, and investigations to improve fraud prevention, detection, and response. Figure 2 provides an overview of the Fraud Risk Framework and its control activities. <2. USCIS Has Established a Culture and Structure to Manage Fraud Risks for the Self-Petition Program but Has Not Implemented Other Fraud Risk Management Practices> <2.1. USCIS Has Established an Antifraud Culture and a Dedicated Entity to Manage Fraud Risks in the Self-Petition Program> USCIS has an antifraud culture and a dedicated entity for managing fraud risks in the self-petition program. The first component of GAO s Fraud Risk Framework commit provides that agencies should commit to combating fraud by creating an organizational culture and structure conducive to fraud risk management. 
In particular, agencies should create an organizational culture to combat fraud at all levels, by demonstrating a senior-level commitment to integrity and combatting fraud, and by involving all levels of the agency in setting an antifraud tone that permeates the organizational culture. The first component of the Fraud Risk Framework also calls for an agency to create a structure with a dedicated entity to lead fraud risk management activities. Consistent with the Fraud Risk Framework, we found USCIS has promoted an antifraud culture in several ways. It has demonstrated a senior-level commitment to combating fraud and involvement at all levels. Within the Vermont Service Center, senior officials who oversee the VAWA self-petition unit, as well as adjudicators who review petitions, are evaluated on activities related to managing fraud risks in the self-petition process. For example, according to performance appraisal documentation, senior officials are evaluated on their ability to consistently identify immigration fraud. Specifically, experienced adjudicators and supervisors stated that they are evaluated on their ability to review fraud referral sheets submitted by adjudicators to determine whether the adjudicator has appropriately identified and described suspected fraudulent activity in a self-petition. In addition, senior officials told us they independently review a sample of self-petitions adjudicated during each fiscal year for quality assurance purposes, to include identification of suspected fraud. Adjudicators are evaluated by their supervisors on their ability to identify fraud within the self-petition adjudication process, which includes identifying suspected fraudulent activities in self-petitions, submitting fraud referral sheets to their supervisors, and collaborating with CFDO on resolving self-petition adjudications where suspected fraudulent activity has been identified. In addition to being evaluated on their ability to identify fraud, officials have implemented several activities that contribute to an antifraud tone. For example, officials at the Vermont Service Center stated that VAWA self-petition unit adjudicators and CFDO immigration officers collaborate and share information to combat potential fraud through activities that include monthly meetings, regular contact through their co-location, and an electronic bulletin board. Officials stated that during monthly meetings, immigration officers answer questions from adjudicators on fraudulent schemes and activities uncovered in their administrative investigations of self-petitions. In addition, adjudicators we spoke to stated that because they are co-located with CFDO, they have direct access to immigration officers to obtain feedback on identifying suspected fraudulent self- petitions prior to submitting a formal fraud referral sheet. Finally, CFDO maintains an electronic bulletin board for sharing information with adjudicators on new potentially fraudulent activities they have identified through their administrative investigations. Adjudicators we spoke to stated that the bulletin board assists with identifying fraud indicators during adjudication. We also found that USCIS has created a dedicated entity to lead fraud risk management activities for the self-petition program. According to USCIS officials, the CFDO unit at the Vermont Service Center, in conjunction with FDNS headquarters, is that dedicated entity. 
Within the Vermont Service Center, CFDO officials stated the CFDO unit consists of three immigration officers and a supervisory immigration officer who have defined antifraud responsibilities, such as conducting administrative investigations of self-petition filings that are referred by adjudicators who suspect fraudulent activity. In addition, the immigration officers are responsible for liaising with law enforcement entities, such as ICE HSI, to provide logistical support in law enforcement matters. According to the officials, CFDO and FDNS fulfill other fraud risk management responsibilities described in GAO s Fraud Risk Framework, including leading or assisting with fraud training for adjudicators. <2.2. USCIS Has Not Fully Assessed Fraud Risks in the Self-Petition Program> While USCIS has taken some steps to assess fraud risks in the self- petition program, the agency has not conducted a formal assessment of such program risks. The second component of the Fraud Risk Framework assess calls for federal managers to plan regular fraud risk assessments, and to assess risks to determine a fraud risk profile. A fraud risk profile is the summation of key findings and conclusions from a fraud risk assessment, including the analysis of the types of internal and external fraud risks, their perceived likelihood and impact, managers risk tolerance, and the prioritization of risks. The fraud risk assessment should be tailored to the program, and in identifying and assessing risks to determine the fraud risk profile, the focus should be on likelihood and impact of inherent fraud risks. This means not only fraud risks already known through program experience, but also other fraud risks that may not yet have been experienced but can be identified, based on the nature of the program. Such risks can be either internal or external to the program. USCIS has not assessed fraud risks and determined a fraud risk profile for the self-petition program, as USCIS officials told us that they were unfamiliar with the concept of a comprehensive fraud risk management process, as provided in the Fraud Risk Framework. Instead, USCIS officials said they viewed fraud risk management more practically, from the standpoint of adjudicating self-petitions and referring suspected fraudulent activity to CFDO. As part of those efforts, CFDO staff review fraud referrals to determine potential fraud schemes and trends that may exist in the self-petition program. While these are positive steps, they do not constitute an assessment of program fraud risks that would position USCIS to develop a fraud risk profile for the self-petition program. More specifically, the Fraud Risk Framework calls for agencies to identify inherent fraud risks of a program, examine the suitability of existing fraud controls, and then to prioritize residual fraud risks that is, risks remaining after antifraud controls are adopted. According to USCIS officials we spoke with, the self-petition program is vulnerable to fraud. For example, USCIS officials stated that they have seen cases in which self-petitioners submitted false or forged leases in an attempt to show they resided with the alleged abuser during a period of abuse, as well as foreign marriage or divorce certificates later found to be falsified in an attempt to establish that the self-petitioner entered into a marriage with a U.S. citizen in good faith. 
While these are examples of individual fraudulent activities, USCIS officials cannot be assured they have identified inherent fraud risks to the program without undertaking a fraud risk assessment as provided in the Fraud Risk Framework. USCIS officials we spoke with acknowledged the benefits of conducting a fraud risk assessment and noted that a formal analysis of self-petition fraud referrals and administrative investigations could help to better understand the extent of fraud risks that exist in the self-petition program. Further, the Fraud Risk Framework highlights the need for fully assessing fraud risks when there are changes to the program or operating environment conditions that apply in the case of the self-petition program. USCIS data indicate that the number of self-petitions filed has been growing in the past 5 fiscal years, and at the end of fiscal year 2018, USCIS had received 12,801 self-petitions and had over 19,000 self- petitions pending adjudication. In this environment, identification of inherent fraud risks, coupled with assessments of the likelihood and expected impact of those risks, could help USCIS better target its fraud prevention and detection efforts. Planning and conducting regular fraud risk assessments, as provided in the Fraud Risk Framework, would better position USCIS to identify fraud risks in the self-petition program. Regularly assessing fraud risks in the self-petition program to determine a fraud risk profile would also help USCIS better determine the extent to which the agency has designed and implemented adequate fraud prevention controls. <2.3. USCIS Has Established Controls to Combat Fraud but Has Not Developed a Risk-Based Antifraud Strategy Tailored to the Self-Petition Program> USCIS has controls designed to help prevent and detect fraud in the self- petition program, but has not developed a risk-based antifraud strategy for the program consistent with the Fraud Risk Framework. The third component of the Fraud Risk Framework design and implement calls for agencies to design and implement a strategy with specific control activities to address risks identified in the fraud risk assessment. In particular, managers should develop and document an antifraud strategy based on the fraud risk profile (developed as part of the fraud risk assessment of the second component of the Framework), and design and implement specific control activities to prevent and detect fraud. The basis for these activities should be the prioritized residual risks identified earlier, meaning that the agency adopts a risk-based antifraud control strategy. This approach is in line with Standards for Internal Control in the Federal Government, which requires managers to design a response to analyzed risks. USCIS has instituted some fraud controls for the self-petition program, particularly controls related to preventing and detecting fraud. USCIS s specific fraud control activities include, for example, the Vermont Service Center Background Check Unit conducting TECS checks on foreign nationals who submit self-petitions during the pre-adjudication stage to determine whether the self-petitioner is connected to any administrative or criminal investigations, is the subject of a national security concern, or is a public safety threat. In addition, USCIS has a process for adjudicators to refer petitions when they suspect fraudulent activities to CFDO for administrative investigation. 
Specifically, USCIS officials stated that in cases where an adjudicator suspects potential fraud in a self-petition, the adjudicator is to complete and submit a supervisor-approved fraud referral sheet to CFDO. After receiving a referral, the center is to determine whether the referral has sufficient information to warrant an administrative investigation. CFDO also provides fraud training to adjudicators. While these controls help USCIS prevent and detect potential fraud in the self-petition program, USCIS has not developed and implemented a risk-based antifraud strategy based on a fraud risk assessment as provided under the Fraud Risk Framework. This is because, as noted earlier, the agency has not undertaken an assessment of inherent program fraud risks. USCIS officials told us that even with adjudicator and CFDO staff experience with identifying and investigating potential fraud in self-petitions, unknown fraud risks may nevertheless remain. USCIS officials acknowledged the benefits of conducting a fraud risk assessment, such as designing and implementing new control activities, as well as revising existing controls, if they determine that controls are not effectively designed to reduce the likelihood or impact of an inherent fraud risk to a tolerable level. USCIS officials told us that adjudicators and CFDO staff conducting administrative investigations have identified trends in fraudulent activities; however, officials also stated that it is difficult for staff to identify fraud risks that are present but that are not identified through adjudication or investigation. Examining antifraud controls, and adjusting them as necessary based on an antifraud strategy, would help the Vermont Service Center to better ensure that its controls are addressing fraud risks in the self-petition program, including inherent risks. <2.4. USCIS Has Plans to Develop Tailored Antifraud Training For the Self-Petition Program> USCIS is developing training on fraud-related issues for the self-petition program. The third component of the Fraud Risk Framework, discussed earlier, identifies training as a leading antifraud practice and as an antifraud control to increase awareness of possible fraud schemes. Training and education intended to increase fraud awareness among managers and employees, among others, can serve as a preventive measure to help create a culture of integrity and compliance within the program. Increasing fraud awareness can also enable managers and employees to better detect potential fraud. To achieve these benefits, the Fraud Risk Framework notes that a leading practice is to require all employees to attend training upon hiring and on an ongoing basis thereafter. Training should convey fraud-specific information that is tailored to the program and its fraud risk profile. Specifically, it should include information on fraud risks, such as examples of specific types of fraud that employees are likely to encounter, and information on how to identify fraud schemes. USCIS has a training program in place for new adjudicators that provides general information on identifying potential fraudulent activities as part of any adjudication and has plans to develop new fraud awareness training tailored specifically to the self-petition program. According to CFDO officials, USCIS provides general training to new adjudicators during a 6-day classroom training program.
During this training, new adjudicators are instructed on eligibility and evidence requirements across several application types, including the VAWA self-petition. The training includes information on eligibility requirements, supporting documentation needed, and evidentiary requirements for these applications. Application forms are used to teach adjudicators fraud identification, and adjudicators are given a list of common fraud indicators to assist when reviewing applications, according to adjudicators. This training also includes a 2-hour presentation on the VAWA self-petition program where general fraud concepts, such as document fraud, are discussed. While adjudicators receive general training when hired, USCIS had not provided tailored antifraud training on the self-petition prior to fiscal year 2019. Adjudicators we spoke to noted that fraud schemes continue to evolve, and that fraud schemes and tactics are becoming more sophisticated and thus more difficult to identify during adjudication of VAWA self-petitions. Adjudicators we spoke to also noted that ongoing training that included information on evolving fraud schemes and tactics specific to the self-petition program would help increase their ability to identify potentially fraudulent activities. Further, adjudicators noted that additional training on how to identify potential fraud when a petitioner submits an attested affidavit would help to identify potentially fraudulent self-petitions. In response to our discussions and adjudicator feedback, a senior CFDO official stated that they recognized the need for fraud training, including training tailored to the self-petition program, and planned to hire an additional four immigration officers in fiscal year 2019 to increase fraud training for adjudicators, among other duties. In response to discussions we had during our review, officials at the center also stated they planned to develop and implement tailored fraud training for the self-petition program by the end of fiscal year 2019. CFDO officials stated they also plan to continually update the training based on any new potentially fraudulent activity identified in the self-petition program. <2.5. USCIS Has Not Used Data Analytics as an Antifraud Tool for the Self- Petition Program> USCIS has data analytics capabilities, but has not applied these capabilities as an antifraud tool for the self-petition program. The third component of the Fraud Risk Framework, discussed earlier, cites data analytics as a leading practice in developing specific control activities to prevent and detect fraud in particular, to mitigate the likelihood and impact of fraud. In addition, Standards for Internal Control in the Federal Government provide for ongoing monitoring of operations and internal controls, and data analytics can aid in this task. According to the Fraud Risk Framework, data analytics can include a variety of techniques, such as data mining (identifying suspicious activity or transactions, including anomalies, outliers, and other red flags, within data) and data matching (comparing information in one source to another to identify inconsistencies), which can enable programs to identify potential fraud. Further, predictive analytics can identify particular types of behavior, including fraud, before transactions are completed. According to USCIS officials, the agency has developed and uses data analytics capabilities as part of its efforts to identify and prevent fraud within immigration benefit programs. 
These officials said the agency has not had sufficient resources to pursue data analytics separately for each type of immigration benefit program. Thus, they stated that USCIS deploys its data analytics resources strategically across immigration benefit programs, based on factors including, among other things, the volume of filings or applications for specific benefit programs, the amount of data available for electronic analysis, and whether the type of application is one that can lead to a change in immigration status, such as asylum or permanent residency. Under this approach, for example, USCIS officials stated that marriage and employment-based benefit programs are areas where there is a greater amount of electronic data available for analysis. USCIS s development and use of data analytics in other programs are positive actions in helping the agency in its efforts to prevent and detect fraud risks to immigration benefit programs. However, USCIS has not conducted a comprehensive assessment of fraud risks in the self-petition program to provide an understanding of the likelihood and impact of program risks and to help inform the level of resources USCIS should allocate to addressing those risks. Consistent with the Fraud Risk Framework, using data analytics capabilities in the self-petition program could help position USCIS to better identify and assess fraud risks in the program. Such data analysis does not by itself necessarily confirm the existence of fraud, but the use of data analytics could help USCIS to determine indicators of potential fraud. Further, consistent with the Fraud Risk Framework, this type of analysis can aid in decisions on prioritization of investigative resources. According to the Fraud Risk Framework, specific data analytic tests that are most effective in helping managers prevent or detect potential fraud vary by program because of the different fraud risks programs face. By using information on previously encountered fraud schemes or known fraud risks, managers can identify signs of fraud that may exist within their data. In the absence of an assessment of fraud risks in the self- petition program, we asked USCIS officials about fraud risks or schemes they have identified in the program and analyzed program data to identify examples of ways USCIS could use program data to better prevent or detect potential self-petition program fraud. As examples, we analyzed variables that generally serve to identify individuals, such as address and Social Security number, because multiple instances of the same identifier in program data can indicate misuse of personally identifying information. In addition, we examined other variables associated with self-petition filings and outcomes of self-petition adjudications, as trends in variables associated with denial outcomes, for example, can provide indicators of potential fraud. We offer the following examples not as illustrations of confirmed or even potential fraud, but rather to help illustrate the use of data analytics as a tool for helping to prevent and detect fraud in the self- petition program. For example, one area in which we identified multiple instances of the same variable was with addresses. While not necessarily indicative of fraud, our review of USCIS data showed that from fiscal year 2009 to January 2019, thousands of self-petition filings had addresses that were used in multiple self-petition filings. 
According to USCIS officials, this is not unexpected, and further research would be required to authoritatively explain the multiple address use we identified. The self-petition program also allows self-petitioners to use a safe address for communications, in an effort to ensure confidentiality in filing of the petition. According to USCIS officials, self-petitioners often use an assisting attorney's or representative's business address as their safe address. In prior work on other immigration benefits, we have highlighted where DHS officials have used multiple instances of the same address in program data to target investigative follow-up. Our analysis of data on the number of times unique addresses were used in filing self-petitions showed, for instance, that 37,201 filings had addresses used at least 10 times each from fiscal year 2009 to January 2019. In some cases, an address was used hundreds of times: in a group of 6,302 self-petitions, there were 31 instances in which addresses were used 100 or more times. Table 2 provides examples of multiple uses of addresses, which we selected for illustrative purposes from among all the multiple uses of addresses we identified. It shows, for example, in the last row, that there was one unique address that was used 845 times in self-petition filings, all of which were separate filings. Thus, the total number of self-petitions involved with this address was 845. Another example of multiple instances of the same variable was identification numbers. In particular, our review of USCIS data showed that from fiscal year 2009 to January 2019, there were thousands of self-petition filings that used duplicative identifying numbers (Social Security numbers and Alien numbers). According to USCIS officials, as with multiple uses of the same address, further research would be required to authoritatively explain the multiple identification number use we identified. For example, according to USCIS officials, a foreign national parent and child may file separate self-petitions, resulting in multiple petitions using the same Social Security number. Also, it is common for people to file more than one self-petition if, for instance, they are able to obtain additional evidence after a decision is made. Our analysis of the number of times unique Social Security numbers were provided in self-petition filings, as well as unique Alien numbers, showed that for each, there were several thousand filings in which the numbers were used in multiple self-petition filings. In prior work, we have highlighted examples where multiple instances of the same Social Security number in program or payment data have indicated Social Security number misuse, such as multiple individuals using the same Social Security number for employment, and the use of Social Security numbers to create synthetic identities to obtain benefits for ineligible individuals using the Social Security numbers of eligible applicants. Table 3 provides examples of multiple uses of Social Security and Alien Registration numbers, selected for illustrative purposes from among all the multiple uses of identification numbers we identified. It shows, for example, in the last row, that there were 28 instances in which a unique Alien number was used in five different self-petition filings, all of which were separate filings. Thus, the total number of self-petitions involved with these 28 Alien numbers was 140.
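As a rough illustration of how this kind of duplicate-identifier analysis could be automated, the following sketch counts how many distinct filings share the same address, Social Security number, or Alien number and flags values that recur above a threshold. It is only a sketch under stated assumptions: the column names, the input file, and the threshold are hypothetical, and any flagged value is an indicator for further review rather than evidence of fraud.

```python
# Illustrative sketch of duplicate-identifier detection for self-petition filings.
# Assumes a hypothetical extract with one row per filing and columns such as
# filing_id, mailing_address, ssn, and alien_number; a real analysis would use
# USCIS's actual data. Flagged values indicate areas for review, not confirmed fraud.
import pandas as pd

def count_identifier_reuse(filings: pd.DataFrame, id_column: str, threshold: int = 10) -> pd.DataFrame:
    """Return identifier values that appear in at least `threshold` distinct filings."""
    counts = (
        filings.dropna(subset=[id_column])        # skip filings missing this identifier
        .groupby(id_column)["filing_id"]
        .nunique()                                # distinct filings per identifier value
        .rename("filings_using_value")
        .reset_index()
    )
    flagged = counts[counts["filings_using_value"] >= threshold]
    return flagged.sort_values("filings_using_value", ascending=False)

if __name__ == "__main__":
    filings = pd.read_csv("self_petition_filings.csv")  # hypothetical data extract
    for column in ["mailing_address", "ssn", "alien_number"]:
        flagged = count_identifier_reuse(filings, column)
        print(f"{column}: {len(flagged)} values used in 10 or more filings")
```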
Another example of multiple instances of the same variables was assistance provided to self-petitioners by attorneys or other organizations. According to USCIS officials, self-petitions filed with assistance are expected, as organizations specialize in providing assistance to petitioners and applicants for immigration benefits, including self-petitions. Thus, USCIS officials noted that the appearance of the same attorneys or other organizations in program data is not necessarily indicative of fraud without further investigation. However, USCIS officials also noted that application mills, in which a relatively large number of incomplete or deficient self-petitions are submitted through a single preparer, also exist and could indicate avenues for further investigation. For example, if investigation revealed submission of self-petitioner affidavits or other supporting evidence across multiple self-petitions using common information, such as duplicate wording, that could be an indicator of potential fraud. In July 2019, the U.S. Attorney for the District of Vermont announced an indictment against a self-petition preparer, charging the man with filing false statements with USCIS, including more than 1,800 fraudulent submissions for more than 1,000 self-petitioners over four years. The preparer is alleged to have falsely claimed that his clients were victims of abuse, without their authorization, according to the U.S. Attorney's office. Our analysis of USCIS data from fiscal year 2009 to January 2019 showed that a large portion of self-petitions were filed with assistance from either attorneys or other organizations. In the case of attorneys, according to our analysis, about 80 percent of self-petitions were filed by foreign nationals with assistance from attorneys or accredited representatives from fiscal year 2009 through January 2019. However, while USCIS collects attorney identifying information on the paper form that self-petitioners submit, officials told us the agency does not capture this information electronically. Therefore, it is not available for analysis. Such analysis could indicate particular attorneys' or representatives' relative shares of self-petitions, and allow USCIS to conduct further analysis, as appropriate. In the case of organizations providing assistance, we found that about 70 percent of self-petitioners from fiscal year 2009 through January 2019 listed various organizations in their filings, and we identified a number of organizations assisting hundreds of self-petitioners each. For example, in one case an organization was listed as providing assistance in over 500 filings, and in another case two entities were listed as providing assistance in over 400 filings each. However, according to USCIS officials, one legal organization providing assistance for 500 filings over a 10-year period is not uncommon or necessarily an indicator of fraud, given that, unlike other petitions, most VAWA self-petitions are filed with the assistance of an attorney or legal representative. Consistent with leading practices as described in the Framework, analyses of multiple uses of unique identifiers or instances of certain variables in self-petition program data could help USCIS identify areas for more targeted review, to determine what accounts for the duplicates in the program data and the extent to which they or other variables could be indicators of potential fraud.
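Preparer-level review of the kind described above could also be scripted. The sketch below, which again relies on hypothetical column names (preparer_id, filing_id, outcome) and a hypothetical data extract, profiles filings by preparer or assisting organization and flags those whose filing volume and denial rate are both well above the program-wide rate. As noted above, USCIS does not currently capture attorney identifiers electronically, and a flagged preparer would only be a candidate for further review, not a confirmed application mill.

```python
# Illustrative sketch: profile self-petition filings by preparer or assisting
# organization and flag high-volume preparers whose denial rate is well above the
# program-wide rate. Column names and the input file are hypothetical; flagged
# preparers are candidates for further review, not confirmed "application mills."
import pandas as pd

def preparer_profile(filings: pd.DataFrame, min_filings: int = 50) -> pd.DataFrame:
    overall_denial_rate = (filings["outcome"] == "denied").mean()
    profile = filings.groupby("preparer_id").agg(
        filings=("filing_id", "size"),                        # assumes one row per filing
        denied=("outcome", lambda s: (s == "denied").sum()),
    )
    profile["denial_rate"] = profile["denied"] / profile["filings"]
    flagged = profile[
        (profile["filings"] >= min_filings)
        & (profile["denial_rate"] >= 2 * overall_denial_rate)
    ]
    return flagged.sort_values(["denial_rate", "filings"], ascending=False)

if __name__ == "__main__":
    filings = pd.read_csv("self_petition_filings.csv")  # hypothetical data extract
    print(preparer_profile(filings).head(20))
```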
Moreover, according to the Fraud Risk Framework, data analytics, such as data mining, can identify suspicious activity or transactions, including anomalies, outliers, and other red flags in a program's data. Activity or transactions that deviate from expected patterns can potentially indicate fraudulent activity, and program managers who effectively use data analytics to detect potential fraud look for unusual transactions or data entries that do not fit an expected pattern. However, as noted earlier, USCIS has not applied data analytics as an antifraud tool for the self-petition program. For example, as previously discussed, USCIS officials told us that while adjudicating self-petitions, USCIS officers may request additional evidence from petitioners for reasons including incomplete or inconsistent information provided in filings, or suspected fraud. The officials also told us the agency does not compile data on the nature of these requests for additional evidence, which number in the thousands annually. Maintaining and analyzing such data, especially when adjudicators are requesting further information because they suspect possible fraud, could provide program-level insights into where self-petitions are incomplete or suspected to be fraudulent. Further, as noted earlier, USCIS does not assess data on the outcomes of self-petition adjudications to determine whether there are any trends or patterns in such data that could be indicative of fraud. In particular, denials or referrals can be based on multiple factors, including potential fraud. Analyzing such outcomes for any patterns or trends that could suggest potential fraud could help USCIS strengthen its efforts to identify and prevent fraud risks in the self-petition program. For example, USCIS officials told us they sometimes observe patterns or trends among self-petitions that may seem suspicious and warrant further review, and they noted as an example an increase in cases involving potentially false claims of abuse from self-petitioners from one country. While not necessarily indicative of fraud, and to provide an example of trend analysis on adjudication outcome data, we analyzed the outcomes of adjudications from the 10 countries with the largest number of self-petition filings and found that the denial rate by country of birth of the self-petitioner varied by as much as a factor of three. Additional analysis across data on adjudication outcomes could help better identify areas for further investigation or review. In addition, the Fraud Risk Framework notes that one leading practice for using data analytics as an antifraud tool is to verify key information, including self-reported data and information necessary to determine eligibility. To effectively prevent and detect instances of potential fraud, managers are to take steps to verify reported information, particularly self-reported data and other key data necessary to determine eligibility for programs or benefits. For example, according to officials, USCIS does not check the validity of key identification information submitted by self-petitioners, and it does not analyze outcomes across characteristics of self-petitions, practices that our prior work indicates could strengthen USCIS's use of data analytics.
More specifically, although USCIS may conduct background checks on self-reported self-petitioner information, officials told us the agency does not have the capability to check the validity of Social Security numbers or passport information that self-petitioners report in their Form I-360 filings. Nevertheless, USCIS officials told us the agency routinely performs overseas verification of self- petitioner documents, such as birth certificates, marriage/divorce certificates, and passports. Based on our analysis of USCIS data, the agency maintains data that could be used for data analytics. For example, the majority of self-petition filings have full name information, addresses, Alien numbers, and, to a lesser extent, Social Security numbers. This relative completeness of data items provides opportunities for data-matching, which, as noted, is a key data analytics technique. USCIS officials told us that generally, they see the value of developing a data analytics capability for the self-petition program, noting that such a capability would be beneficial both in aiding fraud detection and prevention efforts, as well as by allowing timely, accurate reporting on self-petitioner data as part of routine program management and oversight. However, officials also noted that while expanding the range of electronic self-petitioner data maintained would increase analytical capabilities, there would be a cost to implementation, which would need to be balanced against the benefit of the additional antifraud tool, and any data analytics would need to be conducted so as to not target individuals or groups solely based on certain self-petitioner characteristics indicated by data. In other work, we have noted that leading practices in data analytic techniques alone may not be sufficient to prevent fraud in obtaining benefits but can help an agency prioritize and enhance fraud investigations. Developing and implementing a data analytics capability for the self-petition program would provide USCIS with tools to aid in identifying potential fraud in self-petition filings and aid in focusing resources. Further, analysis and insights developed through use of data analytics could inform the self-petition program s periodic fraud risk assessments, which, as described earlier, are a key aspect of the fraud risk management process. <3. DHS Provides Assistance to Potential Victims of Immigration-Related Crimes and Refers Suspected Self- Petition Fraud for Review and Investigation> <3.1. DHS VOICE Office Provides Assistance to Potential Victims of Immigration-Related Crimes> The DHS VOICE Office provides assistance to potential victims of immigration-related crimes. In April 2017, in response to Executive Order 13768, ICE established the VOICE Office to provide professional services and assistance to potential victims and family members of victims of crimes committed by removable aliens. The office s assistance to potential U.S. citizen and LPR victims includes, among other things, providing ICE community relations officers who serve as local representatives to help potential victims understand the immigration enforcement and removal process; victim assistance specialists who provide potential victims with direct service referrals for matters such as counseling; and information, such as the potential offenders immigration and custody status. 
In addition, the office provides referral information to the ICE HSI tip line and answers questions and concerns regarding immigration enforcement through the VOICE Office s toll-free hotline. Data collected by the VOICE Office from hotline calls shows that in fiscal year 2018, a total of 1,543 calls were made to the VOICE Office. Of those 1,543 calls, 130 calls, or 8 percent, were from self-identified victims of marriage-related fraud requesting assistance. VOICE officials indicated that they would consider VAWA self-petition fraud as a subset of marriage fraud; however, self-petition fraud is not separately identifiable in their data. Of those 130 calls, the Office referred 78 alleged victims to ICE s HSI Tip Line. For example, in one case from fiscal year 2018, a caller claimed that his or her spouse married the caller for immigration purposes and attempted to falsely press criminal domestic violence charges against the caller as a means of obtaining immigration status. The Office offered the caller local victim services and referred the caller to both USCIS and the ICE HSI Tip Line. Of the remaining 52 calls from self-identified victims of marriage-related fraud, the office provided the caller with an ICE community relations officer, and the officer recommended actions to victims, such as calling the ICE HSI Tip Line, or providing the victim with a victim assistance specialist to discuss available resources. For example, in another case from fiscal year 2018, a caller claimed his or her spouse married the caller to obtain immigration relief, and falsely accused the caller of domestic violence to obtain legal residency. The VOICE office referred the caller to ICE HSI and explained the victim assistance services available to the caller. See figure 3 for a description of calls made to the VOICE Office and subsequent office action. According to CRCL officials, assessing tips from self-identified victims of immigration fraud poses a challenge, since domestic abusers may use the immigration system against their victims by providing false tips in order to have them removed. Per statutory protections for self-petitioners, DHS treats tips as inherently suspect, and is barred from making adverse determinations of inadmissibility or deportability in adjudications based solely on information provided by certain individuals, such as the alleged abuser or a member of the abuser s household. However, DHS may consider such information if it can be independently corroborated consistent with DHS policy. As for the alleged abuser s information, which may have been included in a VAWA self-petition, USCIS officials noted that USCIS never provides such information to anyone including law enforcement even where allegations of criminal conduct are reported with a self-petition. As a result, U.S. citizens and LPRs face no consequences solely from being named in a self-petition regardless of its outcome. <3.2. DHS Has a Referral Process for Suspected Fraud in Self-Petitions, Which May Result in a Referral to ICE for Criminal Investigation> DHS has a referral process for suspected fraud in self-petitions, which may result in a referral to ICE for criminal investigation. Within USCIS, FDNS immigration officers review self-petition fraud referrals, conduct administrative investigations when warranted, and in limited circumstances, refer cases to ICE for criminal investigation. 
Fraud referrals related to self-petitions typically originate from five sources: (1) the TECS checks that the Vermont Service Center Background Check Unit conducts prior to adjudication, which include notifications that indicate potential national security concerns, public safety threats, and fraud leads in the preadjudication screening process; (2) USCIS adjudicators reviewing self-petitions at any time during the adjudication process; (3) other USCIS offices that may encounter potential self-petition fraud in the course of their work on other USCIS applications; (4) other law enforcement sources, including other federal law enforcement entities; and (5) benefit fraud tips received from the general public. After receiving a referral, FDNS immigration officers determine whether the referral has sufficient information to warrant further investigation. According to FDNS s fraud detection standard operating procedures, FDNS immigration officers either determine that the referral becomes a lead and the lead is accepted, or the referral is declined. After accepting the referral, immigration officers are responsible for conducting an administrative investigation to, among other things, obtain relevant information needed by Vermont Service Center adjudicators to render the appropriate adjudicative decision. If, after conducting research and analyzing the information associated with a lead, the FDNS immigration officer determines that a reasonable suspicion of fraud is articulated and actionable, the lead is elevated to a case. Upon conclusion of the administrative investigation, FDNS immigration officers close the accepted lead and case and record their findings in a Statement of Findings. The Statement of Findings concludes the administrative investigation with one of three types of findings: (1) Fraud Found: the investigation determined fraudulent activities exist in the self- petition; (2) Fraud Not Found: the investigation did not uncover fraudulent activities in the self-petition; or (3) Inconclusive: the investigation could not make a determination of whether fraudulent activity occurred. Once completed, the Statement of Findings is returned to the appropriate referral source. In cases where FDNS immigration officers find self-petition fraud, the case can be referred to ICE s HSI for criminal investigation. According to a 2008 immigration benefit fraud memorandum of agreement between USCIS and ICE, FDNS is to refer individual petitions involving suspected fraud to HSI where (1) the alien is the subject of a TECS record; (2) USCIS suspects misconduct on the part of the self-petitioner s attorney, notary, interpreter, or preparer of the application; or (3) evidence of a criminal conviction of an offense that is not grounds for inadmissibility or removability is present, among other things. Typically, referrals are sent to the National Lead Development Center, where they are distributed to ICE Special Agent In-Charge local offices for further investigation, according to FDNS officials. If a referral is the result of a task team, petitions may be referred directly to ICE Special Agent In-Charge local offices. ICE either accepts the referral and conducts a criminal investigation or declines the referral and sends it back to FDNS. If a referral is declined by ICE, FDNS continues its administrative investigation. Figure 4 provides an overview of the self-petition fraud referral process. 
According to FDNS data, from fiscal year 2014 to March 2019, FDNS created 2,208 fraud referral leads and cases that involved a VAWA self-petition. Total leads and cases increased from 198 in fiscal year 2014 to 801 in fiscal year 2019 (data as of March 2019), an increase of about 305 percent. USCIS officials attributed this increase to an overall increase in the number of self-petitions filed and an increase in fraud leads and cases obtained through USCIS's fraud tip hotline. FDNS data showed that 2,151 leads and cases were accepted by FDNS between fiscal year 2014 and March 2019, or about 97 percent. Table 5 shows the number of fraud leads and cases that contain a self-petition and the disposition of accepted leads and cases between fiscal year 2014 and March 2019. From fiscal year 2014 to March 2019, FDNS found a disposition for 631 of the closed cases that involved a VAWA self-petition. According to USCIS officials, a fraud lead or case is not typically closed within the same year that it is filed. This accounts for differences between the total number of fraud cases and leads filed and the total number of completed cases and closed leads within the same fiscal year. Of the 631 closed cases with a disposition, FDNS found fraud in 332, or 53 percent. Table 6 shows the disposition of closed self-petition fraud leads and cases between fiscal year 2014 and March 2019. According to FDNS data, from fiscal year 2014 to March 2019, FDNS made 68 fraud referrals to ICE for criminal investigation that involved a VAWA self-petition. We inquired with ICE about the status and disposition of these cases. As previously mentioned, for purposes of accepting a referral for criminal investigation, ICE does not make a distinction between self-petition fraud and marriage fraud investigations. As a result, information on the 68 fraud referrals to ICE is encompassed in ICE's immigration benefit fraud investigation data and could not be separated for analysis. Therefore, we could not provide status and disposition information on these referrals. <4. Conclusions> The VAWA self-petition program is designed to protect foreign nationals who are victims of domestic abuse. The decision to approve or deny a VAWA self-petition is consequential, as the program allows an eligible foreign national victim to remain in the country, obtain work authorization, and apply for LPR status independent of their abuser. According to CRCL, VAWA self-petition relief brings safety, security, and stability to legitimate victims who might not otherwise be able to escape domestic abuse. However, approving a fraudulent petition could affect the integrity of the program. USCIS has implemented some aspects of GAO's Fraud Risk Framework in managing the self-petition program, such as having a dedicated antifraud entity, but could improve efforts to detect and prevent potential fraud in the program. More specifically, conducting regular fraud risk assessments and determining a fraud risk profile for the program could help USCIS identify fraud risks in the self-petition program and better determine the extent to which the agency has designed and implemented adequate fraud prevention controls. Further, basing antifraud controls on inherent risks identified through regular fraud risk assessments could help ensure USCIS's antifraud controls are addressing fraud risks in the self-petition program.
Lastly, developing and implementing a data analytics capability could provide USCIS with tools to aid in identifying potential fraud in self-petition filings. Analysis and insights developed through the use of data analytics could inform the self- petition program s regular fraud risk assessments. <5. Recommendations for Executive Action> We are making the following three recommendations to USCIS: The Director of USCIS should plan and conduct regular fraud risk assessments of the self-petition program to determine a fraud risk profile, as provided in GAO s Fraud Risk Framework. (Recommendation 1) The Director of USCIS should develop and implement an antifraud strategy with specific control activities, based upon the results of fraud risk assessments and a corresponding fraud risk profile, as provided in GAO s Fraud Risk Framework. (Recommendation 2) The Director of USCIS should develop and implement data analytics capabilities for the self-petition program, as a means to prevent and detect fraud, as provided in GAO s Fraud Risk Framework. (Recommendation 3) <6. Agency Comments and Our Evaluation> We provided a draft of this report to DHS for review and comment. DHS provided comments, which are reproduced in full in appendix I and discussed below. DHS also provided technical comments, which we incorporated as appropriate. In its comments, DHS concurred with our three recommendations and described actions planned to address them. With respect to our first recommendation that USCIS plan and conduct regular fraud risk assessments of the self-petition program to determine a fraud risk profile, DHS stated that the USCIS FDNS plans to capture data digitally for both I-360 and I-751 self-petitions filed on the basis of domestic abuse, and discuss any patterns observed with stakeholders in order to develop a fraud risk profile. Further, DHS stated USCIS will use the results of data analytics to conduct and update regular fraud risk assessments. With regard to our second recommendation that USCIS develop and implement an antifraud strategy with specific control activities based upon the results of fraud risk assessments and a corresponding fraud risk profile, DHS stated USCIS plans to create an antifraud strategy that includes both adjudicators and FDNS officers in order to emphasize fraud detection prior to adjudication of self-petitions. With respect to our third recommendation that USCIS develop and implement data analytics capabilities for the self-petition program as a means to prevent and detect fraud, DHS stated that USCIS will apply their data analytics capabilities, driven by the results of the self-petition fraud risk assessments, to develop analytic tools that verify information provided by self-petitioners and identify indicators of potential fraud. Further, DHS stated that USCIS will use the results of data analytics to inform antifraud training and will distribute the results to USCIS senior leadership when warranted. We are sending copies of this report to the appropriate congressional committees and the Acting Secretary of Homeland Security. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Rebecca Gambler at (202) 512-8777 or GamblerR@gao.gov or Rebecca Shea at (202) 512-6722 or SheaR@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of our report. 
GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Comments from the Department of Homeland Security Appendix II: GAO Contacts and Staff Acknowledgements <7. GAO Contacts> <8. Staff Acknowledgements> In addition to the contacts named above, Jeanette Henriquez (Assistant Director), Kelsey M. Carpenter, Pamela Davidson, April Gamble, Eric Hauswirth, Brandon Jones, Brendan Kretzschmar, Sasan J. Jon Najmi, Christopher H. Schmitt, and Eli Stiefel made key contributions to this report. Why GAO Did This Study
In fiscal year 2018, foreign nationals filed nearly 13,000 VAWA self-petitions alleging domestic abuse by a U.S. citizen or LPR family member. The Immigration and Nationality Act, as amended by VAWA, provides for immigration relief for self-petitioning foreign nationals who are victims of battery or extreme cruelty committed by their U.S. citizen or LPR family member. The self-petition process allows such victims to obtain classification as an immigrant and ultimately apply for LPR status.
GAO was asked to review fraud risks in the self-petition process and how, if at all, DHS assists U.S. citizens or LPRs who may have been falsely identified as domestic abusers. This report examines the extent to which (1) USCIS has adopted relevant leading practices in GAO's Fraud Risk Framework for the self-petition program; and (2) DHS provides assistance to U.S. citizens or LPRs who may have been falsely identified as domestic abusers in the self-petition process, and steps DHS takes when suspected fraud is identified. GAO reviewed documents, interviewed officials, analyzed program data, and assessed the agency's approach to managing fraud risks against GAO's Fraud Risk Framework.
What GAO Found
Within the Department of Homeland Security (DHS), U.S. Citizenship and Immigration Services (USCIS) has responsibility for the Violence Against Women Act (VAWA) self-petition program for foreign national victims of battery or extreme cruelty committed by their U.S. citizen or lawful permanent resident (LPR) spouse or parent, or their adult U.S. citizen son or daughter. According to USCIS officials, the self-petition program is vulnerable to fraud, such as self-petitioners' use of false or forged documents. USCIS has adopted some, but not all, of the leading practices in GAO's Fraud Risk Framework. While USCIS has established a culture and a dedicated entity to manage fraud risks for the program, it has not fully assessed fraud risks and determined a fraud risk profile to document its analysis of the types of fraud risks the program could be vulnerable to. Further, the number of self-petitions filed has grown by more than 70 percent over the past 5 fiscal years. At the end of fiscal year 2018, USCIS had received 12,801 self-petitions and had over 19,000 self-petitions pending adjudication. Planning and conducting regular fraud risk assessments would better position USCIS to identify fraud risks when reviewing self-petitions. USCIS has instituted some fraud controls, such as developing antifraud training for self-petition adjudicators, but has not developed and implemented a risk-based antifraud strategy based on a fraud risk assessment. Developing and implementing an antifraud strategy would help USCIS better ensure its controls are addressing potential fraud risks in the program.
DHS provides assistance to victims of immigration-related crimes and refers suspected self-petition fraud for review and investigation. Within DHS, U.S. Immigration and Customs Enforcement provides professional services and assistance to potential victims of immigration-related crimes, including self-petition fraud. As shown in the figure below, USCIS also has a referral process for suspected fraud in self-petitions, which may result in a referral for criminal investigation. According to agency data, from fiscal year 2014 to March 2019, USCIS created 2,208 fraud referral leads and cases that involved a VAWA self-petition. Total leads and cases increased from 198 in fiscal year 2014 to 801 in fiscal year 2019 as of March 2019, an increase of about 305 percent.
What GAO Recommends
GAO is making three recommendations, including that USCIS conduct regular fraud risk assessments to determine a fraud risk profile for the program and develop an antifraud strategy with specific control activities. DHS concurred.
<1. Background> CMS and private payers use a variety of quality measures to assess different aspects of health care quality. Process measures assess the extent to which providers effectively implement clinical practices (or treatments) that have been shown to result in high-quality or efficient care, such as the percentage of patients with a myocardial infarction who receive an aspirin prescription on discharge. Others are outcome measures, which track the results of health care, such as mortality, infections, and patients' experiences of that care. To calculate providers' performance on quality measures, CMS and private payers ask providers to report a variety of clinical data. Historically, providers have collected data for quality measures through a detailed, manual review of paper medical records. Other quality measures use data from billing records and patient surveys. More recently, a limited number of electronic quality measures have been developed to allow providers to report data electronically using electronic health records. <1.1. Medicare Quality Programs> Since the early 2000s, CMS has created a number of distinct quality reporting programs within Medicare. These programs generally focus on different sites of care, such as hospitals, physician offices, and nursing homes. Beginning in the early 2000s, CMS launched a number of related programs that offer financial incentives to providers receiving Medicare payments to report their performance on specified quality measures. Some of these programs, such as the Hospital Inpatient Quality Reporting program, are pay-for-reporting programs, in which providers may receive higher payments if they report their performance on the quality measures used in the programs. Others, such as the Hospital Value-based Purchasing program, are pay-for-performance programs, in which the level of providers' performance on the quality measures affects the amount of the payment they receive. CMS also incorporates pay-for-performance in various alternative payment models, such as accountable care organizations, where CMS pays groups of providers based in part on the collective performance of those providers, rather than the fee-for-service traditionally paid in Medicare. <1.2. Developing and Adopting New Quality Measures> At any given point in time, CMS has a set of quality measures it is currently using in its various Medicare quality programs as well as efforts underway to identify different quality measures to better meet program needs. These quality measures may either already have been developed or potentially could be developed. A variety of different entities may develop new health care quality measures, such as the Joint Commission, the National Committee for Quality Assurance, and various medical specialty societies. In some cases CMS itself contracts with entities for the development of measures for use in its Medicare quality programs. CMS has developed a set of guidelines for developing new quality measures that are described in its Blueprint for the CMS Measures Management System. The Blueprint lays out the steps measure developers should follow to first identify health care topics or conditions where new measures are needed, and then develop and test specific new measures to fill those identified gaps. According to CMS estimates, it can take 2 years or more to complete all of these steps.
As part of this process, CMS encourages entities that develop measures to submit them to the National Quality Forum (NQF), a nonprofit organization that evaluates and endorses measures (that is, determines which measures should be recognized as the best available for a given aspect of care). NQF has endorsed over 700 quality measures. In addition, NQF plays a major role in CMS's process for determining which measures to use in its Medicare quality programs. Since 2009, NQF has been the sole organization to function under contract to CMS as the consensus-based entity as described by the provisions of sections 1890 and 1890A of the Social Security Act (SSA). The consensus-based entity manages the Measure Applications Partnership, which is a formal process for obtaining stakeholder input on proposed new measures for Medicare quality programs, along with other measure endorsement and maintenance activities. CMS also relies on other contractors to conduct analyses or disseminate information related to the development and use of quality measures in its Medicare quality programs.

<1.3. CMS Quality Measurement Strategic Objectives>

CMS has established strategic objectives for the measures CMS develops or uses in its Medicare quality programs. CMS's quality measurement strategic objectives have evolved over the last decade as CMS has expanded Medicare quality programs and has collaborated with other organizations that use or develop quality measures, such as private insurance companies. In 2017, CMS announced a revised version of these objectives in its Meaningful Measures Initiative. These eight quality measurement strategic objectives are for CMS to adopt measures that are patient-centered and meaningful to patients, clinicians, and providers; address high-impact measure areas that safeguard public health; are outcome-based where possible; fulfill each program's statutory requirements; minimize burden for providers; create significant opportunity for improvement; address measure needs for population-based payment through alternative payment models; and align across programs and/or with other payers. In addition, to provide greater specificity for its objective to address high-impact measure areas that safeguard public health, CMS has designated 19 specific meaningful measure areas. See appendix I for the list of these meaningful measure areas and the six broad quality priority areas that they address.

<1.4. CMS Funding for Quality Measurement Activities>

CMS's quality measurement activities are funded through the federal budget and appropriations process. Each appropriation includes language that describes an authorized purpose or purposes for which the funds may be used. Such language may specifically reference certain activities such as quality measurement or could refer to a broad purpose under which activities such as quality measurement may have been authorized. Available funds are first obligated (that is, committed to a specific purpose) and then expended when an actual payment is made. Expenditures can occur one or more fiscal years after the obligation was incurred. Funds that are available in a given fiscal year but not obligated during that year are known as unobligated balances. Unobligated balances can be carried over to the next fiscal year, unless their availability expires under the terms of their appropriation. Most CMS funding that is explicitly appropriated for quality measurement activities is available indefinitely, until obligated and expended.
<2. CMS Lacks Complete Information on Its Quality Measurement Funding and on How It Uses Funding to Achieve Its Strategic Objectives>

CMS maintains information in its core budget database on the amount of funding for its quality measurement activities, such as when funding for that purpose is specifically authorized by appropriations. However, CMS's database does not capture all of the funding the agency has obligated that pays for quality measurement activities or the extent to which this funding has supported CMS's quality measurement strategic objectives. Our review of CMS's quality measurement funding information also shows that CMS maintains a substantial amount of unobligated balances (funding that CMS has not yet used and that remains available for quality measurement activities).

<2.1. CMS Maintains Information on Funding for Some Quality Measurement Activities in CMS's Core Budget Database>

CMS officials report that the agency records funding information for its quality measurement activities in its core budget database, HIGLAS. CMS has information on quality measurement funding primarily when the appropriation is specifically authorized for that purpose. CMS officials identified eight appropriations that specifically designate funding for Medicare quality measurement activities over the 10-year period we reviewed (fiscal years 2009-2018). These include five appropriations that have funded the consensus-based entity established under sections 1890 and 1890A of the SSA, to carry out various activities under contract with CMS in accordance with those provisions. CMS officials identified another three appropriations that focused on more discrete aspects of quality measurement, such as developing new quality measures for clinicians. From fiscal years 2009 through 2018, a total of $429.9 million was authorized for these eight appropriations (see table 1). In addition, CMS officials identified some funding used (that is, obligated) for quality measurement activities from appropriations authorized for more general purposes. They obtained information on such usage from HIGLAS based on the presence of labels, such as "quality measure development," in the project code and project description data fields in HIGLAS. According to CMS officials, these data fields provide the most detailed categorization of activities in HIGLAS. Table 2 shows the specific project codes and project descriptions used in HIGLAS to characterize use of quality measurement funding in fiscal year 2018. These obligations are from both appropriations that specifically authorize quality measurement activities and also from general appropriations whose authorized purposes do not specifically mention quality measurement activities. As shown in table 2, the project codes and their descriptions used in HIGLAS provide high-level information that largely matches the information known from the appropriation authorizing such use.

<2.2. CMS Lacks Information on the Total Amount of Quality Measurement Funding and the Extent to Which This Funding Supports Its Strategic Objectives>

Our review of the funding information in HIGLAS found that the data do not capture the total amount of funding CMS has obligated that pays for quality measurement activities.
As we have noted, CMS officials identified funding obligated for quality measurement activities in HIGLAS either because 1) the funding came from appropriations specifically designated for quality measurement purposes, or 2) the funding came from appropriations for more general purposes but had specific HIGLAS project codes to identify its use for quality measurement activities. However, CMS officials told us that they thought there were additional quality measurement activities funded from appropriations for general purposes that could not be identified by project codes in HIGLAS. As a result, they could not determine from HIGLAS what amount of these funds paid for quality measurement activities as opposed to other activities. CMS officials stated that while they do not have information on the amount of this unidentified quality measurement funding, they estimated that it was less than the amount of quality measurement funding identified in HIGLAS. Furthermore, CMS s funding information in HIGLAS also is not sufficiently detailed to show the extent to which the funding was used for activities that support CMS s eight quality measurement strategic objectives. While some HIGLAS project descriptions like Hospital Outcome Measures correspond with one of these objectives, as shown in table 2 most do not. In addition, the documents that CMS uses to plan and monitor spending for quality measurement activities generally do not include information showing how much funding CMS has obligated for activities related to CMS s quality measurement strategic objectives. CMS officials stated that they considered it unduly burdensome to attempt to use HIGLAS to track quality measurement funding according to their strategic objectives. First, they said that quality measurement activities overall constitute a small portion of the funding recorded in HIGLAS. In addition, officials noted that CMS s strategic objectives change over time. Finally, CMS officials stated their belief that all of CMS s quality measure activities help to address the agency s objectives. As a result, CMS cannot determine how its specific funding for quality measurement activities addresses each of its quality measurement strategic objectives and how possible changes in its funding allocations among those activities could help to promote its objectives more effectively. Federal standards for internal control call for agencies to use complete and accurate information and to identify types or categories of information that enable the agency to achieve its objectives. Without more complete information on the total amount of funding obligated to quality measurement activities, CMS officials cannot accurately assess the magnitude of resources they have provided for quality measurement. In addition, even if CMS quality measure activities generally address one or another of its strategic objectives, having information on the extent of funding for each quality measurement strategic objective could help CMS officials assess the amount of funding each of the agency s priorities is receiving. Doing so would enable CMS officials to make adjustments in accordance with their objectives. While collecting more complete and detailed information on funding for quality measurement activities in HIGLAS or using some other method that CMS determines is feasible would require additional effort, CMS could realize corresponding benefits. 
CMS officials told us that at present, when they need to obtain a higher level of detail about funding for quality measurement activities, they do not use HIGLAS and instead typically conduct a manual review of any available underlying documentation, such as documents related to individual contracts. For example, in order to respond to a statutory requirement to report on its spending to develop certain quality measures for physicians, CMS officials told us they needed to review a set of individual contracts associated with those measures. CMS officials noted that this process is often laborious and that the content of available documents may not enable them to obtain all the desired funding information for the specific quality measurement activities in question. Collecting more information routinely about funding for quality measurement activities has the potential to make such manual reviews of documents less necessary and burdensome. The limitations in CMS s information on funding for quality measurement activities have implications for CMS s ability to communicate information outside the agency. As required by the Congress, CMS issued its first annual report on quality measurement funding in March 2019. In this report, CMS itemized information on such funding into four broad categories: Duties of the consensus-based entity, Dissemination of quality measures, Program assessment and review, and Program oversight and design. CMS s report listed a number of more specific activities within these categories without providing the amount of funding it allocated for each of the described activities. More detailed funding information could help the Congress to better understand how CMS is using appropriations for quality measurement, and could assist with effective oversight of these activities. Internal control standards call for agencies to consider the needs and expectations of external users, such as Congress, when collecting and communicating information. <2.3. CMS s Funding Information Shows Substantial Unobligated Balances in Its Quality Measurement Funding> Our review of the funding information CMS provided determined that the agency has maintained substantial unobligated balances related to its quality measurement activities from fiscal years 2010 through 2018. Unobligated balances represent funding that CMS did not use in the year it was appropriated, and that remains available for use in future years. All but one of the eight appropriations that specifically authorize spending for quality measurement activities are available indefinitely. Five of these appropriations funded quality measurement activities under sections 1890 and 1890A of the SSA. In the case of these five appropriations, with the exception of fiscal year 2009, CMS had unobligated balances each year that were larger than or similar to the total amount the agency had obligated from those appropriations that year (see figure 1). Figure 1 also shows three other appropriations more narrowly focused on developing new measures for clinicians and post-acute care providers under Medicare (appropriated by MACRA section 102 and the IMPACT Act sections 2a and 2d). Since 2015, unobligated balances for these appropriations also generally exceeded annual obligations. See appendix II for more detailed information. 
CMS officials stated that unobligated balances reflect broader spending decisions for quality measurement as well as other activities the agency makes to meet its strategic objectives and any related legislative requirements. CMS officials said that in general, they chose to use the available quality measurement funds conservatively to ensure there were no gaps in funding to carry out their statutory responsibilities, in view of uncertainty about the availability and timing of funding in future years. They also said that they took into account the total amount of appropriated funds including unobligated balances in developing the scope and duration of quality measurement activities. The officials noted that it often takes more than one year to implement these activities, in order to gather information, select contractors, or solicit and award grant applications. Regarding the level of unobligated balances to be carried over from one fiscal year to the next, CMS officials told us that they work to obligate all appropriations in accordance with statutory requirements, and do not have thresholds for maximum unobligated balances. Maintaining large unobligated balances means that CMS is retaining funds for future quality measurement activities rather than using them for current quality measurement activities. One example of how such choices can affect the scope and timing of CMS s quality measurement activities was the outcome of a CMS competition for cooperative agreements, announced in March 2018, to develop new clinician quality measures to address identified measurement gaps. Drawing on funds from the appropriation dedicated to developing, improving, updating, or expanding new clinician quality measures (MACRA 102) that were available for use until 2022, CMS set a maximum amount for the awards of $30 million over three years. CMS officials determined that the $30 million ceiling meant that there was adequate funding for seven awardees, while CMS indicated that additional applicants scored well on CMS s selection criteria and addressed areas of need. For fiscal year 2018, MACRA 102 had an unobligated balance of $42 million, with an additional $15 million appropriation in place for fiscal year 2019. As of May 23, 2019, CMS officials told us that they had not announced new competitions to develop clinician quality measures. <3. CMS Lacks Assurance That the Quality Measures It Decides to Use or Develop Effectively Promote Strategic Objectives> CMS takes different approaches in deciding which Medicare quality measures to use in its programs, which to remove, and which new measures to develop. However, CMS lacks procedures to ensure that these decisions are consistent with its quality measurement strategic objectives, and CMS has not yet developed or implemented performance indicators to evaluate its overall progress toward achieving these objectives. <3.1. CMS Takes Different Approaches in Deciding Which Quality Measures to Use and Develop> For selecting measures to be used in its Medicare quality programs, CMS has an annual process, as defined by the Patient Protection and Affordable Care Act. CMS makes a number of decisions that influence measure selection throughout the process. Each year CMS asks measure developers to submit candidate quality measures to CMS for potential selection. CMS makes preliminary decisions on which of these measures to use in its quality programs, and it publishes this selection of measures in its annual Measures under Consideration list (MUC). 
The MUC list then undergoes public review by multiple stakeholders. After this review, CMS chooses which measures to include in the formal rulemaking processes that ultimately determine which measures are added to its quality programs. See table 3. To make decisions on which measures to include in the MUC list, CMS officials review the submissions. According to CMS, officials from each Medicare quality program, referred to as quality program leads, separately review each measure submitted for use in that program. CMS officials told us that as necessary, they consult with technical experts and with other CMS or Department of Health and Human Services (HHS) officials. According to CMS officials, the program leads make recommendations to higher level officials, such as division directors, on whether CMS should accept or reject each measure. CMS internal guidance outlines factors that, among other things, officials should consider. Some of these factors reflect the strategic objectives laid out in the Meaningful Measures Initiative, and the guidance also indicates that officials may consider additional factors in their decision-making. CMS officials told us that, when making measure selection decisions, program teams are given the flexibility to develop criteria that best suits their programs needs, noting that some programs are intended to address a broad range of areas, such as the Inpatient Quality Reporting Program, while others have a more limited focus, such as the Hospital Readmissions Reduction Program. CMS officials told us that the director of CMS s Center for Clinical Standards and Quality, which is responsible for quality measurement, makes the final measure selection decisions and, in doing so, generally accepts the recommendations of the program teams. Our analysis of CMS s quality measures indicates that the number of candidate quality measures submitted to CMS for the MUC list has decreased from 335 measures in 2014 to 67 in 2018. CMS officials told us the decline in the number of candidate measures submitted reflected CMS efforts to more clearly define a targeted set of quality measurement priorities for measure developers and to reduce provider reporting burden. Minimizing provider burden is one of CMS s strategic objectives, and, according to CMS officials, it represents a priority communicated by the CMS administrator. For more information about CMS s measure selection decisions for its annual MUC list in 2014 through 2018, see appendix III. CMS officials also make decisions annually about which existing measures CMS will remove from its Medicare quality programs. According to CMS officials, the process for deciding which measures to remove is an ongoing, iterative process, and discussions on which measures to remove generally occur in parallel with discussions for selecting measures, with discussions on both measure selection and removal coming to a conclusion in the drafting of the annual proposed and final rules for each program. For measures that are being used in its quality programs, CMS relies on measure developers to monitor the performance of their measures based on principles defined in CMS s Blueprint. According to the Blueprint, information from developers monitoring efforts, including recommendations from technical experts, should be conveyed to and evaluated by CMS officials. CMS officials told us that their decisions to remove measures often take into account the recommendations made by technical experts. 
In addition, CMS has promulgated through federal rulemaking eight factors for determining whether to remove existing measures from its Medicare quality programs, some of which reflect its quality measurement strategic objectives. CMS officials also said that in deciding to remove measures from CMS quality programs in 2018 they, in part, considered an assessment of the costs of reporting measures relative to the benefit of continued use of the measures. CMS decisions to remove measures have been included in notices of proposed rulemaking in the Federal Register, which allows for public comment and further consideration before issuance of final rules to that effect. In addition to making decisions on the selection and removal of measures, CMS officials also make decisions regarding which new measures to develop. Our review of CMS contract documents, including task orders, indicates that CMS typically awards multiple year contracts to conduct ongoing assessments of quality measures and to develop measures for specific Medicare quality programs, such as inpatient psychiatric facilities or post-acute care providers. Those task orders often call on contractors to convene technical expert panels and conduct additional analyses to assess what measures are currently available for use and what gaps exist in available measures. CMS officials told us they review these reports and provide informal feedback to the contractors. CMS also establishes parameters that guide these efforts. For example, in its 2016 Measure Development Plan for Medicare s new physician payment system, after soliciting public input, CMS designated six medical specialty areas in which to focus its measure development efforts, and subsequently added five more specialties on which to focus the work of its contractors. For more information about outside entities that perform quality measurement activities under contract with CMS and the efforts CMS has taken to coordinate these activities across its contractors, see appendix IV. <3.2. CMS Lacks Procedures for Systematically Assessing Whether the Measures It Decides to Develop and Use Address Its Strategic Objectives> CMS has taken some steps that provide opportunities for CMS officials to consider how quality measures may help address the agency s quality measurement strategic objectives. CMS officials said that in 2018 they began using the Measure Review Template, a spreadsheet used to consolidate information on quality measures submitted to CMS by measure developers. CMS officials told us that they use the spreadsheet to inform their discussions, such as by considering how measures are distributed across the 19 meaningful measure high-impact areas. CMS is also developing another tool, the Quality Measure Index, that is intended to provide a standard methodology to score measures on dimensions that include several of CMS s eight quality measurement strategic objectives. In addition, CMS officials told us that on occasion they have made limited assessments across measures concerning specific strategic objectives. CMS officials told us that these limited assessments across measures are generally performed when a measure submitted for use in its Medicare quality programs is closely related to another measure, which affects the CMS objective to increase measure alignment. In addition, they said they have identified a few indicators that they use to continuously assess their decision-making process, such as the percentage of outcome measures. 
CMS also documents some information about its quality measurement decisions. For example, the agency announces its final selection of quality measures to be added to and removed from its Medicare quality programs in the annual federal proposed and final rules for each of those programs. The rationale for selecting each measure is provided as a summary of the peer-reviewed evidence of the impact that use of the measure will have on clinical care. In addition, CMS maintains an internal tracking system, which assembles the information that measure developers provide about the measures they submit to CMS. This system includes some information related to CMS s quality measurement strategic objectives, such as the meaningful measures high-impact area the measure is intended to address. While these steps provide some information about the linkages between certain quality measures and some of CMS s quality measurement strategic objectives, CMS lacks procedures to ensure systematic assessment of each quality measure against each of its eight quality measurement strategic objectives. For example, while CMS has implemented the Measure Review Template to consolidate some information on measures, the template does not provide procedures for systematically assessing how each measure will help CMS achieve all eight of its quality measurement strategic objectives. The Quality Measure Index currently under development has the potential to be used in a systematic assessment of each measure, but according to CMS officials, as of March 2019 the agency had not yet determined how it planned to use this tool once its testing was complete. Furthermore, CMS lacks procedures to ensure a systematic assessment of whether the collective set of measures it decides to develop or use will help CMS achieve each of the objectives, which could help determine the extent to which each of the objectives is being effectively addressed. The limited assessments across measures that CMS officials said they perform do not consider whether each of CMS s objectives is being addressed. For example, one of CMS s eight quality measurement strategic objectives directs CMS to address 19 high-impact measure areas. CMS officials told us that, for each quality program, they look at whether measures generally address the high-impact measure areas, but gaps in these areas remain to be filled. In 2018, there were no measures used in CMS quality programs that addressed the high-impact area equity of care and 13 of 17 Medicare quality programs had no measures that addressed the community engagement area. Measure developers did not submit measures to CMS that addressed these areas, and CMS did not identify specific initiatives to address them. CMS officials told us, however, that CMS supports discussions of key methodological considerations for collecting and analyzing measure data that could help enable future development of these measures. Last, CMS lacks procedures for documenting the consistent application of those systematic assessments. Federal internal control standards indicate the importance of documenting decisions to support achieving agency objectives. Specifically, CMS does not document, either in its public reporting or internal tracking system, how each measure it decides to use is expected to promote each of its eight quality measurement strategic objectives. For decisions on developing new measures, the agency records less information. 
For example, CMS does not maintain a consolidated list of decisions to initiate the development of new quality measures across the various Medicare quality programs. CMS officials also told us that they generally do not maintain documentation of discussions on how or why they selected one measure for development over another. If CMS develops procedures to consider the effect of each of its quality measurement decisions on each of its quality measurement strategic objectives, then documentation of these procedures would help to show that they are implemented consistently. Federal standards for internal control state that management should design and implement internal control activities, such as tools and documentation of decisions, to support the agency in achieving its objectives. Without procedures that ensure that its quality measures fully address its strategic objectives, CMS increases the risk that the measures it decides to develop and use will not help the agency achieve its quality measurement strategic objectives as effectively as possible. <3.3. CMS Has Not Established Performance Indicators to Determine Its Overall Progress in Achieving Its Quality Measurement Strategic Objectives> CMS has not developed and implemented performance indicators that would be needed to determine if it is making progress in meeting its quality measurement strategic objectives. Establishing these indicators and using them to evaluate its progress towards meeting each of its quality measurement strategic objectives would enable CMS to determine whether its quality measurement efforts are sufficient or whether changes in these efforts are needed. According to federal internal control standards, after agencies establish objectives, they should establish a set of performance indicators and use them to assess their effectiveness in achieving their objectives and identify improvements in their work, as needed. However, CMS has not established performance indicators for its strategic objectives that would provide a basis for determining its progress towards achieving these objectives. Such performance indicators would relate to each of CMS s quality measurement strategic objectives and provide information on interim progress toward achieving these objectives. For example, CMS could establish one or more indicators of its progress toward addressing the 19 high-impact measure areas that safeguard public health, and an indicator of providers reporting burden for quality measurement to see if it showed an overall reduction. CMS officials told us that they assess the impact of the agency s quality measurement activities by reviewing changes over time in health care providers reported performance on selected quality measures. However, these measures are for providers quality of care, and are not indicators designed to determine the agency s progress in achieving its eight strategic objectives for quality measurement. Specifically, CMS has completed the National Impact Assessment of Quality Measures report every 3 years since 2012. These reports focus on trends in the performance of health care providers on a number of specific quality measures. Such analyses do not evaluate CMS s performance in developing and choosing to use measures that promote its quality measurement strategic objectives. CMS has convened the Meaningful Measurement and Improvement Affinity Group, a workgroup of CMS officials involved in quality measurement. 
This workgroup s stated mission is to champion the Meaningful Measures Initiative and facilitate its implementation across the agency. CMS officials told us that the workgroup has begun to discuss potential ways to evaluate the agency s progress in achieving the eight strategic objectives laid out in the Meaningful Measures Initiative. However, the information CMS officials provided on the workgroup s activities, as of March 2019, indicated that the group had not yet determined how to gauge such progress, such as by establishing performance indicators. <4. Conclusions> CMS plays a leading role in the process of developing new quality measures and selecting measures for use in its various quality programs in Medicare. These programs in turn affect the quality of care the program s beneficiaries receive. However, CMS lacks complete information on the amount of resources it has obligated for its quality measurement activities and how its allocation of those resources relates to its quality measurement strategic objectives. The agency also lacks procedures to ensure that the decisions it makes to develop and use measures for its quality programs are consistent with those objectives. Finally, CMS has not developed and implemented performance indicators to evaluate its progress towards achieving these objectives. Taken together, these issues limit CMS s ability to determine whether its allocation of resources and quality measurement decisions are optimal or whether changes are needed in its approach. <5. Recommendations for Executive Action> We are making the following three recommendations to CMS: The Administrator of CMS should, to the extent feasible, maintain more complete information on both the total amount of funding allocated for quality measurement activities and the extent to which this funding supports each of its quality measurement strategic objectives. (Recommendation 1) The Administrator of CMS should develop and implement procedures to systematically assess the measures it is considering developing, using, or removing in terms of their impact on achieving CMS s strategic objectives and document its compliance with those procedures. (Recommendation 2) The Administrator of CMS should develop and use a set of performance indicators to evaluate the agency s progress towards achieving its quality measurement strategic objectives. (Recommendation 3) <6. Agency Comments> We provided a draft of this report to HHS for review and comment. In its written comments, which are reproduced in appendix V, HHS concurred with our recommendations. Regarding our first recommendation, HHS stated that it has undertaken a review of its fiscal accountability processes for its quality improvement activities and is implementing more granular tracking of funding specific to quality measurement to the extent it is feasible. Regarding our second recommendation, HHS stated that it will determine what steps may be needed to further document how its measure decisions impact the achievement of CMS s quality measurement strategic objectives. HHS s comments did not address the need to develop and implement procedures for systematically assessing measures against the strategic objectives, as we recommended. Regarding our third recommendation, HHS stated it would consider how best to evaluate its progress in meeting its quality measurement strategic objectives. In addition, HHS provided technical comments, which we incorporated as appropriate. 
We are sending copies of this report to the appropriate congressional committees, the Secretary of Health and Human Services, the Administrator of the Centers for Medicare & Medicaid Services, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or farbj@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. Appendix I: CMS Quality Priorities and Meaningful Measure Areas As part of its Meaningful Measures Initiative, the Centers for Medicare & Medicaid Services (CMS) identified 19 meaningful measure areas to specify its priorities under its quality measurement strategic objective to address high-impact measure areas that safeguard public health. The 19 areas are linked to six broader health care quality priorities previously identified in the 2011 National Strategy for Quality Improvement in Health Care. See table 4. Appendix II: CMS Appropriations for Medicare Quality Measurement Activities, Fiscal Years 2009-2018 The Centers for Medicare & Medicaid Services (CMS) has identified five separate appropriations that for various fiscal years have funded the activities assigned to the consensus-based entity (currently the National Quality Forum), along with certain other quality measurement activities, as described in sections 1890 and 1890A of the Social Security Act. See table 5. Three additional appropriations focus on specific Medicare quality measurement activities, such as post-acute care measures. See table 6. Appendix III: Description of Quality Measures CMS Selected for Its Annual Measures under Consideration List, 2014-2018 Tables 7 to 12 below present descriptive information that the Centers for Medicare & Medicaid Services (CMS) collects through its issue tracking system on the measures submitted to CMS by measures developers for potential use in CMS s Medicare quality programs. Appendix IV: CMS-Contracted Organizations That Perform Quality Measurement Activities and Efforts to Encourage Coordination The Centers for Medicare & Medicaid Services (CMS) has used the majority of its Medicare quality measurement funding for activities conducted by outside organizations under contract with CMS. Between fiscal years 2009 through 2018, the amount of obligations to contracted organizations increased from $10 million to nearly $55 million. See table 13. The total amount of funds obligated to each contractor in fiscal years 2009 through 2018 to perform Medicare quality measurement activities varied, ranging from $1,000 to $139,397,410. For fiscal years 2009 through 2018, 91 percent of funds obligated to contracted organizations for Medicare quality measurement activities went to 12 of 59 contracted organizations. See table 14. CMS has undertaken efforts to coordinate the Medicare quality measurement activities performed by its contractors. For example, CMS works with a CMS contractor, Battelle, to facilitate monthly webinars with its Measure & Instrument Development and Support (MIDS) contractors. The purpose of the webinars is to provide contractors with a forum to discuss each other s quality measurement activities and to exchange ideas. For more information about CMS s formal efforts to coordinate the quality measurement activities of its contractors, see table 15. 
Appendix V: Comments from the Department of Health and Human Services

Appendix VI: GAO Contact and Staff Acknowledgments

<7. GAO Contact>

<8. Staff Acknowledgments>

In addition to the contact named above, Will Simerl, Assistant Director; Eric Peterson, Analyst-in-Charge; Jonathan Adams, George Bogart, Krister Friday, Cathy Hamann, Katie Mack, and Dan Ries made key contributions to this report. Also contributing were Vikki Porter and Ethiene Salgado-Rodriguez.

Related GAO Products

Health Care Quality: HHS Should Set Priorities and Comprehensively Plan Its Efforts to Better Align Health Quality Measures. GAO-17-5 (Washington, D.C.: October 13, 2016).

Patient Protection and Affordable Care Act: Procedures for Reporting Certain Financial Management Information Should Be Improved. GAO-14-697 (Washington, D.C.: September 22, 2014).

Budget Issues: Key Questions to Consider When Evaluating Balances in Federal Accounts. GAO-13-798 (Washington, D.C.: September 30, 2013).

Health Care Quality Measurement: HHS Should Address Contractor Performance and Plan for Needed Measures. GAO-12-136 (Washington, D.C.: January 13, 2012).

Program Evaluation: Improving the Flow of Information to the Congress. GAO/PEMD-95-1 (Washington, D.C.: January 30, 1995).

Why GAO Did This Study
To encourage greater value in health care, CMS adjusts its Medicare payments to many health care providers based on measures of the quality of care. Therefore, the decisions CMS makes to choose certain quality measures have significant consequences. These decisions may involve selecting specific existing measures for CMS to use, stopping the use of some measures, or identifying new measures to be developed.
The Bipartisan Budget Act of 2018 contains a provision for GAO to review CMS's quality measurement activities. For this report, GAO (1) assessed the information CMS maintains on funding of health care quality measurement activities, and (2) described and assessed how CMS makes decisions to develop and to use quality measures. GAO analyzed CMS funding data for 2009 through 2018 and data on CMS quality measurement selections for 2014 through 2018. GAO reviewed CMS documentation related to its decisions on quality measurement and interviewed program and contractor officials.
What GAO Found
The Centers for Medicare & Medicaid Services (CMS), within the Department of Health and Human Services (HHS), maintains information on the amount of funding for activities to measure the quality of health care provided under Medicare. CMS's information shows it has carried over from each year to the next large amounts of available funding, known as unobligated balances, for quality measurement activities from fiscal years 2010 through 2018 (see figure). CMS officials said they maintained such available funding to ensure there were no gaps in funding for future years. However, CMS officials also told GAO that the information it maintains does not identify all of the funding the agency has obligated for quality measurement activities. Further, it does not identify the extent to which this funding has supported CMS's quality measurement strategic objectives, such as reducing the reporting burden placed on providers by CMS's quality measures. With more complete and detailed information, CMS could better assess how well its funding supports its quality measurement objectives.
CMS takes different approaches for deciding which quality measures to develop and to use. However, CMS lacks assurance that the quality measures it chooses address its quality measurement strategic objectives. This is because CMS does not have procedures to ensure systematic assessments of quality measures under consideration against each of its quality measurement strategic objectives, which increases the risk that the quality measures it selects will not help the agency achieve those objectives as effectively as possible. These procedures, such as using a tool or standard methodology to systematically assess each measure under consideration, could help CMS better achieve its objectives. In addition, CMS has not developed or implemented performance indicators for each of its quality measurement strategic objectives. Establishing these indicators and using them to evaluate its progress towards achieving its objectives would enable CMS to determine whether its quality measurement efforts are sufficient or changes are warranted.
What GAO Recommends
GAO recommends that CMS (1) maintain more complete and detailed information on its funding for quality measurement activities, (2) establish procedures to systematically assess measures under consideration based on CMS's quality measurement strategic objectives, and (3) develop and use performance indicators to evaluate progress in achieving its objectives. HHS concurred with all three recommendations. |
<1. Background>

<1.1. GAO's Questions for Assessing Reform Efforts>

In developing our June 2018 report to assist the Congress, OMB, and agencies in assessing agency reform plans, we reviewed our prior work and leading practices on organizational transformations; collaboration; government streamlining and efficiency; fragmentation, overlap, and duplication; and high-risk and other long-standing agency management challenges. The resulting June 2018 report includes 58 key questions to aid in assessing reform efforts. These questions are organized into four broad categories and 12 subcategories. We determined that the questions most relevant to the current implementation stage of State's reform efforts are found in two subcategories: (1) Leadership Focus and Attention and (2) Managing and Monitoring. Table 1 lists the key questions in these subcategories.

<1.2. State's 17 Reform Projects>

In response to the March 2017 Executive Order 13781 and the ensuing OMB memo, State launched a listening tour intended to gather ideas and feedback from State and USAID employees. As a key component of this outreach effort, State hired a contractor to design and administer a confidential online survey, which was sent to all State and USAID employees in May 2017. According to the contractor's report, the survey had a 43 percent response rate, with 27,837 State employees and 6,142 USAID employees responding to the survey. The contractor also conducted in-person interviews with a randomly selected cross section of personnel, which included 175 employees from State and 94 from USAID. The contractor's report on the results of the survey and the interviews highlighted five areas for State reforms. In July 2017, the Deputy Secretary of State created five planning teams to develop multiple projects in those five areas. The Deputy Secretary also established an Executive Steering Committee composed of senior State and USAID officials to guide the five planning teams and provide direction during the reform process. Led jointly by State and USAID, each planning team comprised participants from a cross section of overseas and domestic workforces. The planning teams were tasked with gathering information and conducting analysis as described below:

Foreign Assistance Programs: Analyze current foreign assistance programs at State and USAID to develop a future vision, ensuring alignment with national priorities.

Overseas Alignment and Approach: Assess key diplomatic activities and identify required platforms, including the balance of work between headquarters and the field.

Human Capital Planning: Identify ways to promote an agile and empowered workforce as part of an overarching talent map.

Management Support: Identify opportunities to streamline administrative support functions at the bureau and agency levels to ensure front line effectiveness.

Information Technology (IT) Platform Planning: Focus on improving the employee experience through increased use of cutting-edge technology and streamlining duplicative systems and processes.

Figure 1 shows a timeline of key events in State's initial reform efforts. The planning teams developed specific reform projects, listed below in table 2, which State described in the fiscal year 2019 budget justification it submitted to Congress in February 2018. According to implementing officials, all these projects predated the Executive Order and OMB memo issued in the spring of 2017.
They also noted, however, that the administration s reform-related directives helped advance State s preexisting efforts by focusing management attention and agency resources on these projects. <2. As of April 2019, State Had One Completed and 13 Continuing Reform Projects; Two Other Projects Had Stalled and One Project Was Discontinued> As of April 2019, according to State officials and status reports, State had completed one of its 17 reform projects; 13 projects were continuing; two projects were stalled pending future decisions or actions; and one project was discontinued. Table 3 provides additional details on each project and a summary of the results of our analysis. <3. Loss of Leadership Focus Contributed to Staff Uncertainty about Some Reform Efforts, Although Bureaus and Offices Have Taken Steps to Manage and Monitor Continuing Projects> <3.1. Leadership Focus and Attention> As State shifted into the implementation phase of its reform efforts in early 2018, multiple transitions within the agency contributed to a loss of leadership focus on the efforts, resulting in uncertainty about leadership s support for some reform projects. In February 2018, State reported to Congress in its fiscal year 2019 budget justification that it was pursuing the reform projects we described above. In March 2018, the first transition affecting the implementation of those projects occurred when the President removed the then Secretary of State and nominated the then CIA director to replace him; in April 2018, the Senate confirmed the current Secretary. According to senior State officials, when the new Secretary took office, his top priority was ending the hiring freeze and restarting a concerted recruitment effort because vacancies in key positions and a general staffing shortfall would otherwise have led to what one senior official described as a cataclysmic failure at State. These senior officials noted that the new Secretary decided some of the existing reform projects were not well designed and that he wanted greater emphasis on cybersecurity and data analytics. They said he also wanted to pursue other initiatives, including a new proposal to create a Global Public Affairs Bureau by merging two existing bureaus. The senior officials told us that the Secretary authorized responsible bureaus and offices to determine whether to continue, revise, or terminate existing reform efforts or launch new initiatives. However, State did not formally communicate other changes in its reform priorities to Congress, such as its plan to no longer combine State and USAID s real property offices. State initiated another transition in leadership of the reform efforts in April 2018 when it disbanded the dedicated planning teams overseeing the reform efforts and delegated responsibility for implementing the reform projects to relevant bureaus and offices. As the planning teams finished working on their particular reform efforts and prepared to transfer these projects to the bureaus, some planning teams provided memos and reports on the status of their efforts and offered recommendations for the bureaus to consider when determining next steps in implementing the projects. Some implementing officials, however, reported that they received little or no direction regarding their projects or any other indication of continued interest in their project from department or bureau leadership aside from the initial notification that the project had been assigned to them. 
For example, in separate discussions with implementing officials responsible for three different projects, the officials reported that they had not received any direction or other guidance related to their assigned project since it was delegated to them in April 2018. In one case, this lack of communication continued for nearly a year. In addition, although implementing officials said that they have managed to incorporate reform-related work into their daily responsibilities, they noted that there were multiple benefits from having had dedicated planning teams to lead earlier phases of State s reform efforts. For example, they said that the dedicated teams included senior officials and the regular involvement of high-level leadership facilitated by these teams had helped advance the reform efforts. These dedicated teams also required staff to set aside time to focus on reform initiatives, which allowed them to develop holistic solutions to reform-related challenges. Conversely, implementing officials reported negative implications of not having dedicated teams. For example, one implementing official described how positive work initiated under the leadership of these dedicated teams including efforts to eliminate redundancies and identify opportunities for consolidation ended when the teams were disbanded because the staff and resources needed to continue these efforts were no longer available. Various State officials noted that the prolonged absence of Senate- confirmed leadership in key positions posed additional challenges. We have previously testified that it is more difficult to obtain buy-in on long- term plans and efforts that are underway when an agency has leaders in acting positions because federal employees are historically skeptical of whether the latest efforts to make improvements are going to be sustained over a period of time. For example, State did not have a Senate-confirmed Under Secretary for Management from January 2017 to May 2019. In November 2018, the Deputy Secretary of State told us that the lack of a confirmed Under Secretary for Management was hindering State s ability to conduct business and implement reforms. The bureaus and offices responsible for 12 of State s 13 continuing reform projects reported directly to an Acting Undersecretary for Management from January 2017 through May 2019. Moreover, State officials told us that both projects that we determined to be stalled were, among other things, awaiting the confirmation of an Under Secretary for Management to make key decisions. Furthermore, some implementing officials told us that the lack of confirmed officials in leadership positions within the bureaus responsible for implementing the projects added to a lack of leadership focus on implementing some of State s reform projects. According to State officials, as of April 2019, although 13 of the reform projects described in the fiscal year 2019 Congressional Budget Justification were considered by State to be continuing, some had been scaled back, slowed down, or both as a result of senior leadership s shifting priorities and attention. For example, one of State s initial reform projects was related to better management of real property. However, State ultimately scaled back this project, effectively splitting it into two projects: One project focused on real property process improvements is continuing, but State has discontinued the other project to consolidate its and USAID s real property function. 
Implementing officials told us in November 2018 that they were still pursuing the internal real property process improvements. They said then that they expected this reform project would likely progress at a slower pace without the dedicated team that previously had provided direct access and frequent interaction with senior department leadership. However, these officials recently informed us that the pace of progress on this project actually increased under the leadership of the bureau s Senate-confirmed Director. The bureau was led by acting directors from January 2017 through September 2018. We have identified leadership focus and attention as practices vital to successfully implementing reform efforts. These practices include communicating clear and compelling reasons for the reforms, having a dedicated implementation team to manage the transformation process, and designating leaders responsible for implementing reforms and holding them accountable. Dedicating a strong and stable implementation team responsible for a transformation s day-to-day management is important to ensuring that reforms receive the focused, full-time attention needed to be sustained and successful. One of the key responsibilities of a dedicated team is communication, particularly answering questions about the reform process from employees and other stakeholders. An implementation team is also important to ensuring that reform efforts are implemented in a coherent and integrated way. Because an agency s transformation process is a large undertaking, we have found that an implementation team must have direct access to and be accountable to top leadership. In turn, top leadership must vest the team with the necessary authority and resources to set priorities, make timely decisions, and move quickly to implement top leadership s decisions regarding the transformation. In addition, we previously reported that the single most important element of successful improvement initiatives is the demonstrated commitment of top leaders. This commitment is most prominently demonstrated through top leaders personal involvement in developing and directing reform efforts. Federal standards for internal control in the federal government also emphasize the importance of maintaining leadership continuity in order to achieve agency objectives. As a result, in other reports, we have recognized that agency reform efforts can take years to implement and that the time frame required for change typically takes longer than the tenures of political leaders. Similarly, the time it takes to nominate and confirm officials for senior management positions can also hamper efforts to initiate reforms or sustain momentum needed to successfully implement reform initiatives. For these reasons, and others, we have highlighted the need to ensure that top leadership drives the transformation and establishes dedicated teams to manage the transformation process. Taken together, the leadership transitions at State had two significant effects on State s reform efforts. First, the transition of departmental leadership and lack of direction and communication about subsequent changes in leadership s priorities contributed to uncertainty among implementing officials about the future of individual reform projects. Second, according to implementing officials, the transition of project responsibility from dedicated teams to bureau-level implementing officials resulted in fewer resources and a lack of senior leadership involvement and attention for some projects. 
Absent leadership decisions, implementing officials will continue to struggle with understanding leadership priorities with regard to State's reform efforts. Similarly, for any projects that are determined to be leadership priorities, day-to-day implementation activities will continue to be hampered by the lack of a dedicated team to guide and manage the agency's overall reform effort. <3.2. Managing and Monitoring> Although uncertainty exists about the leadership priorities regarding reform efforts, the bureaus and offices responsible for implementing State's reform projects have taken steps to manage and monitor their reform projects. Our previous work has identified monitoring as another important practice when implementing reform efforts, including, among other things, developing implementation plans and ensuring transparency by publicly reporting on progress toward milestones. These practices are also incorporated into State's Foreign Affairs Manual and other department policies. We found that the relevant bureaus and offices responsible for implementing reform projects had developed implementation plans and that these plans identified milestones and deliverables for the projects. For example, the Human Resources Services Delivery project had an implementation plan with milestones and deliverables, such as identifying programs and functions for consolidation in 2019 and reducing human resource delivery costs by 14 percent by 2022. Similarly, we found that the implementation plan for the IT Modernization project incorporated milestones including, among other things, implementing a comprehensive enterprise IT risk management program by fiscal year 2020; reducing average deployment time for new IT capabilities by 10 percent annually from fiscal year 2019 through fiscal year 2021; and increasing workforce access to cloud-based email and business data from 10 percent to 100 percent by September 30, 2019. With regard to monitoring, while there is no centralized mechanism for reporting progress on all projects, we found that each of the ongoing projects currently has some form of progress reporting. For example, State reports progress on projects with IT components, such as Real-Time Collaboration and Work Anytime, Anywhere and Improve Enterprise-Wide Data Accessibility, as part of its quarterly reporting on IT Modernization under the Government Performance and Results Modernization Act of 2010. As a result, these projects have continued within a formal monitoring structure that involves regular web-based status updates and progress reporting. Other reform efforts, such as human capital and real property projects, are monitored against milestones established in State's Joint Strategic Plan, and progress is reported in State's Annual Performance Reports. Progress for certain projects is also monitored and reported in other reports, such as State's joint strategic plan, IT strategy, or human capital plan. Finally, other reform projects, such as State's acquisition reform efforts, are reported at the government-wide level as part of the Cross-Agency Priority Goals outlined in the President's Management Agenda. State collects data and evidence in order to measure progress in achieving outcome-oriented goals it sets for these projects. State reports these goals and relevant performance data in its annual performance plans and reports. For example, State uses the U.S.
General Services Administration s Customer Satisfaction Survey to measure and report the performance of its Human Capital Delivery Services reform efforts. State also uses data collected through the Office of Personnel Management s Federal Employee Viewpoint Survey to measure employee satisfaction, which State established as a performance indicator for this project. <4. Conclusions> Effectively implementing major reforms can span several years and must be closely managed. In 2017, State began a reform effort that led to 17 reform projects, most of which are unimplemented but still continuing. State notified both OMB and the Congress of these projects. Nevertheless, State leadership has not provided the focus necessary to support the officials responsible for implementing all these reform projects. When a new Secretary of State took charge in March 2018, he transferred responsibility for implementing the reform efforts from dedicated teams led by senior department leadership to bureaus and offices. In addition, key political appointee positions remained filled by officials in an acting capacity until only recently. These transitions at State have had an effect on its reform efforts. Without explicit direction from senior leadership, some implementing officials involved in the reform efforts remain unclear about whether their projects are an agency priority. Further, for the reform efforts that remain an agency priority, a dedicated team to oversee implementation could help accelerate State s efforts to improve the efficiency and effectiveness of its operations. <5. Recommendations for Executive Action> We are making the following two recommendations to State: The Secretary of State should determine which of the unimplemented reform projects included in its fiscal year 2019 Congressional Budget Justification, if any, should be implemented and communicate this determination to Congress and appropriate State personnel. (Recommendation 1) The Secretary of State should establish a single dedicated team to manage the implementation of all reform efforts that the Secretary decides to pursue. (Recommendation 2) <6. Agency Comments> We provided a draft of this report to State, USAID, and OMB for review and comment. We received comments from State and USAID, which are reprinted in appendixes II and III, respectively. In response to our recommendation that State determine which reform projects should be implemented and communicate that information to Congress and appropriate State personnel, State indicated that it concurred but suggested it should inform OMB instead of Congress. While we agree that it is important for State to share information regarding its reform efforts with OMB, we remain concerned about State s lack of communication with Congress regarding the status of the projects State initially reported in its fiscal year 2019 Congressional Budget Justification. Congress is a key stakeholder in State s reform efforts and should be informed of changes in State s priorities and the status of these projects to help ensure successful implementation. In response to our recommendation that State establish a dedicated team to manage the implementation of all reform projects, State suggested that leadership of its reform projects should be decided on a case-by-case basis with the latitude to determine whether projects will be assigned to a higher level or within individual bureaus. 
We stand by our recommendation that State should establish a single dedicated team to manage the implementation of all its reform efforts. This is a key practice for implementing agency reforms identified in previous GAO reports, as well as in State s Foreign Affairs Manual (1 FAM 014.2), which calls for State to dedicate an implementation team to manage the transformation process for major reorganizations of bureaus or offices. Because reform efforts can span several years, dedicating a strong and stable team is important to ensure that the transformation receives the needed attention to be sustained and successful. In its comments, USAID expressed several concerns about the leadership of State s reform efforts and State s coordination with USAID. OMB did not provide written comments on the report. We also received technical comments from State and USAID, which we incorporated throughout our report as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of State, the Administrator of USAID, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-6881 or BairJ@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology We prepared this report under the authority of the Comptroller General to conduct work to assist Congress with its oversight responsibilities. This report examines (1) the status of the reform efforts that the Department of State (State) reported to Congress in its fiscal year 2019 Congressional Budget Justification and (2) the extent to which State addressed key practices we previously identified as critical to the successful implementation of agency reform efforts. For the purposes of this review, we use the term reform efforts to refer to all reform-related projects, proposals, plans, activities, and documents related to the 16 projects identified in State s fiscal year 2019 Congressional Budget Justification. The term projects refers specifically to the 16 reform projects identified in State s fiscal year 2019 Congressional Budget Justification. State subsequently split one of these 16 projects into two separate projects; thus, we refer to 17 reform projects throughout the report. For both objectives, we reviewed State s reform plans, proposals, and related documents. We also interviewed four senior officials generally at or above the assistant secretary level that had responsibility for the reform efforts as a whole, as well as all implementing officials responsible for each of the continuing reform projects. To determine the status of State s reform efforts, we reviewed documents and reports related to each of the reform projects described in State s fiscal year 2019 Congressional Budget Justification. To determine the extent to which State addressed key practices for implementing agency reforms, we assessed State s reform efforts against key questions identified in the implementation category of our June 2018 report. Specifically, we assessed State s implementation efforts against key questions from the two implementation-related subcategories of our 2018 report: (1) Leadership Focus and Attention and (2) Managing and Monitoring. 
We considered the nature of each of State s reform projects and the efforts taken to implement them, reviewed project-specific reports and other relevant State documents, interviewed State officials responsible for implementing each project, and then made qualitative determinations about the extent to which State s overall reform efforts addressed these criteria. A second analyst then independently reviewed and validated each determination. Subsequently, other GAO staff reviewed and concurred with these determinations. We only applied criteria from our June 2018 report that we determined were relevant to the scope of our review, which was limited to the implementation phase of State s reform efforts from April 2018 to the present to avoid duplicating the reviews of earlier phases of State s reform efforts conducted by State s and the U.S. Agency for International Development s Offices of Inspector General (OIG). Because State s OIG was also reviewing State s reform efforts, we coordinated regularly with State s OIG to avoid duplication. We did not consider criteria from the first two categories of our June 2018 report (1) Goals and Outcomes and (2) Process for Developing Reforms because these applied to the initial phases of State s reform efforts, which were outside the scope of our work and central to the broader historical review that State s OIG was conducting at the time of our review. We also did not apply criteria from the final category of our June 2018 report Strategically Managing the Federal Workforce to avoid duplicating work State s OIG recently conducted on State s workforce management. For the two sub- categories that we selected, we considered the key questions in the report in light of their relevance to State reforms efforts, and also employed other relevant criteria, where appropriate, most notably criteria for leadership from federal internal control standards. We conducted this performance audit from October 2018 to August 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of State Appendix III: Comments from the U.S. Agency for International Development Appendix IV: GAO Contact and Staff Acknowledgments <7. GAO Contact> Jason Bair, (202) 512-6881, or bairj@gao.gov. <8. Staff Acknowledgments> In addition to the contact named above, Thomas Costa (Assistant Director), Joshua Akery (Analyst in Charge), Peter Beck, David Dayton, Martin de Alteriis, Emily Gupta, Patrick Hickey, Chris Keblitis, Sarah Veale, and Alex Welsh made key contributions to this report. | Why GAO Did This Study
In 2017, State initiated a series of reform efforts in response to an executive order by the President and guidance issued by the Office of Management and Budget aimed at reorganizing and streamlining the government. GAO's prior work has shown that successful agency reform efforts follow key implementation practices, such as establishing a dedicated team to manage the implementation of reforms, and ensuring transparency by setting public goals and milestones to monitor progress.
This report examines (1) the status of the reform efforts that State reported to Congress in February 2018 and (2) the extent to which State addressed key practices critical to the successful implementation of agency reform efforts. GAO reviewed State's reform plans, proposals, and related documents; met with officials involved in State's reform efforts; and assessed implementation of the reform efforts against relevant key practices identified in GAO's prior work.
What GAO Found
The Department of State (State) is implementing most of the 17 reform projects it reported to Congress in February 2018, but a few are stalled or discontinued. State completed one project streamlining policy formulation, and continues working to implement 13 projects on topics including human resources, information technology, and data analytics. Progress on two projects related to overseas presence has stalled, and State has discontinued a project to consolidate real property management.
State has not addressed certain key practices related to leadership focus and attention in implementing its reform efforts. Multiple transitions in State's leadership and changing priorities contributed to uncertainty about leadership support for reform projects. Top leadership is expected to drive any needed transformation by clarifying priorities and communicating direction to employees and stakeholders.
In March 2018, the President replaced the Secretary of State, a transition that created uncertainty within the agency regarding the future of ongoing reform projects. While some officials stated that the new Secretary had expressed support for data analytics and cyber security reform efforts, other officials said they were unclear as to whether their projects remained a priority. According to senior officials, the current Secretary has focused on critical needs, such as ending the hiring freeze and increasing recruitment, and on launching new initiatives.
In April 2018, State disbanded the dedicated teams overseeing its reform efforts and shifted responsibility to bureaus and offices. In some cases, officials assigned to lead reform projects reported receiving little or no direction from department leadership. GAO's prior work has highlighted the benefits of having a dedicated team to manage agency transformations.
In addition, State officials indicated that the challenges posed by these transitions were compounded by a lack of Senate-confirmed leadership in key positions. Specifically, during the first 2 years of State's reform efforts, bureaus and offices responsible for implementing 12 of State's 13 continuing reform projects reported directly to one or more officials serving in an acting capacity. For example, State did not have a Senate-confirmed Under Secretary for Management from January 2017 to May 2019, which, according to senior officials, hindered State's reform efforts.
According to State officials, taken together these leadership transitions led to several projects being scaled back, slowed down, or both.
Although uncertainties exist about leadership priorities regarding the reform efforts, the bureaus and offices responsible for implementing reform projects have taken steps to manage and monitor them, consistent with key practices. Each of the continuing projects has implementation plans that include milestones and deliverables, and some report their progress publicly. For example, State reports on the progress of some projects in its annual performance plans and reports. The lack of a dedicated team to manage the reform process, however, could slow State's overall efforts.
What GAO Recommends
The Secretary of State should (1) determine which unimplemented reform projects, if any, should be implemented and communicate this determination to Congress and appropriate State personnel, and (2) establish a single dedicated team to manage the implementation of all reform efforts that the Secretary decides to pursue. State generally concurred with the recommendations. |
<1. Background> <1.1. Microelectronics Production at Sandia> The MESA Complex at Sandia comprises multiple production facilities and buildings, which total approximately 400,000 square feet (see fig. 1). In particular, the SiFab Facility, completed in 1988, is the primary production facility for microelectronics integrated into nuclear weapons. The SiFab Facility produces application-specific integrated circuits (ASIC) that are custom-designed to control certain nuclear weapon arming, fuzing, and firing functions. The MESA Complex also includes other buildings, such as the Micro Fabrication Facility, which was completed in 2006 and produces strategic radiation-hardened devices for manipulating electronic signals and electrical power. The physical layouts of these two production facilities center around a series of clean rooms that are designed to maintain an extremely low level of dust and other particulates, which can harm microelectronic functionality. The two facilities contain about 375 pieces of specialized production equipment, some of which cost millions of dollars, and have acid exhaust and liquid waste management systems for handling the byproducts of the production processes. The SiFab Facility produces all of the strategic radiation-hardened ASICs currently used in nuclear weapons. ASICs are produced on wafers (thin slices of semiconductor material, such as silicon) using what is referred to as a complementary metal-oxide semiconductor (CMOS) process technology. The production of ASICs requires hundreds of processing steps, which are completed over multiple weeks. For example, according to Sandia documentation, the production of a specific type of ASIC requires over 600 processing steps over an approximately 26-week period. Microelectronics are produced with characteristic dimensions (or "feature sizes") measured in nanometers (nm), or one-billionth of one meter. The process technology, together with an associated feature size, is known as a technology node. In general, smaller nodes represent more advanced technologies. The SiFab Facility produces microelectronics at the 350 nm node, and NNSA and Sandia refer to the CMOS production process technology at the 350 nm node as CMOS7. Currently, state-of-the-art microelectronics are produced at the 32 nm node or below. For example, the Intel Corporation produces commercial microelectronics at the 14 nm node for use in personal computers and servers. However, such smaller nodes are more challenging to produce and have not been proven to perform at the strategic radiation-hardened level, according to Sandia contractor representatives. Figure 2 shows commercially produced microelectronics on a wafer (left photo) and diced into individual microelectronics parts next to a U.S. dime (right photo). <1.2. Ongoing and Planned Weapon Modernization Programs and Other Modernization Plans Requiring Microelectronics> As shown in table 1, NNSA is undertaking multiple LEPs and weapon modernization efforts, in which Sandia is participating. In addition, the 2018 Nuclear Posture Review calls for NNSA to consider additional weapon programs: specifically, a program to develop a modern nuclear-armed sea-launched cruise missile, and another to develop a new submarine-launched ballistic missile warhead (now referred to as the W93).
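To put the production-cycle and feature-size figures above in perspective, the short sketch below works through the arithmetic. It is an illustrative calculation based only on the rounded figures cited in this section, not on any Sandia process data.

```python
# Illustrative arithmetic only, using the rounded figures cited above.
steps = 600          # processing steps for one ASIC type (reported as "over 600")
weeks = 26           # approximate production period, in weeks

print(f"Average pace: about {steps / weeks:.0f} processing steps per week")

sifab_node_nm = 350  # SiFab Facility (CMOS7) technology node
intel_node_nm = 14   # example commercial state-of-the-art node

print(f"Feature-size ratio: SiFab features are roughly {sifab_node_nm / intel_node_nm:.0f} times "
      f"larger than those at a 14 nm commercial node")
```

These figures work out to roughly 23 processing steps per week, and a feature size about 25 times larger than the cited commercial 14 nm node.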
To develop and produce microelectronics for these efforts, Sandia must (1) conduct research and development activities, (2) finalize the design of microelectronics to meet military requirements specific to the weapon program into which the microelectronics will be integrated, and (3) produce the microelectronics. Sandia must conduct all of these activities years before NNSA delivers a weapon program's first production unit to DOD. According to Sandia documents and contractor representatives, microelectronics research and development efforts generally begin 10 to 15 years before a weapon program's first production unit date, while microelectronics production generally begins 3 to 5 years before a first production unit date. DOD is also undertaking modernization efforts related to nuclear weapon delivery platforms, and Sandia is producing microelectronics to support those efforts. Specifically, DOD is responsible for designing and producing the arming and fuzing components on delivery platforms for certain types of nuclear weapons, and Sandia produces some of these components for DOD at the MESA Complex. For example, according to Air Force and Sandia documentation, the Air Force contracted with Sandia to design and produce microelectronics for its Intercontinental Ballistic Missile Fuze Modernization, which will provide a new fuze for use on both the current Minuteman III missile and its replacement, the Ground Based Strategic Deterrent missile. <1.3. DOE and NNSA Management Approaches for Projects and Programs> DOE and NNSA distinguish between projects and programs, and the agencies use different management approaches for each: Projects. DOE's project management order governs NNSA's management of capital asset acquisition projects with a total cost greater than $50 million. The order states that capital asset projects have a defined start and end point. Capital assets include land, structures, equipment, and intellectual property that are used by the federal government and have an estimated useful life of 2 years or more. The order's goal includes delivering projects within their original performance baselines (on time and within budget) and fully capable of meeting mission performance and other requirements, such as environmental, safety, and health standards. Programs. As we reported in 2018, DOE has not established a program management policy. However, NNSA issued its own program management policy in February 2019. The policy applies to all NNSA elements and requires them to establish additional program management requirements for respective NNSA programs based on needs, risk, complexity, and stakeholder involvement, among other things. The NNSA policy defines a program in part as an organized set of activities directed toward a common purpose or goal, undertaken or proposed in support of an assigned mission area. In addition, some NNSA offices have issued their own program management directives that are more specific than the NNSA policy. For example, NNSA's Office of Defense Programs, which is responsible for, among other things, weapon modernization programs (including LEPs) and associated materials and components such as microelectronics, issued a program management directive in June 2019 that establishes requirements and processes for managing the office's programs. This directive establishes four program management categories and execution requirements for these categories.
These management categories are risk-based and apply different execution requirements commensurate with program risk. <1.4. Fiscal Year 2020 Funding for Microelectronics Activities at Sandia> The MESA Complex's estimated fiscal year 2020 budget is $283 million, according to Sandia documentation. As shown in figure 3, this funding comes from a variety of sources, because Sandia uses the MESA Complex to meet both NNSA's and DOD's nuclear weapon production missions as well as for research and development for those and other federal entities through strategic partnership programs. Sandia documentation states that a portion of the MESA Complex's budget is obtained from other, non-NNSA federal entities that pay Sandia directly to produce microelectronics for, among other things, research and development purposes, and this amount of funding fluctuates annually. According to Sandia contractor representatives, the laboratory presents MESA's budget as an estimate for this reason. Specific funding sources are discussed in greater detail below: NNSA provides about 60 percent (or $168 million) of the MESA Complex's total estimated budget for fiscal year 2020. Two NNSA offices account for most of the agency's funding: The Office of Defense Programs accounts for 42 percent (or about $71 million) and is responsible for ensuring the United States maintains a safe, secure, and reliable nuclear stockpile through the application of science, technology, engineering, and manufacturing activities. This funding comes from multiple sub-offices. For example, the Office of Research, Development, Test, and Evaluation provides funding for microelectronics research and development; the Office of Production Modernization provides funding for, among other things, refurbishing microelectronics processing capabilities; and the Office of Stockpile Management provides funding for microelectronics production, according to an NNSA official and NNSA documentation. The Office of Safety, Infrastructure and Operations accounts for 46 percent (or about $78 million), and this office is responsible for ensuring existing facilities are safely operated, effectively managed, and maintained to meet mission needs. DOE's Strategic Partnership Programs account for about 13 percent (or $36 million) of the MESA Complex's fiscal year 2020 budget. These programs include research and development projects sponsored by the Air Force and the Defense Advanced Research Projects Agency. DOE's Laboratory Directed Research and Development work accounts for about 10 percent (or $28 million) of the MESA Complex's fiscal year 2020 budget. Each of DOE's 16 contractor-operated laboratories, including Sandia, may direct a portion of the funding they receive from DOE to scientists who conduct independent research. The statutory limit on this laboratory-directed research and development work is between five and seven percent of funds provided by DOE to the laboratories for national security activities. DOD provides about 6 percent (or $17 million) of the MESA Complex's fiscal year 2020 budget through Strategic Partnership Programs. According to Sandia documentation, this funding comes directly from the Air Force and Navy to support the production of microelectronics that are integrated into nuclear weapon delivery platforms. Other sources account for about 12 percent (or $34 million) of the MESA Complex's fiscal year 2020 budget.
Among other things, this funding comes from indirect rates applied to all Sandia programs to support the MESA Complex s management and operations. <2. NNSA Completed Actions over the Past Decade to Sustain Its Microelectronics Capability at Sandia and Identified but Did Not Pursue Alternatives for a New Future Capability> Over the past decade, NNSA completed several actions to sustain its existing strategic radiation-hardened microelectronics facilities at Sandia through 2025 while simultaneously identifying future alternatives for its microelectronics capability beyond 2025. In particular, during fiscal years 2012 through 2019, NNSA engaged in a $150 million effort at Sandia to sustain operations at the SiFab Facility through 2025. NNSA pursued this effort in response to a 2010 study conducted by Sandia that identified the need for millions of dollars in funding to sustain the SiFab Facility through 2025. NNSA s sustainment efforts focused on the following two areas: Infrastructure. NNSA spent about $27 million to complete approximately 25 infrastructure projects that support microelectronics production. For example, NNSA installed two new 20,000-gallon tanks for water storage to improve the facility s deionized water system, which provides ultra-high purity water for use in certain processing steps. NNSA also replaced a portion of the facility s acid exhaust system. Equipment. NNSA spent about $123 million on production equipment for two main purposes: (1) to replace aging equipment that Sandia classified as being at high risk of failure; and (2) to refurbish existing equipment and procure equipment that will be used to produce microelectronics once Sandia completes its ongoing effort to convert the production process from using 6-inch silicon wafers to 8-inch wafers. Prior to these equipment investments, the SiFab Facility relied on aging equipment to perform certain processing steps using a manual process. In fiscal year 2018, Sandia refurbished existing equipment and purchased new equipment that is more automated and is intended to increase process reliability. In addition, according to Sandia documentation, Sandia needed to convert its production process to use 8-inch silicon wafers because the commercial sector had increasingly limited maintenance support and service for equipment that processed 6-inch wafers. While NNSA was working with Sandia to sustain the SiFab Facility through 2025, the agency also began identifying and evaluating options for producing microelectronics after 2025, such as constructing a new multibillion-dollar production facility at Sandia. However, because of changes to key assumptions, NNSA decided in November 2018 not to pursue any of the identified alternatives and instead stated that the agency was going to assess options to sustain its current capability at Sandia beyond 2025. See figure 4 for a summary of NNSA s actions to sustain the SiFab facility and consider alternatives. More specifically, NNSA took the following actions during the past decade to identify alternatives for producing microelectronics beyond 2025: In 2011, NNSA s Deputy Administrator for Defense Programs requested proposals from the agency s three nuclear weapons laboratories for flagship experimental science, technology, and engineering facilities to help ensure that NNSA will have the capabilities to address future national security needs. 
In response, Sandia submitted a proposal to NNSA in 2012 to construct a new, multibillion-dollar microelectronics production facility, called the Center for Heterogeneous Integration, Packaging, and Processes (CHIP2). The Sandia proposal estimated that CHIP2 would take 14 years to design and build at an estimated cost of $2.5 billion. The proposal indicated that the facility would increase microelectronics functionality and trustworthiness by creating a trusted supply chain into the future for design, fabrication, testing, and packaging activities. As a result of the time needed to design and construct CHIP2, investment would still be needed to sustain the MESA SiFab Facility through 2025. NNSA commissioned two studies by The Aerospace Corporation, a federally funded research and development center sponsored by the Air Force, to help the agency evaluate Sandia s CHIP2 proposal against other potential alternatives, such as contracting with commercial entities to produce microelectronics. These studies, completed in August and September 2014, generally ranked the CHIP2 proposal at or near the top of the alternatives but also stated that CHIP2 did not stand out as a decidedly better option. Nonetheless, in early 2015, NNSA s Deputy Administrator for Defense Programs issued a memorandum recommending that NNSA pursue the CHIP2 proposal as a formal capital asset project, subject to DOE s project management order on acquisition of capital assets. In 2016, in accordance with DOE s project management order, NNSA developed two key documents during the initiation phase of its capital asset project supporting the CHIP2 proposal, which NNSA referred to as the Trusted Microelectronics Capability (TMC) project. NNSA first developed a mission need statement, which is a formal document that identifies a credible performance gap between current capabilities and those needed to achieve the goals stated in the agency s strategic plan. The mission need should be stated in a way that is solution-neutral. The project s mission need statement stated that, among other things, after 2025 the SiFab Facility faced a severe risk of equipment and facility failures that could have detrimental impacts on future microelectronics production schedules. The statement noted that continued refurbishment of the SiFab Facility beyond 2025 could result in significant downtime during critical weapon development and production cycles, as the facility was constructed in the 1980s and was not sized for modern microelectronics production equipment and supporting infrastructure. NNSA next developed a requirements document, which describes the ultimate goals the project must satisfy while also identifying key assumptions and constraints. The requirements document identified several key requirements, including that the TMC project must be able to provide NNSA with trusted access to produce microelectronics in support of the agency s nuclear weapons mission. Between 2016 and 2017, in accordance with DOE s project management order, NNSA conducted an analysis of alternatives for the TMC project based on achieving NNSA s mission need statement. Such an analysis identifies, analyzes, and selects a preferred alternative to best meet the mission need by comparing the operational effectiveness, costs, and risks of potential alternatives, according to DOE documentation. 
During this process, NNSA considered 21 alternatives for meeting the mission need statement, among them the CHIP2 proposal as well as several alternatives that included partnerships with commercial industry and other government production facilities. The final TMC analysis of alternatives report, dated January 2018, did not identify the CHIP2 proposal as a preferred alternative because of the proposal s high life-cycle costs, high total project cost, and long project schedule. Instead, the report identified two preferred alternatives as best meeting NNSA s needs: (1) partnering with an existing, government-owned, contractor- operated production facility other than Sandia; and (2) entering into an interagency agreement with DOD and at least one member of the intelligence community, as well as a commercial entity, to design, build, and operate a state-of-the-art production facility. Ultimately, NNSA decided not to pursue either preferred alternative because of changing assumptions. For example, one of NNSA s key assumptions for the TMC analysis of alternatives was that the SiFab Facility could not remain operational beyond 2025. However, NNSA tasked The Aerospace Corporation to validate this assumption, and in January 2018, The Aerospace Corporation completed a study concluding that the SiFab Facility could remain viable until 2040 with prioritized and well-planned infrastructure repairs and equipment replacements. Another example of changing assumptions concerned the preferred alternative under which NNSA would enter into an interagency agreement with DOD and at least one member of the intelligence community to design, build, and operate a state-of-the-art production facility. This preferred alternative assumed that DOD, the intelligence community, or both, would pay to develop and build the production facility (estimated to cost from $350 million up to $1.2 billion), while NNSA would pay to equip its portion of the production process. The TMC analysis of alternatives report stated that commitment from DOD and the intelligence community would be vital, and that this alternative carried significant execution risks. In January 2018, NNSA documentation stated that this interagency alternative was no longer viable because other agencies stated they were no longer interested in a potential partnership. Partly as a result of these changes in key assumptions, in November 2018, NNSA wrote in a letter to Congress that it was no longer requesting funding for the TMC and was assessing what investments were needed to extend the operational life of the SiFab Facility to 2040. <3. NNSA Has Decided to Upgrade and Sustain Its Microelectronics Capability at Sandia through 2040, but Its Management Approach Does Not Fully Incorporate Key Controls> As part of NNSA s ongoing approach to managing its strategic radiation- hardened microelectronics activities, the agency plans to upgrade and sustain its microelectronics capability at Sandia through 2040, which it estimates will cost about $1 billion over the next 20 years. NNSA is also in the preliminary stages of identifying and evaluating options for a microelectronics capability beyond 2040. In addition, NNSA is starting to implement a revised management approach, including appointing a coordinator to guide certain aspects of its microelectronics activities. However, NNSA s approach does not fully incorporate key management controls, such as developing an overarching management plan, which the agency has applied to other important activities. <3.1. 
NNSA Plans to Upgrade and Sustain Its Microelectronics Capability at Sandia through 2040 and Is Beginning to Identify Options for a Capability Beyond 2040> In 2019, NNSA made three key decisions related to upgrading and sustaining its microelectronics capability at Sandia through 2040. First, NNSA approved plans to further upgrade its process for producing microelectronics. This upgraded process, called CMOS8, contains some features of the currently employed CMOS7 process, but is a more advanced technology node that also includes many new features, according to Sandia documentation. Second, NNSA approved plans to produce and integrate into future nuclear weapons a more advanced type of microelectronics component called a field programmable gate array (FPGA). According to Sandia documentation, strategic radiation- hardened FPGAs can be produced using the CMOS8 process but not the CMOS7 process. Third, Sandia developed and NNSA approved a plan to identify, prioritize, and provide budget estimates to sustain Sandia s microelectronics infrastructure and equipment at the MESA Complex over the next 20 years. This plan incorporates NNSA s decisions to develop the CMOS8 process and produce FPGAs. According to NNSA and Sandia documents, the rationale behind and expected benefits of these three key decisions are as follows: The CMOS8 process will allow Sandia to produce microelectronics at a smaller, more advanced technology node (180nm) compared with the current CMOS7 technology node (350nm). NNSA documentation states that, among other things, the CMOS8 process is expected to produce microelectronics that have twice the processing speed compared with those produced using the CMOS7 process. Such advances are needed to help ensure that future nuclear weapons remain safe, secure, and reliable while operating in increasingly hostile threat environments and that the weapons meet increased performance requirements, according to Sandia documentation. According to NNSA officials, the agency agreed with Sandia s assessment on implementing the CMOS8 production process based, in part, on findings and recommendations contained in an independent study commissioned by NNSA and completed by multiple entities including The Aerospace Corporation. According to Sandia documentation, while FPGAs have never been used before in a nuclear weapon, they may significantly reduce the cycle time for microelectronics research, development, and production compared with cycle times for ASICs used in nuclear weapons. This reduction may be possible because the ASICs currently used in nuclear weapons are uniquely designed and produced to carry out specific functions, whereas FPGAs can be produced using a common design and then programmed after production (but before insertion into a nuclear weapon) to carry out different functions, according to NNSA officials. Reduced cycle time from FPGAs could alleviate schedule pressure on future weapon modernization programs because cycle times for designing and producing ASICs for LEPs have historically been about 10 years before production of the first weapon, according to Sandia documentation. Sandia s plan will provide NNSA with the basis for the investment profile needed to sustain the MESA Complex s infrastructure and equipment through 2040. 
Because the sustainment effort will last at least 20 years, NNSA officials said that having a long-term planning document that provides a current baseline for the condition of Sandia's microelectronics infrastructure and equipment, identifies challenges, and recommends specific sustainment activities will be a useful management tool. The plan for extending the life of the MESA Complex at Sandia provides cost and schedule estimates related to sustainment of existing facilities and equipment, as well as installation of new equipment for CMOS8 and development and maturation of the FPGA technology. Overall, the plan calls for spending about $1 billion over the next 20 years. Specifically, the plan identifies spending for the following activities: Sustainment of existing facilities and equipment. The plan identifies about $900 million in spending from fiscal years 2020 through 2040, or about $45 million a year for the next 20 years, to complete identified infrastructure and equipment projects. The plan calls for spending roughly half of the $900 million on projects to upgrade existing infrastructure within the MESA Complex. In particular, Sandia plans to spend about $120 million from fiscal years 2020 through 2024 on projects to improve or upgrade infrastructure within the SiFab Facility that is considered to be in poor condition based on information contained in NNSA's infrastructure condition database. The SiFab Facility is to be the physical location for the majority of production tools for CMOS8. Two of these projects would replace electrical power and distribution equipment at an estimated cost of about $50 million, while another project would replace the facility's chemical distribution system at an estimated cost of about $5 million. Sandia plans to spend the other half of the $900 million on equipment-related projects. For example, Sandia plans to spend about $85 million from fiscal years 2021 through 2026 on projects to support existing, non-CMOS8 production processes (such as producing transistors in the Micro Fabrication Facility) as well as activities that support microelectronics production, such as laboratory analysis, testing, and packaging. For example, Sandia plans to spend $1.5 million on a computerized tomography machine to support microelectronics testing. Development of CMOS8 and production of FPGAs. The MESA Complex extended life plan identifies about $170 million in spending from fiscal years 2020 through 2027 related to developing, maturing, installing, and implementing the CMOS8 process and the FPGA technology. Sandia contractor representatives told us that the CMOS8 process relies on newer and more advanced equipment to complete critical individual processing steps compared with the current CMOS7 process. As a result, the plan identifies about $70 million (out of the $170 million total) to acquire approximately 30 pieces of equipment, which Sandia will need to install and then qualify for performance. In addition, the plan identifies almost $90 million (out of the $170 million total) for developing and maturing the CMOS8 production process and the FPGA technology. According to Sandia documentation, Sandia plans to begin using the CMOS8 process to produce FPGAs for integration into a future nuclear weapon program at the end of fiscal year 2027.
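As a rough cross-check of the rounded figures in the extended life plan described above, the short sketch below adds up the reported spending categories. It is illustrative arithmetic based only on the approximate amounts cited in this section, not on the plan's underlying cost data.

```python
# Illustrative arithmetic only, using the rounded amounts reported above (in millions of dollars).
sustainment_m = 900   # facilities and equipment sustainment, fiscal years 2020 through 2040
cmos8_fpga_m = 170    # CMOS8 process and FPGA development, equipment, and maturation
years = 20

total_m = sustainment_m + cmos8_fpga_m
print(f"Planned spending: about ${total_m} million, consistent with 'about $1 billion over 20 years'")
print(f"Average sustainment spending: about ${sustainment_m // years} million per year")

# The CMOS8/FPGA amount is itself reported as roughly $70 million for equipment
# plus almost $90 million for process and technology maturation.
print(f"CMOS8/FPGA components: about ${70 + 90} million of the roughly $170 million total")
```

The sum of the two categories is about $1.07 billion, matching the plan's "about $1 billion" figure, and $900 million spread over 20 years works out to the $45 million per year cited above.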
In addition to upgrading and sustaining Sandia's microelectronics capabilities through 2040, NNSA is in the preliminary stages of identifying and evaluating options to ensure a continued microelectronics capability beyond 2040, according to NNSA officials and documentation. In particular, NNSA has identified the following two key options: NNSA is in the initial stages of identifying and evaluating options to construct a new facility for producing microelectronics by 2040 and beyond. In December 2019, NNSA officials provided us with documentation stating that the agency plans to begin evaluating options for a new microelectronics facility in 2021, with the goal of completing construction in 2030, installing needed equipment in the completed facility by 2033, and qualifying the production process and beginning to produce microelectronics for integration into nuclear weapons no later than 2035. In NNSA's fiscal year 2021 budget request, which was released in February 2020, the agency requested funds to begin evaluation and early planning activities for this new microelectronics facility. NNSA is also evaluating whether the agency might be able to leverage a recent investment by DOD in a U.S. commercial microelectronics production facility to help meet NNSA's microelectronics production needs after 2040. Specifically, DOD announced in October 2019 that it had awarded a contract to a U.S.-owned-and-operated commercial microelectronics production facility to, among other things, enhance its radiation-hardened microelectronics production process to meet DOD's microelectronic needs for systems (such as satellites) that operate in environments with increased radiation levels. Over the next two years, the U.S. commercial microelectronics production facility plans to adapt its current production process and develop a new process that will produce microelectronics at a smaller node, according to DOD documentation. According to NNSA officials we interviewed in February 2020, NNSA and DOD are in preliminary discussions to determine if NNSA could make additional investments in this same facility to potentially produce strategic radiation-hardened microelectronics for integration into nuclear weapons. NNSA officials said that there was no firm timeframe for making an investment decision because such a decision would need to be made after the microelectronics facility begins producing microelectronics at the smaller node. <3.2. NNSA Is Starting to Implement a Revised Microelectronics Management Approach, but This Approach Does Not Fully Incorporate Key Management Controls> NNSA is starting to implement a revised approach to managing its microelectronics activities. During our initial interviews with NNSA officials in early 2019, they stated that NNSA had not established a formal management structure to oversee the agency's microelectronics activities. Instead, they said that NNSA had delegated primary responsibility for overseeing such activities to two officials within NNSA's Office of Defense Programs, who both served in multiple positions and had other duties within the office. According to these officials, once NNSA formally canceled the TMC project in November 2018, management efforts were focused on making initial determinations on the actions and budget estimates needed to sustain NNSA's existing microelectronics capability at Sandia until 2040.
These efforts included coordinating with multiple NNSA offices, such as the Office of Safety, Infrastructure and Operations, to understand their future microelectronics needs and requirements and to review draft MESA Complex sustainment documentation prepared by Sandia. However, officials from NNSA's Office of Defense Programs told us that in late 2019 they determined that a more coordinated management approach would better position NNSA to oversee microelectronics activities and make informed budgetary and programmatic decisions. Specifically, NNSA officials stated that in November 2019 the Office of Defense Programs created and filled a new full-time microelectronics coordinator position within a sub-office, the Office of Research, Development, Test, and Evaluation. The microelectronics coordinator told us that NNSA has not yet finalized an official position description for the coordinator role. However, the coordinator said that the position will primarily be responsible for developing the CMOS8 process and the FPGA technology and integrating the research and development activities of the Office of Research, Development, Test, and Evaluation with another sub-office, the Office of Production Modernization. In addition, officials from NNSA's Office of Defense Programs and Office of Safety, Infrastructure and Operations told us that they continue to use other existing processes to manage microelectronics activities at Sandia. For example, these officials said that they use the annual planning, programming, budgeting, and evaluation process, along with the annual work authorization process, to coordinate across NNSA offices on budgetary matters and work activities associated with microelectronics activities at Sandia. As part of these processes, agency officials told us that they issue annual implementation plans to direct the work of Sandia contractors related to microelectronics activities. NNSA officials then monitor the contractors' progress toward completing the identified scope of work and work activities. For example, NNSA officials said that they conduct monthly meetings with contractor representatives to review status and financial reports. They also said that they hold mid-year and end-of-year program reviews with contractor representatives. To help management achieve desired results, such as ensuring a continued microelectronics capability, federal agencies design, implement, and operate internal controls, which comprise the plans, methods, policies, and procedures used to fulfill an entity's mission, goals, and objectives. Federal standards for internal control state that management should, among other things: design control activities, such as by developing policies, procedures, techniques, and mechanisms that enforce management's directives, to achieve objectives and respond to risk; and establish an organizational structure, assign responsibility, and delegate authority to achieve the entity's objectives. NNSA has implemented internal controls at the agency level, in part, by developing and implementing directives that provide an organizational structure for the agency to plan, execute, control, and assess its programs and projects while also assigning responsibility and delegating authority for key management roles. For example, one purpose of NNSA's 2019 program management directives is to increase management efficiency and effectiveness by, among other things, clearly defining management responsibilities and authorities.
In addition, DOE's project management order for the acquisition of capital assets lists principles for successful project execution such as disciplined, up-front planning; line management accountability; and effective implementation of all management systems (such as risk and performance management) supporting the project. In particular, and as applicable to front-end planning, NNSA's and DOE's directives related to program and project management both include the following controls: Appointment of a federal manager, who is vested with the authority to carry out assigned responsibilities to meet program or project milestones on schedule and on budget, who manages the coordination of deliverables between the multiple entities (such as different program offices) involved, and who is responsible and accountable for planning, implementing, and executing a program or project, which includes responsibility for developing an overarching management plan; An overarching management plan, which establishes the procedures to define, execute, and monitor a program or project, as well as establishing specific requirements in a variety of areas, such as cost estimating, an integrated schedule, performance management, and risk management, to use to develop a baseline and against which to measure and monitor; A mission need statement, which identifies a credible gap between current capabilities and those needed to achieve the goals stated in the strategic plan; and A requirements document that describes the ultimate goals the program or project must satisfy while also identifying key assumptions and constraints. However, while some in NNSA and at Sandia have recognized the need to coordinate microelectronics activities to effectively carry them out and meet specific goals by specific dates, as evidenced by the hiring of a coordinator, Office of Defense Programs leadership has not fully developed controls to better manage and coordinate the agency's microelectronics activities. Specifically, NNSA does not have or has not fully developed the following: Federal manager with coordination or oversight authority. NNSA has not established a federal management position with the authority and accountability to better coordinate or oversee NNSA's microelectronics activities. Instead, as described above, agency officials told us that NNSA's Office of Defense Programs established a coordinator position within a sub-office, the Office of Research, Development, Test, and Evaluation, in November 2019 to help guide the agency's efforts to develop the CMOS8 process and the FPGA technology, among other things. Moreover, in May 2020, NNSA stated that senior leadership within the Office of Defense Programs has not endorsed the formal role of a microelectronics coordinator and that the coordinator's role and responsibilities are currently under review. NNSA also stated that the coordinator has not been given authority to manage an annual budget for microelectronics activities and that it was unlikely that such authority would be granted. This statement stands in contrast to earlier statements made to us that the coordinator would have responsibility for an annual budget of about $50 million, subject to future appropriations. Management plan. NNSA has not developed an overarching management plan to guide and coordinate the agency's microelectronics activities.
Instead, NNSA officials from the Office of Defense Programs and the Office of Safety, Infrastructure and Operations told us that the agency is in the very early stages of developing a NNSA plan that will incorporate key decisions and approaches outlined in the Sandia s 20-year MESA sustainment plan, among other things. While NNSA officials are still evaluating the specific contents of this plan, they said that the plan may outline specific roles and responsibilities for each NNSA office involved in microelectronics, describe how these offices will interact with the microelectronics coordinator, and provide options for future microelectronics technology development efforts. However, it is unclear whether the document will define the planning approach, procedures, and processes that NNSA will use to ensure coordinated management in multiple areas and across multiple offices, such as developing cost estimates, an integrated schedule, and performance metrics. Agency officials said that this plan, when finalized, will provide a useful tool for coordinating various aspects of NNSA s microelectronics activities, but they did not provide an estimated date for when the plan will be completed. Mission need statement and requirements document. NNSA has not developed a current mission need statement or a current program requirement document. In 2016, as required by DOE s project management order on the acquisition of capital assets, NNSA issued a formal mission need statement and a requirements document to guide its assessment of the cancelled TMC project (as described earlier in this report). However, agency officials told us that these 2016 documents are no longer applicable to NNSA s current approach to sustaining its microelectronics capability and evaluating options to ensure a continued capability after 2040. NNSA officials said that they intend to establish an updated set of requirements to guide the agency s future microelectronics capability, and that they will consider these requirements in establishing a future mission need statement. However, NNSA officials did not provide a timeframe for finalizing these documents. NNSA officials acknowledged the importance of using management controls and that the controls described above would be useful, but they could not identify any specific DOE or NNSA directives, government-wide guidance, or best practices that they follow to manage their microelectronics activities. Instead, they offered three reasons why the agency has not implemented a more coordinated and robust set of management controls to oversee the agency s microelectronics activities: Microelectronics production has historically been managed as a component production effort by an LEP, which is led by an NNSA program manager within the Office of Defense Programs who coordinates directly with other NNSA offices and Sandia contractors. Because NNSA has not designed microelectronics as a formal program, the requirements contained in the agency s program management directives are not binding on microelectronics activities. NNSA officials said that the multiple projects (identified in the MESA Complex extended life plan) to upgrade and sustain the microelectronics capabilities at Sandia through 2040 at an estimated cost of over $1 billion over 20 years will not be subject to DOE s project management order, as these projects are for sustainment and not for new facility construction. 
According to officials from NNSA s Office of Safety, Infrastructure, and Operations, infrastructure investments are being planned and managed as maintenance and repair efforts. NNSA officials told us that the agency s current efforts provide the necessary structure for NNSA to oversee and manage its microelectronics capability. However, NNSA has recognized the importance of implementing a more coordinated and robust set of management controls for other important activities within its nuclear security mission that similarly have not been treated in the past as specific programs. For example, as we reported in June 2019, while NNSA historically managed its high-explosive capability without a formal mechanism to coordinate activities across multiple programs, it recently implemented a more robust set of management controls to oversee its high-explosive activities. Specifically, in 2018 NNSA appointed an enterprise manager to help coordinate these activities. NNSA also encouraged the enterprise manager to adopt, where appropriate, the program management controls contained in an NNSA directive on managing nuclear weapon life extension and strategic materials programs. Subsequently, the enterprise manager issued a strategic plan that provided an organizational structure for the agency s high explosives capability. By taking a similar approach to its management of microelectronics activities and incorporating a more coordinated and robust set of management controls, the agency would have increased assurance that its planned microelectronics activities are clearly defined, efficiently executed, and effectively monitored. <4. Conclusions> NNSA s ability to produce unique microelectronics for nuclear weapons is essential to ensuring a credible U.S. nuclear deterrent. Producing such microelectronics is a complex task, and NNSA is limited in its ability to partner with the commercial sector for such production. Over the next two decades, NNSA will undertake an expensive and ambitious approach to upgrade and sustain its existing microelectronics production facilities and capabilities. Specifically, NNSA plans to spend about $1 billion over the next 20 years to, among other things, upgrade its process to produce a new type of microelectronic component that has never been integrated into a nuclear weapon. In addition, NNSA officials said that the agency will need to identify and analyze options for a continued capability after 2040, and that effort could begin as early as 2021. To increase its management and oversight of the agency s microelectronics activities, NNSA has taken some positive steps such as appointing a microelectronics coordinator within the Office of Defense Programs and approving certain long-term planning documents. However, in contrast to other NNSA activities, including programs and projects, NNSA has not fully developed a coordinated and robust set of management controls to oversee its microelectronics activities. For example, NNSA has not established an overarching management plan to manage and coordinate the cost, schedule, and risks associated with its microelectronics activities. By incorporating a more coordinated and robust set of management controls, NNSA would have increased assurance that its planned microelectronics activities are clearly defined, efficiently executed, and effectively monitored. <5. 
Recommendation for Executive Action> The NNSA Administrator should incorporate additional management controls to better oversee and coordinate NNSA's microelectronics activities. Such management controls could include investing the microelectronics coordinator with increased responsibility and authority, developing an overarching management plan, and developing a mission need statement and a microelectronics requirements document. (Recommendation 1) <6. Agency Comments and Our Evaluation> We provided a draft of this report to DOD and NNSA for review and comment. DOD did not provide any comments. In its written comments, reproduced in appendix I, NNSA neither agreed nor disagreed with our recommendation but provided three main comments. First, NNSA stated that by December 2020 the agency plans to complete a strategic management plan that will more clearly articulate the integration of management controls for the various components of its microelectronics activities. NNSA stated that it believes this action is consistent with our recommendation. We are encouraged by this planned action and will evaluate the completed strategic management plan to determine if it meets the intent of our recommendation. Second, NNSA stated that our report did not clearly convey the differences between the management of microelectronics and other weapons or materials programs and did not include all aspects of its microelectronics activities (such as the procurement of commercial off-the-shelf components) in our audit's scope. In response, we added references to the various aspects of NNSA's microelectronics activities and clarified that our report focuses on NNSA's strategic radiation-hardened microelectronics activities at Sandia's MESA Complex. As stated in the report, we focused on this specific aspect of NNSA's microelectronics mission because of the language in the Senate committee report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2019, which included a provision for us to review NNSA's efforts to recapitalize its strategic radiation-hardened microelectronics design and production capacity. We also focused on this specific aspect of NNSA's mission because the fiscal year 2020 Stockpile Stewardship and Management Plan lists the continued production of strategic radiation-hardened microelectronics as one of four key challenges to the agency's nuclear stockpile mission. Third, NNSA stated that our audit did not include an assessment of management controls for the range of activities that work together to ensure the effectiveness of microelectronics planning and execution. However, our report identifies and describes these management controls, and as part of our work we considered how these controls work together. In addition, and as stated above, NNSA intends to complete a strategic management plan to more clearly articulate the integration of its various microelectronics management controls, which is especially important as the agency invests about $1 billion over the next 20 years while simultaneously needing to meet microelectronics production deliverables for multiple nuclear weapon modernization programs. NNSA also provided technical comments, which we incorporated in our report as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, the Secretary of Defense, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-3841 or at bawdena@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Comments from the National Nuclear Security Administration Appendix II: GAO Contact and Staff Acknowledgments <7. GAO Contact and Staff Acknowledgments> Allison B. Bawden, (202) 512-3841 or bawdena@gao.gov. In addition to the contact named above, Jason Holliday (Assistant Director), Patrick Bernard (Analyst in Charge), and Alisa Carrigan made key contributions to this report. Also contributing to this report were Jonathan Felbinger, Juan Garay, Lisa Gardner, Cindy Gilbert, Cynthia Norris, and Dan C. Royer. Why GAO Did This Study
Microelectronics (see figure) form the basis of nearly all electronic products, including nuclear weapons. U.S. nuclear weapons use a unique supply of “strategic radiation-hardened” microelectronics that must function properly when exposed to high levels of radiation. NNSA's facilities at Sandia are the only source for these unique microelectronics, and the age of the facilities may pose significant risk to NNSA's capability after 2025.
A Senate committee report accompanying the National Defense Authorization Act for Fiscal Year 2019 included a provision for GAO to review NNSA's strategic radiation-hardened microelectronics activities. This report (1) describes NNSA's actions over the past decade to sustain existing facilities and identify future alternatives; and (2) examines NNSA's ongoing approach to managing its microelectronics activities and the extent to which this approach incorporates key management controls. GAO reviewed documents and interviewed officials and contractor representatives from NNSA and Sandia, toured Sandia's microelectronics facilities, and reviewed NNSA program and project management controls.
What GAO Found
Over the past decade, the Department of Energy's (DOE) National Nuclear Security Administration (NNSA) completed several actions to sustain the condition of its existing microelectronics facilities at Sandia National Laboratories (Sandia), which are NNSA's only source for producing strategic radiation-hardened microelectronics that can operate in environments with extreme exposure to radiation. In particular, during fiscal years 2012 through 2019, NNSA carried out a multiyear, $150-million effort at Sandia to replace or refurbish infrastructure and equipment in its primary microelectronics production facility to ensure continued operations through 2025. While NNSA was working with Sandia to sustain current facilities, the agency also began identifying and evaluating options for producing microelectronics after 2025, including constructing a new multi-billion dollar production facility at Sandia. However, because of changes to key assumptions, including the longer-term viability of existing facilities, NNSA decided in November 2018 not to pursue any of the identified alternatives and instead stated that the agency was going to assess options to sustain its current capability at Sandia.
NNSA's ongoing approach to managing its strategic radiation-hardened microelectronics activities includes two key efforts. First, the agency decided in October 2019 to invest about $1 billion over the next 20 years to upgrade and sustain its microelectronics capability at Sandia through 2040. Specifically, NNSA plans to upgrade its production process as well as complete identified infrastructure (such as electrical distribution) and equipment projects. Second, in November 2019 NNSA created and filled a new full-time microelectronics coordinator position that, among other things, will have responsibility for certain aspects of the agency's microelectronics activities, according to agency officials. However, NNSA's approach does not fully incorporate key management controls that NNSA applies to other important activities. For example, DOE and NNSA require their programs and projects to establish an overarching management plan that describes the procedures to define, execute, and monitor a program or project as well as establishing specific requirements in a variety of areas such as cost estimating and performance management. NNSA has not established a similar management plan to oversee and coordinate its microelectronics activities. By incorporating these key management controls, NNSA would have increased assurance that its planned microelectronics activities are clearly defined, efficiently executed, and effectively monitored.
What GAO Recommends
GAO recommends that NNSA incorporate additional management controls, such as developing an overarching management plan, to better oversee and coordinate its microelectronics activities. NNSA neither agreed nor disagreed with this recommendation.
<1. Background> GSA serves as the federal government's primary civilian real property agent. When GSA does not have available federally owned space that can meet the needs of federal agency tenants, it leases space for these agencies in privately owned buildings. The Administrator of GSA delegates leasing authority to GSA regional commissioners, who further delegate authority to lease contracting officers. For leases that GSA procures for tenant agencies, GSA serves as the lessee and pays rent to the building owner, who serves as the lessor. The tenant agency pays monthly rent to GSA, which includes a fee for GSA's services, and uses the leased space subject to the terms of an occupancy agreement with GSA. This agreement typically specifies not only the rent fee but also the amount the tenant agency must reimburse the lessor for improvements to the leased space, such as changes to walls, electrical outlets, telephone lines, and secure rooms; these are known as tenant improvements. GSA leasing process. GSA uses different processes to carry out the leasing process depending on the size, cost, and type of the lease. For all of these processes, the leasing-planning process begins when GSA receives a request for space from a tenant agency and determines that fulfilling the request will require leasing space. According to the typical process outlined in the GSA Public Buildings Service (PBS) Desk Leasing Guide, officials work with the tenant agency to define the requirements for the leased space, including the geographic area in which GSA will search for available properties. After this initial stage, GSA takes additional steps to acquire a new lease (see figure 1). For certain office space leases larger than 500 square feet, which represent more than 90 percent of GSA's leases as of the end of fiscal year 2019, GSA can deviate from its typical leasing process and instead use what it calls the Automated Advanced Acquisition Program (AAAP). GSA began using a predecessor to this program in 1991 in the National Capital Region only and rolled out the current version to all national markets in 2015. In this program, instead of GSA's first proposing requirements to potential lessors, the lessors first submit offers to GSA for pre-existing available space, including the space's size, location, and features, and the rent amounts the lessor is offering for different lease durations. Once GSA develops a set of requirements with a tenant agency, it evaluates these standing offers to eliminate those that would not meet the space requirements, ranks the bids by present value, and selects the lowest cost option (see figure 2; a simplified sketch of this ranking appears below). GSA is required to take further steps for high-value leases with a net annual rent above the statutory prospectus threshold, adjusted by GSA to $3.1 million in fiscal year 2019. For these leases, GSA must submit a prospectus, or proposal, to the House and Senate authorizing committees for their review and approval. As of the end of fiscal year 2019, GSA managed 8,045 leases, of which 291, or about 4 percent, had current annual rents above the 2019 prospectus level. These leases, however, accounted for about 41 percent of GSA's total annual rent obligations. GSA leases. GSA leases differ substantially from typical commercial leases. For example, in a GSA lease, GSA as the lessee proposes the lease requirements. In a typical commercial office space lease, however, the lessor drafts the lease requirements and proposes them to the prospective tenant.
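The AAAP evaluation described above ranks standing offers by present value and selects the lowest-cost option. The short Python sketch below illustrates that kind of ranking; the offer schedules and the 3 percent discount rate are illustrative assumptions, not GSA's actual offer data or discounting convention.

```python
# Minimal sketch of ranking standing offers by present value, in the spirit of the
# AAAP evaluation described above. The rent schedules and the 3 percent discount
# rate are illustrative assumptions, not GSA's actual offer data or formula.

def present_value(rent_schedule, discount_rate=0.03):
    """Discount a stream of annual rent payments back to today's dollars."""
    return sum(rent / (1 + discount_rate) ** year
               for year, rent in enumerate(rent_schedule, start=1))

# Three hypothetical 10-year offers for the same space requirement. Offers can
# differ in structure (flat rent, a free-rent year, annual escalations), which is
# why they are compared on present value rather than on face rent alone.
offers = {
    "Offer A": [520_000] * 10,                                 # flat rent
    "Offer B": [0] + [575_000] * 9,                            # one year of free rent
    "Offer C": [490_000 * 1.02 ** year for year in range(10)]  # 2% annual escalation
}

ranked = sorted(offers.items(), key=lambda item: present_value(item[1]))
for name, schedule in ranked:
    print(f"{name}: present value = ${present_value(schedule):,.0f}")
print("Lowest-cost offer:", ranked[0][0])
```

Comparing offers on present value rather than face rent allows rent concessions and escalation patterns to be weighed on a consistent basis.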
For additional examples of the differences between GSA and typical commercial leases, see table 1. GSA s lease reform efforts. In 2011, GSA issued a lease-reform implementation plan in response to comments from lessors and tenant agencies. In this plan GSA recommended changes including developing new lease models to better tailor its lease requirements to specific circumstances, and improving elements of its leasing process. As part of this and other initiatives since then, GSA developed leasing products and tools that it can use in various situations. These include: Simplified lease model: GSA developed this lease model for lower value leases with a facility security level of I or II, and a net annual rent total rent minus operating expenses of up to $150,000. GSA designed this model as a faster and more efficient method of processing lower value leases. As compared to GSA s standard and global lease models which can be used on leases of any size this model contains fewer requirements and may not have certain more complex elements such as annual operating-cost adjustments, real estate tax adjustments, or an allowance for tenant substitution. In addition, the model requires GSA and the tenant agency to finalize the complete set of space requirements prior to GSA s advertising the lease, a requirement that eliminates negotiations on the tenant improvements after GSA awards the lease. Net-of-utilities leases: As discussed in table 1, in most GSA leases the lessor is responsible for paying the utilities, and must estimate future utility costs as part of its bid for the lease. In a net-of-utilities lease, the tenant pays the utility costs for tenant space directly. A 2016 GSA study indicated that GSA could achieve savings through net-of-utilities leases for a small number of leases with certain characteristics including: the lease being over 50,000 square feet, a single tenant agency occupying the entire space, the tenant agency consuming large amounts of energy, and several other factors. GSA estimates that around 360 of its more than 8,000 leases meet these criteria. Succeeding and superseding leases: In most cases, GSA is required to conduct a full and open competition for leases. However, in certain circumstances GSA instead pursues succeeding or superseding leases. In circumstances where relocating to a new leased property would result in substantial relocation or duplication costs that GSA could not reasonably expect to recover through competition, GSA is allowed to pursue a succeeding lease, and when market conditions warrant renegotiation of an existing lease or when the tenant agency needs to make substantial modifications to a space before the expiration of a lease, GSA is allowed to pursue superseding leases. <2. Selected Stakeholders Identified Several Aspects of GSA Leases That Affect Cost and Competition, and GSA Has Taken Some Steps to Address These Concerns> The GSA leasing stakeholders we spoke with identified some aspects of GSA leasing that are attractive to potential lessors such as the government s good credit and GSA s long average occupancy. They also identified a number of aspects of these leases that they said can affect their costs and the number of lessors who are willing and able to bid on a GSA lease. 
These areas were: Structure: overall characteristics of a lease including the volume and complexity of requirements, and how GSA structures rent payments, reimbursements for tenant improvements, and provision of services; Requirements: specific provisions in the lease such as early termination, janitorial and maintenance, tenant substitution, and real estate taxes; and Process: the steps lessors must follow to complete a GSA lease, such as the length of time and GSA s ability to remain in a space after the end of the lease. <2.1. Lessors Said GSA Leases Are Attractive because of Lower Financial Risk and Stability> The stakeholders we spoke with identified a number of benefits of GSA leasing that are attractive to potential lessors, including the government s credit worthiness, long average tenancy in a space, and positive relationships with GSA and tenant agencies. Eighteen of the 20 lessors we spoke with identified the government s credit worthiness as a benefit. This credit, lessors said, is better than many private sector tenants and presents lower risks, and some of the more experienced lessors said that GSA leases are an important part of their overall lease portfolios. For example lessors said that GSA leases represent a reliable revenue stream and that they are confident they will be paid on time for the full term of the lease, while for commercial leases even for large companies there is an increased risk of a rent default. Eight of the 20 lessors said that GSA and tenant agencies are relatively easy tenants to work with once the lease is in place. For example, lessors said the tenant agencies are very professional, and some of them said that they generally do not receive many requests for service from the occupying staff. In addition, seven lessors mentioned GSA s long average tenancy in a space, which they said helps with a lessor s long-term financial stability. One lessor said that commercial tenants stay on average three to five years, while their GSA tenants have lease lengths of 10 or 15 years. According to GSA, agencies occupy spaces leased through GSA for an average of around 22 years. Lessor Perspective on GSA Leases The government is a Grade A tenant. <2.2. Stakeholders Identified Structural Aspects of GSA Leases That Can Affect Cost and Competition> The lessors and real estate brokers we spoke with told us that the way GSA structures aspects of its leases can affect cost and competition. These aspects include the volume and complexity of requirements in the leases, the way GSA structures rent payments, how GSA defines and reimburses costs for tenant improvements, and the full service nature of GSA leases. Many lessors told us that they increase their bid prices in response to these aspects of GSA leases. GSA officials said that each of these aspects reflects GSA s contracting policy, and it is not required to structure its leases this way by law, regulation, or executive order; however, they use these requirements to provide additional flexibility in managing their lease portfolio and reduce risk to the government. <2.2.1. Volume and Complexity> About three-fourths of lessors we interviewed said the volume and complexity of GSA lease requirements make these leases less attractive to potential bidders and can result in fewer bidders competing for a lease. 
These lessors further stated that GSA s leases, in contrast to many private sector leases, can be quite lengthy up to 85 pages and contain many references to other documents that are not included in the lease text such as a seismic certification, a small business subcontracting plan, a Department of Labor wage determination, and a foreign ownership and financing certification. Lessor Perspective on GSA Leases GSA s lease is three-fourths of an inch thick, has many cross- references, takes weeks to read, and requires an attorney to understand. Lessors must look up these other documents to fully understand the lease requirements, and some of the lessors we spoke to said that it can be difficult for them to quickly find the most important information. Lessors also noted that in response to the volume and complexity of requirements they may increase their bid prices. To account for risks inherent in these complex contracts lessors may also use the services of legal counsel or other experts, which could also increase costs. GSA officials told us that in the past several years they have made efforts to streamline their leases, including by eliminating duplicative or unnecessary provisions. One lessor told us that GSA has improved its leases by making them more intuitive and easier to read, a development that could be helpful for new potential lessors. <2.2.2. Rent Structure> About half of the stakeholders we spoke with, including 10 of the 12 more experienced lessors, said the way GSA structures its rent payments makes it difficult for these lessors to predict what actual operating costs will be in the future. Lessors said that because the shell rent (i.e. the building structure and systems) portion is typically flat over the firm term of a lease, and the operating expenses only increase at the consumer price index s rate, the rental payments they receive are generally not sufficient to cover their actual increases in expenses. In addition, these lessors said that in a GSA lease, the lessor is typically responsible for providing utility services and that lessors pass these costs through to GSA as part of the operating cost portion of the rent. In a private sector lease, these costs are typically the tenant s responsibility. To account for these issues, 11 lessors told us that they increase their bid prices to ensure that they will cover their costs, and two lessors told us that they would not bid on another GSA lease unless there were additional cost increases built into the lease. GSA officials told us that structuring rent payments this way provides GSA with a standardized method for addressing inflation and budgeting for future rental costs. Lessor Perspective on GSA Leases The way GSA accounts for base rent and operating expenses is different than in a private sector lease. In our leases, the base rent is frozen throughout the term of the lease and only the operating expenses are allowed to increase based on inflation. Because of this, when preparing a bid we have to estimate operating expenses years into the future, which can be difficult, and if we guess too low we can end up losing money on the lease. <2.2.3. Tenant Improvements> About one-third of the stakeholders we spoke with said the way GSA structures reimbursement for tenant improvements is a challenge, and three lessors said GSA s requirements for construction standards and space designs can be difficult to meet. 
Stakeholders said that GSA s requirement that lessors fund construction costs for tenant improvements upfront can put financial stress on lessors. For example, stakeholders said that lessors often must take on substantial debt in order to finance the construction of the tenant improvements. GSA reimburses lessors for tenant improvement costs over the firm term of the lease, but lessors told us that these payments do not begin until after the space is occupied, which can be delayed by the tenant agency s changing its requirements. In prior work we found that this process of paying tenant improvements over the firm term of a lease increases the overall cost to the federal government of leasing space, primarily due to interest costs passed through by the lessors. In addition, half of the lessors we spoke with identified challenges with the process of developing and finalizing agency requirements for leased space, including frequent changes to space requirements and the need to develop detailed construction information before bidding on a lease. Lessor Perspective on GSA Leases At the beginning I had to agree to a certain dollar amount for the tenant improvements, even though I did not know when the construction would happen, or how I would get paid back. You can get paid back in a lump sum, or the tenant improvements can be amortized over the lease term, but you do not know which it will be at the start of the process. This makes financing difficult. Six lessors told us that they increase the cost of their bids in part due to GSA often over-estimating the cost of tenant improvements. This situation can require a lessor to take out a larger loan than necessary, which adds financing costs to the project. Lessors said that this situation can also prevent some potential lessors from bidding if they cannot obtain the amount of financing GSA requires. Additionally, lessors cited some tenant agencies space requirements which can call for expensive materials or difficult to construct items. For example, they described leases where they had to construct multiple restrooms or heating and cooling systems for small offices that typically house fewer than five employees. GSA officials told us that they structure the tenant improvements requirements in this way in order to establish expectations for the lessor. They said that they rely on tenant agencies to develop initial requirements for leased spaces, and they work with those agencies on the final designs and construction standards. We examined space requirements of the five federal agencies we reviewed that lease large amounts of space through GSA, and each of these agencies uses standardized guidance such as a handbook or design guide. These documents included information on developing specific requirements for leased space such as identifying the size of space needed, the types of workspaces used, and sample layouts for different types of facilities. Officials from these agencies told us that they use these handbooks as their primary reference when setting requirements for leased spaces and approving the final designs, and to develop these handbooks they use agency mission needs, government- wide security standards, and requirements from laws, regulations, and executive orders. They said that they generally rely on GSA to provide them with local market information such as the availability of suitable existing buildings, market rents, and other factors. <2.2.4. 
Full Service Leases> About one-third of stakeholders we spoke with identified the full service nature of GSA s leases as difficult, time consuming, and expensive requiring them to estimate highly variable costs far into the future. For example, one lessor spoke of being required to provide all services janitorial, maintenance and utilities which can include simple things like replacing light bulbs. Further, the lessor has to work around the tenant agency s operating hours to provide these services. Five lessors told us that they raise their bid prices to cover the costs of full service leases because they are cost and labor intensive. One lessor said that lessors estimate on the high end of the range to make sure they make a profit. Lessor Perspective on GSA Leases The biggest issue for a potential lessor to consider is how hands-on they want to be GSA leases are full service leases requiring lots of attention. GSA officials told us that they structure leases this way because full service leases are generally less expensive to the government avoiding the administrative burden of having to establish and maintain a contract for each service and avoiding the risk of higher than expected utility costs. In 2017, GSA issued guidance to its lease contracting officers on using net-of-utilities leases those structured so that the tenant agency pays the utilities. GSA officials and stakeholders we spoke with told us that having a tenant agency pay utilities directly gives agencies an incentive to cut down on energy use and could result in lower costs. According to GSA, structuring leases as net-of-utilities leases requires substantial resources to manage and monitor. Therefore, GSA s current policy is to use this structure for only certain large, energy-intensive leases. GSA officials told us they plan to continue using net-of-utilities leases but do not have plans to expand the program further. <2.3. Stakeholders Cited Specific GSA Lease Requirements That Can Affect Cost and Competition> Stakeholders identified a number of specific GSA lease requirements that they said can affect cost and competition. These requirements include early termination options, GSA s unilateral ability to substitute the tenant agency, provisions for reimbursing real estate taxes, and ongoing janitorial and maintenance requirements. Most of these requirements are GSA contracting policy, but the janitorial and tenant substitution requirements have some elements that GSA says it uses in response to either a law, a regulation, an executive order, or a combination of these and other sources. <2.3.1. Early Termination> About two-thirds of stakeholders, including all 12 more experienced lessors, identified GSA s including early termination options in leases as affecting the cost of the leases. GSA leases typically have a date after which GSA can terminate the lease with as little as 90 days notice, and since many GSA leases require significant initial capital for construction of the tenant improvements, some lessors told us they need to take out a loan using GSA s future rent payments as the source of repayment. However, stakeholders and other experts told us that many loan underwriters will not consider any payments after GSA s termination right date due to the risk that the GSA will leave the space, a factor that they said makes the loans more expensive and difficult to obtain. Nine of the lessors and two of the other experts we spoke with also said that it was unlikely GSA would ever exercise its termination options. 
Four lessors told us that they increase their bid prices to reflect the increased risk and expense that the early termination clauses provide, and four lessors and one broker told us that lessors may not bid on a lease at all if GSA includes an early termination option. Lessor Perspective on GSA Leases The market, and lenders, look at the firm term as the length of the lease, and don t take the soft term into account as GSA does soft terms are the biggest structural obstacle in GSA lease requirements. If GSA included soft terms in leases just for emergencies, rather than as a matter of practice, the soft terms would not be as much of a problem. GSA officials told us that these options allow them to maintain flexibility and use space efficiently despite changing tenant agency missions and space needs. In response to data GSA has collected from AAAP bids showing that GSA receives lower bids for longer firm-term leases, GSA has begun lengthening the firm term of its new leases. Specifically, GSA s analysis of AAAP bids data showed that for lease offers in fiscal years 2017 and 2018, lessors bid a lower rent amount for a 10-year firm term as opposed to a 5-year term 96 percent of the time with an average savings of around 10 percent. GSA officials told us that they have been using more 10- and 15-year firm terms as opposed to the previous standard practice of five years. For example, according to GSA, in fiscal year 2014, 19 percent of GSA s leased inventory had a firm term of 10 years or more, and in fiscal year 2017, this figure had risen to 26 percent. In addition, GSA has implemented a lease-term-setting tool, which officials said will help them lengthen the firm terms of leases where appropriate. <2.3.2. Janitorial and Maintenance> About one-third of the stakeholders we spoke with identified janitorial and maintenance services as a challenge, and two lessors said that costs for janitorial and maintenance services can be difficult to estimate. For example, one lessor told us that it is difficult to estimate these costs two years into the future, let alone for the 10 or more years of a GSA lease, because of changes to local job market conditions and labor laws. In addition, stakeholders said that GSA leases require more frequent or comprehensive janitorial and maintenance services than do private sector leases. For example, lessors said that some cleaning and paint and carpet replacement intervals were more frequent than the industry standard. Four lessors told us that they include the additional costs for these services into the cost of their bids, and some lessors told us that they include up to 125 percent of their estimated true costs in their bids. According to GSA, it developed some of these requirements, particularly those related to specific cleaning products that lessors must use, in response to a combination of several laws, executive orders, and agency initiatives or recommendations. Some of the other requirements, such as the intervals for carpet and paint replacement, are GSA s contracting policy, and officials told us that they have remained relatively static since the 1990 s. Lessor Perspective on GSA Leases In one lease, we found that janitorial services for GSA cost approximately twice as much as normal cost for a non-GSA lease. <2.3.3. 
Tenant Substitution> About one-third of the stakeholders we spoke with said that lessors particularly lessors with multi-tenant buildings are concerned about GSA s ability to substitute one tenant agency for another, a requirement that can affect competition for leases. One concern cited was the possibility of substituting a law enforcement agency (e.g., ICE or FBI) that may have armed officers into a building previously occupied by an administrative tenant agency. Another was that increased traffic may result from substituting a busy public-facing agency (e.g., SSA or IRS) into a formerly quiet building environment. Stakeholders and other experts we spoke with said that scenarios like these can affect other tenants willingness to renew leases in a building; however, as we found in 2016, they also told us that GSA rarely exercises this option. Two stakeholders and another expert told us that lessors take specific actions in response to this requirement, including negotiating with GSA over modifying this clause, which one said they have been successful in doing. Federal regulation requires GSA to include this clause in leases with annual rents above the simplified acquisition threshold unless the lease contracting officer determines that it would not be appropriate. This regulation, however, stems from a general GSA statutory authority regarding federal property. GSA s leasing regulations do not require GSA to use this requirement in leases with net annual rents under the simplified lease acquisition threshold, but GSA officials told us that as a matter of practice they also include it in these smaller leases. GSA officials told us that GSA, as the lessee, is ultimately responsible for a lease s financial obligation, and the ability to substitute tenant agencies helps GSA mitigate the costs of vacant leased space in the event a tenant agency chooses to leave a leased property. Lessor Perspective on GSA Leases The substitution of tenant requirement is especially an issue in multi- tenant buildings, and lenders can have trouble with it as well, but GSA almost never uses it. Our organization tries to get GSA to modify these clauses, and we are successful about 50 percent of the time, but this varies by GSA region. <2.3.4. Real Estate Taxes> About one-third of the stakeholders we spoke with said GSA s requirements for real estate tax reimbursement may lead lessors to increase their bid prices to account for real estate tax uncertainty. GSA reimburses lessors for increases in real estate taxes above a base year the first full year after GSA certifies the leased space as fit for occupancy. Lessors told us that since the date of occupancy is dependent on the completion of the design and construction process, the duration of which is difficult to estimate, when setting bids they have to estimate taxes without knowing the base year. Two lessors told us that when bidding on a lease they estimate on the high side to make sure they cover their costs, and another other lessor said that their organization might not bid on a GSA lease because of issues with the real estate tax requirements. GSA officials told us that they use these requirements because they allow GSA to establish the real estate tax base and the portion that GSA will reimburse. 
Officials also told us that lessors have told them that their current approach to tax adjustment places a risk on lessors that may ultimately get passed on to GSA in the form of higher rent, and at a May 2018 GSA industry event, lessors discussed difficulties with setting the base year. GSA officials told us that they are developing new requirements for lease construction that would allow for real estate taxes to be directly passed through by the lessor to GSA. Lessor Perspective on GSA Leases The base year is often not clearly stated in the lease and is sometimes mentioned informally (e.g., in emails) the lessor has no recourse to negotiate over the tax base year with GSA. It poses one of the biggest risks and has caused us to walk away from some bids after not being able to get a clear lease amendment specifying the tax base year. <2.4. Stakeholders Identified the GSA Leasing Process as Affecting Cost and Competition> The lessors and real estate brokers we spoke with also identified a number of general areas of GSA s leasing process that they said can increase costs or reduce the number of bidders. These areas included the length of time it can take to finalize a GSA lease, GSA s ability to occupy a space after lease expiration generally without penalty or the payment of damages beyond continuing rent payments referred to as a holdover and GSA s propensity for entering into short-term extensions for current leases while negotiating new leases. <2.4.1. Length of Time> About two-thirds of the lessors we spoke with mentioned frustration with the length of time it takes to finalize a GSA lease. Lessors told us that after GSA awards a lease, it can take more than a year of additional negotiations with the lessor, GSA, and the federal tenant agency to finalize the design requirements and construct the space. In 2016 we reported that the total length of GSA s leasing process could be up to six to eight years. Because GSA does not generally begin to pay rent until after the space is fit for occupancy, lessors said that the length of time it takes to complete the lease award, design and construction processes can create financial stress on a lessor. For example, one lessor said that GSA s overall leasing process was challenging, and the largest issue, rather than any particular requirement, was agreeing on the design after lease award. This length of time was because the tenant agency was slow to make decisions about the space design, and while GSA tried to coordinate by setting up weekly meetings about this design among GSA, the tenant agency and the lessor, there were also several layers of time- consuming GSA review. About one-third of the lessors we spoke with also identified challenges communicating with GSA and the tenant agency during the lease negotiation process, including challenges identifying points of contact and resolving disputes. Three lessors said that they or other lessors might not bid on additional GSA leases specifically because of the lengthy and complex process to finalize a lease. GSA officials told us that they rely on space requirements from the tenant agency and that the faster they receive those requirements, the faster the bid award can be completed and design process finalized. Lessor Perspective on GSA Leases If it were up to me, I wouldn t bid on any more GSA leases because they are too time intensive not only for management at our organization, but also for our accounting, engineering, construction and property management teams. 
GSA officials told us that they have been using a number of initiatives to speed up their leasing process. For example, they said that in response to these time pressures they have begun requesting requirements as much as 48 months in advance of when a new lease will be needed. Officials from three of the five tenant agencies we spoke with told us that it can be difficult to estimate their space needs so far in advance because their missions and space needs can change. In addition, since 2015 GSA has been using the AAAP in which potential lessors submit standing bids for vacant space that GSA then matches to requirements for new and continuing leases in all of its national real estate markets. Four of the more experienced lessors we spoke with told us that they had noticed positive changes as a result of the AAAP. These changes included faster lease processing times and an overall simpler leasing process with less negotiating. One lessor told us that they only bid on new GSA leases that are part of this program. <2.4.2. Holdovers and Short-term Extensions> One-quarter of the lessors we spoke with identified drawbacks associated with GSA lease holdovers and short-term extensions, and at least three of the lessors we spoke with had experienced a holdover for one of their leases. Lessors said that the possibility of GSA s holding over in a space or signing a short-term extension can affect their ability to finance a building and their time frame for finding a new tenant if GSA exits a property. Lessors also noted communications difficulties with GSA, for example some said that they had reached out to GSA to negotiate a lease well in advance of an incumbent lease s going into holdover, but this action did not help them get a new lease on time. Lessors told us that they bid much higher rates for short-term extensions than they do for leases awarded through the normal process. In 2015 we reported that a significant number of GSA leases experience a holdover or short-term extension and that these can cause uncertainty for tenant agencies and lessors, increase GSA s workload, and delay the completion of building maintenance and other tenant improvements. Lessor Perspective on GSA Leases Holdovers and short-term extensions diminish lessors opinions of GSA. Reducing holdovers and short-term extensions is one of the key tenets of GSA s 2018 2023 Lease Cost Avoidance Plan. One method GSA uses to more quickly process leases for tenant agencies remaining in their current space is the superseding and/or succeeding lease. In 2018 GSA developed a revised tool to help its officials more quickly estimate whether GSA would likely achieve lower costs using a succeeding lease as opposed to performing a full and open competition for a new lease. Lease contracting officers can use this tool to identify leases that would be likely candidates for a succeeding or superseding lease earlier in the process. We analyzed the leases GSA entered into during fiscal years 2016 through 2018 and found about 29 percent of them were succeeding or superseding leases. GSA officials told us that they have tried to increase awareness of the new tool and appropriate use of succeeding and superseding leases through training programs. <3. GSA Does Not Have Complete Information to Address Stakeholder Concerns and Assess Its Simplified Lease Model> GSA began reform efforts in 2011 by conducting outreach, introducing new lease models, and adjusting some leasing provisions in response to stakeholder concerns. 
While GSA has continued its industry outreach, its more recent outreach efforts have not gathered information from a representative group of lessors. Further, GSA has not analyzed the information it does collect and therefore does not know if its reform efforts are adequately addressing stakeholder concerns. Also, GSA has not assessed whether one of its reform efforts the simplified lease model is achieving its intended benefits or how it could affect risk. <3.1. GSA s Recent Stakeholder Outreach Efforts Are Limited, and GSA Lacks Information on Lessor Concerns> Since fiscal year 2018, GSA has conducted informal industry outreach to certain lessors and other stakeholders about the leasing process. These efforts have included attending and making presentations at industry conferences, facilitating industry meetings with regional commissioners, and hosting feedback sessions. For example, in May 2019 GSA gave a presentation to a large industry organization on the current status of its efforts to reduce lease costs, and in May 2018 staff participated in a training event organized by GSA s Office of Government-wide Policy where officials from industry shared their experiences with the leasing process. GSA officials told us that they gather information primarily from two industry groups, both of which have reached out to GSA, have a large number of members that are GSA lessors, and have a significant amount of knowledge of the GSA leasing process. GSA officials told us that they have used information mainly from these two groups to inform reform efforts, including creating net-of-utilities leases and longer firm-term leases. However, these two groups are focused primarily on organizations such as real estate brokers and investment trusts that are experts in the GSA leasing process. These organizations are not representative of GSA s total population of lessors, which also includes many smaller organizations that have less experience with the GSA leasing process. By focusing its efforts on these larger groups, GSA is missing the perspective of smaller lessors, whose representatives may not attend industry meetings. These smaller lessors may have different types of concerns that GSA is not capturing. For example, in our sample of 20 lessors we identified areas where the perspectives of organizations with varying levels of experience with GSA leases differed. More than half of the less experienced organizations identified experiencing communication challenges with GSA and the tenant agency, while only two of the more experienced organizations identified this concern. Concerns about early termination clauses in GSA leases were cited by less than half of the less experienced organizations, but all of the more experienced organizations mentioned this clause as affecting their willingness to do business with GSA. Also, one of the brokers we spoke with said that smaller lessors tend to have different concerns about leasing requirements than larger lessors, but also have less ability to react to those concerns by, for example, raising their bid prices. In addition to limiting outreach to two groups that do not represent all types of GSA lessors, GSA has not maintained official records of the information it receives from these efforts. Further, it has not analyzed the information that it collects from lessors and other stakeholders for use in revising the leasing process. These omissions hinder GSA s ability to identify the full range of lessor concerns. 
GSA s recent approach to outreach differs from earlier approaches where GSA conducted more formal outreach to lessors. For example, in 2011 GSA performed formal outreach in order to inform decisions about significant changes to its leasing process. Officials told us that they selected a wide variety of lessors and held formal outreach sessions where GSA took minutes and maintained a record of all of the comments. GSA then analyzed the comments and used the results of its analysis to inform the initiatives it was conducting at that time, including the development of the simplified lease model. In addition, in 2017 GSA established the Office of Leasing Industry Outreach Program, which was a formal program to allow industry representatives to discuss various leasing issues with GSA officials through conference calls, webinars, and in-person sessions. GSA conducted nine monthly sessions with this program in 2017 and kept a formal record of only the first four sessions. Officials told us that they have since shifted their approach to conduct outreach more like that conducted by the Office of Government-wide Policy discussed above. Federal internal control standards call for agencies to communicate with, and obtain quality information from, external parties such as stakeholders that can help the agency achieve its objectives. While GSA has in the past collected and analyzed information from a wide variety of stakeholders to the leasing process, the real estate market is constantly changing. By obtaining current information from a broad spectrum of stakeholders and documenting and analyzing the information collected, GSA would be better positioned to know whether its lease reforms are addressing stakeholder concerns and how its lease requirements affect cost and competition. <3.2. GSA Does Not Know Whether Its Simplified Lease Model Is Achieving Anticipated Benefits> As previously noted, GSA developed its simplified lease model in 2011 to simplify the acquisition of smaller value leases with the intent of making the leasing process more efficient and cost-effective. GSA officials told us that using this model is also intended to help them achieve other lease reform goals including reducing holdovers and short-term extensions by speeding up the leasing process and making GSA leases more attractive to a wider spectrum of potential lessors. In addition, officials said that they believe greater use of the simplified lease model would increase competition for leases, particularly in real estate markets with high demand for office space. Since initial implementation, GSA has undertaken initiatives to increase the use of this model, including by raising the eligibility threshold from $150,000 to $250,000, and GSA officials told us that they have proposed raising the threshold to $500,000, a move that would cover more than 70 percent of GSA s leases. However, GSA has not performed any analysis on the number of leases that were eligible for, but did not use, this model. Using available data, we analyzed the leases GSA entered into during fiscal years 2016 through 2018 that were potentially eligible for the simplified lease model and compared those that used the model to those that used GSA s global and standard lease models. We found that the group of leases where GSA had used the simplified lease model had achieved lower rents both overall and per square foot than the group of potentially eligible leases where GSA had used its standard or global models (see table 2). 
These leases had lower average costs even though they had shorter average total terms and firm terms. This finding is notable because, according to GSA, longer leases typically have lower costs than shorter ones. However, our analysis of available data also found that GSA only used the simplified lease model on 124 of the 406 leases that were potentially eligible, or about 31 percent (see table 2). GSA officials told us that they face two primary challenges in increasing adoption of the simplified lease model. First, lease contracting officers must choose to use the simplified model as opposed to GSA s standard lease model. While GSA s leasing policy states that lease contracting officers should use the simplified lease model to the maximum practical extent, the lease contracting officers generally have wide discretion in selecting the type of lease to use for a particular acquisition. GSA officials told us that they believe some lease contracting officers may be hesitant to use the model because it is less familiar to them. GSA officials also told us that they have provided training for lease contracting officers on the appropriate use of the simplified lease model and have encouraged them to use it. Second, in order for GSA to use the simplified lease model, tenant agencies must provide a complete set of space requirements that GSA can use in a lease solicitation what GSA calls biddable requirements prior to GSA s advertising the lease. According to GSA officials, tenant agencies do not always provide these requirements on time. By having biddable requirements in place before receiving bids, GSA can avoid negotiating these requirements after the lease is awarded. GSA officials and lessors told us that not having these requirements in place is a major source of project delays. GSA tracks both when it receives initial requirements from the tenant agencies and when the more fully developed requirements that GSA uses in its standard lease model solicitations are in place. In order to use the simplified lease model, GSA and the tenant agency then develop biddable requirements that need additional detail. An Example of challenges agencies face in providing lease requirements to the General Services Administration (GSA): Officials from three of the five tenant agencies we spoke with told us that it can be difficult for them to provide GSA with requirements two or more years in advance because agency missions and space needs change. For example, Internal Revenue Service officials told us that providing requirements 36 months in advance of a lease s expiring is difficult for them because they may not know what their agency budget and personnel will be that far in advance. Officials from the Federal Bureau of Investigation said that lead times greater than three years are challenging because their agency missions change frequently, which leads to changing space needs. GSA has taken some steps to increase use of the simplified lease model. For example, several GSA regions have begun to work with SSA on a pilot program to reduce the time it takes for GSA to complete leases with that agency, including by increasing the availability of the simplified lease model. This program is in the early stages and, according to the charter, developed in August 2019, its objectives are to reduce the total time it takes to complete leases, increase up-front knowledge of project costs, and minimize the number of changes needed to leases all while maintaining or reducing the average costs for these projects. 
GSA and SSA plan to accomplish these objectives by identifying the areas of the leasing process most prone to delays, developing strategies for more quickly finalizing the complete requirements needed to use the simplified lease model, and testing the improvements in both large and small real estate markets. GSA plans to begin testing the changes developed by this program during the first half of 2020. SSA officials told us that they typically begin planning approximately 42 months prior to lease expiration with the goal of providing initial requirements to GSA by 36 months prior. GSA lacks comprehensive information on the benefits and challenges of using the simplified lease model because it has not evaluated the results it has obtained from using it. For example, officials told us that they have not analyzed the lease processing times or rental rates they have achieved using the model. Officials also said that they already collect the data they would need to study the model and they have used this data to analyze related issues such as lease holdovers and short-term extensions. Officials also told us that they do not consider use of the simplified lease model to pose any financial risks provided that lease contracting officers follow GSA s existing policies. However, they told us that GSA has not reviewed financial and other risks that may arise from using the model. These factors include risks due to the model s not containing certain provisions that may protect GSA, such as tenant substitution. We have reported that agencies can use information about the performance of programs to identify problems or weaknesses, to try to identify factors causing the problems, and to modify programs to address them. Program assessment helps to establish a program s effectiveness. Without conducting such an assessment, GSA does not have the information needed to determine whether the simplified lease model is achieving intended results, whether to make improvements, or how to mitigate any risks. <4. Conclusions> The federal government spends nearly $6 billion annually on leasing space from private entities, and GSA has taken steps to encourage private sector competition for government leases. GSA s efforts to address stakeholder concerns with lease requirements have had some success. Specifically, GSA s 2011 formal stakeholder outreach and subsequent development of new lease models and other process changes have given GSA some options to reduce leases complexity and better tailor leases to the needs of individual projects. However, because GSA s recent outreach has not included a representative group of its lessors, and it has not documented and analyzed the information collected from this outreach, GSA may not have the information it needs to fully address lessors concerns. Further, the simplified lease model which GSA developed to address some of these stakeholder concerns and more effectively use its resources has been in use for several years. Given that GSA has proposed further expanding the use of the model to higher value leases, it is important to know the results GSA has obtained from using the model, such as the characteristics of leases for which it achieves the greatest savings in costs and time, and the extent to which it bears financial or other risks from its use. Such information would help inform GSA s future decision-making on the use of the simplified lease model. <5. 
Recommendations for Executive Action> We are making the following three recommendations to GSA: The Administrator of the General Services Administration should expand its outreach as appropriate to obtain feedback from lessors that are representative of its entire lease portfolio. (Recommendation 1) The Administrator of the General Services Administration should, for future outreach efforts, document and assess lessors feedback about the leasing process. (Recommendation 2) The Administrator of the General Services Administration should evaluate whether the simplified lease model is achieving its intended results. (Recommendation 3) <6. Agency Comments> We provided a draft of this report for review to the General Services Administration, the Social Security Administration, and the Departments of Homeland Security, the Interior, Justice, and the Treasury. The General Services Administration concurred with our recommendations in its written comments, which are reproduced in appendix II. The General Services Administration and the Department of the Interior provided technical comments, which we incorporated as appropriate. The Departments of Homeland Security, Justice, and the Treasury, and the Social Security Administration had no comments on the draft report. As agreed with your offices, unless you publically announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees; the Administrator of the General Services Administration; the Secretaries of the Departments of Homeland Security, the Interior, and the Treasury; the Commissioner of the Social Security Administration; the Attorney General; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-2834 or rectanusl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology This report examines (1) lease requirements selected stakeholders identified as affecting cost and competition and steps GSA has taken to address their concerns, and (2) how GSA has identified stakeholder concerns and evaluated its simplified lease model. To obtain information for both objectives, we reviewed laws, regulations, and executive orders covering GSA leases and GSA s leasing process. We also obtained data from GSA on each of the 1,618 leases it entered into between the beginning of fiscal year 2016 and the end of fiscal year 2018, the most recent data available. This data included fields for the current annual rent, the size of the lease in rentable square feet, the lease model GSA used, the facility security level, the occupying agency, and the lease s effective and expiration dates, among others. We assessed the reliability of this data by reviewing documentation; interviewing GSA officials; electronically testing the data by, for example, examining missing values and outliers; and verifying the accuracy of potentially erroneous data with GSA officials. 
We concluded that the data were reliable for the purposes of selecting a sample of GSA lessors and reporting on GSA s portfolio of leases and the general characteristics of the groups of leases that used different lease models. In addition, to address both objectives, we collected information from and interviewed a non-generalizable sample of 20 GSA lessors to obtain their perspectives on GSA leases and GSA s leasing process. To select these lessors, we used the fiscal year 2016 2018 lease data that GSA provided and selected leases using the annual rent amount as the primary selection criteria. We excluded leases that used models designed for specific lease products, such as leases for parking structures or leases on airport properties, and we also excluded leases that were successions or supersessions of leases that had already been established under different models. To make the selections, we first split the data into three groups based on annual rent, the first group of leases with annual rents under $150,000; the second group with annual rents between $150,000 and below $500,000; and the last group with annual rents above $500,000. We then randomly ordered the leases within each of the three groups, and selected 53 total leases in that order from the three groups. We checked this grouping to ensure that the selected leases had similar characteristics to GSA s general population in other important lease characteristics such as lease model used and GSA region. We then randomly ordered the selected leases and contacted the lessors for those leases in that order. We interviewed the first 20 lessors from our selected leases who agreed to be interviewed. When contacting the lessors we found that in most cases the lessor named in GSA s data was a subsidiary to another organization. In those cases, we interviewed the organization that self- identified as being responsible for the selected lease, or their representative. We conducted these interviews between March 2019 and June 2019 and used a semi-structured interview format with open-ended questions for those interviews. During these interviews, we asked for lessors views on the requirements in GSA s leases that can affect their willingness to bid on GSA leases and the prices they can offer, actions they take in response to those requirements, other areas of GSA s leasing process that can be difficult for them, the benefits to leasing to GSA, and their perspectives on GSA s recent lease reform efforts. To obtain a broader perspective on GSA s leasing process, we also conducted semi-structured interviews on the same topics with six real estate brokers who are participating in the GSA Leasing Support Services contract. We asked the brokers to provide their experiences on which areas of GSA leases result in the greatest number of cost and competition issues from lessors, and what the lessors do about those areas. We also interviewed four other experts on GSA leasing including professional organizations and attorneys who represent building owners, and former GSA officials. Although the results of these stakeholder interviews are not generalizable to the entire population of GSA lessors, they provide illustrative examples of lessors experiences with GSA leases and the leasing process. After conducting these semi-structured interviews with lessors and brokers, we conducted a content analysis of the interview data. 
To conduct this analysis, we organized the responses by topic area, and then one GAO analyst reviewed all of the interview responses and identified recurring themes. Using the identified themes, the analyst then developed categories for coding the interview responses and independently coded the responses for each question. To ensure accuracy, a second GAO analyst reviewed the first analyst's coding of the interview responses, and then the two analysts reconciled any discrepancies. To identify the lease requirements that stakeholders we spoke with identified as affecting cost and competition, we synthesized information from our content analysis of interview responses to identify the most commonly mentioned requirements. We selected the eight most commonly mentioned requirements by summing the total number of responses from both the lessors and the brokers. As part of this analysis, we also selected the four areas that stakeholders most often mentioned as challenges related to GSA's leasing process, rather than to a specific requirement, but that stakeholders nonetheless identified as having effects on cost and competition. To assess how the responses from lessors may have differed based on how much experience a lessor has with GSA, we grouped the lessors we spoke with into two categories: lessors who told us that they had experience with three or more GSA leases, whom we referred to as more experienced, and lessors who had experience with one or two GSA leases, whom we referred to as less experienced. To identify the source of the GSA requirements stakeholders identified, we reviewed GSA documents and interviewed officials to learn about each of the requirements. In addition, we reviewed laws, regulations, and executive orders that governed GSA's use of these requirements. To determine how GSA and tenant agencies develop requirements for leased space, one of the requirements stakeholders identified, we selected five bureau-level and independent agencies to review how they develop initial requirements for leased space and how they work with GSA and the lessor to finalize those requirements. We selected these agencies by the number of GSA leases they had entered into during fiscal years 2016-2018, using the lease data for that time period provided by GSA. We selected the agencies that had entered into the greatest number of leases and, to ensure that we had a diversity of experiences from across the federal government, limited our selection to executive branch independent agencies and one bureau-level entity from each cabinet department. Based on these factors, we selected (1) the Department of the Interior's Fish and Wildlife Service (FWS); (2) the Department of the Treasury's Internal Revenue Service (IRS); (3) the Department of Justice's Federal Bureau of Investigation (FBI); (4) the Social Security Administration (SSA); and (5) the Department of Homeland Security's Immigration and Customs Enforcement (ICE). While the views of these agencies are not representative of all executive branch agencies, they provide a range of examples and experiences with leasing space through GSA. We reviewed documents and interviewed officials from each of these five agencies to learn about how they develop requirements for leased space, how they work with GSA to identify feasible properties, how they participate in the development of the final space design and construction, and how they plan for their future leased space needs.
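To illustrate the tallying and grouping steps of the content analysis described above, the following is a minimal sketch in Python. It assumes a hypothetical, simplified set of coded responses; the field names, stakeholder identifiers, and requirement labels are illustrative only and do not reflect GAO's actual coding categories or data.

```python
from collections import Counter

# Hypothetical coded interview responses; each record notes who mentioned
# which lease requirement. Field names and values are illustrative only.
responses = [
    {"stakeholder": "lessor-01", "type": "lessor", "requirement": "early termination", "gsa_leases": 1},
    {"stakeholder": "lessor-02", "type": "lessor", "requirement": "early termination", "gsa_leases": 5},
    {"stakeholder": "broker-01", "type": "broker", "requirement": "build-out specifications", "gsa_leases": None},
    {"stakeholder": "lessor-02", "type": "lessor", "requirement": "build-out specifications", "gsa_leases": 5},
    {"stakeholder": "lessor-03", "type": "lessor", "requirement": "janitorial services", "gsa_leases": 2},
]

# Sum mentions from lessors and brokers together and keep the most commonly
# mentioned requirements (the report keeps the top eight; top two shown here).
mention_counts = Counter(r["requirement"] for r in responses)
top_requirements = mention_counts.most_common(2)

# Group lessors by experience: three or more GSA leases = "more experienced".
experience = {
    r["stakeholder"]: ("more experienced" if r["gsa_leases"] >= 3 else "less experienced")
    for r in responses
    if r["type"] == "lessor"
}

print(top_requirements)   # e.g., [('early termination', 2), ('build-out specifications', 2)]
print(experience)         # e.g., {'lessor-01': 'less experienced', ...}
```

In practice, the coding and reconciliation were performed by analysts rather than software; the sketch shows only how mention counts and the experience grouping could be computed once responses are coded.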
To identify the steps GSA has taken to identify stakeholder concerns and evaluate its simplified lease model, we reviewed pertinent GSA documents and interviewed GSA officials on recent lease reform efforts, including how GSA has defined them, what information GSA used to develop them, how GSA has implemented them, and how GSA has assessed their performance. In addition, we obtained information from our interviews with lessors and real estate brokers about their impressions of GSA's lease reform efforts, including whether they were aware of the efforts and what effects they had observed. We compared GSA's efforts to identify and address stakeholder concerns to Federal Standards for Internal Control related to external communication. To identify how often GSA has used its simplified lease model and the characteristics of the leases for which GSA used the model, we used the GSA fiscal year 2016-2018 lease data described previously. We analyzed the data to obtain information about the number of leases that had used each of GSA's lease models, and the average rent amounts, size, and terms. Even though the facility security level is an additional eligibility requirement for the model, we could not include it in this analysis because GSA does not have security level information for many of the leases in this dataset. However, we determined that omitting this data field did not substantially change the results of this analysis because only a small number of leases with costs below $150,000 also had a facility security level of III or above. We were not able to assess the extent to which the lower rental costs might be attributable to the use of the simplified lease model because there are other factors that contribute to its use that are not included in GSA's data. For example, in order for GSA to use the simplified lease model, tenant agencies must provide fully developed requirements prior to GSA's advertising the lease. The data do not include the date GSA received these requirements. We compared GSA's efforts to evaluate its simplified lease model to criteria from our prior work on the use of performance information for decision-making. We conducted this performance audit from October 2018 to December 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the General Services Administration Appendix III: GAO Contact and Staff Acknowledgments <7. GAO Contact> <8. Staff Acknowledgments> In addition to the contact named above, Amelia Bates Shachoy, Assistant Director; Alex Fedell, Analyst-in-Charge; James Duke; Cynthia Grant; Geoffrey Hamilton; Gina Hoover; Terence Lam; Malika Rice; Kelly Rubin; Jim Russell; Patrick Tierney; and Amelia Michelle Weathers made key contributions to this report. Why GAO Did This Study
As the federal government's landlord, GSA works with lessors and real estate brokers to identify space for other federal agencies to use. As part of this process, GSA uses leases that include requirements not commonly used in the private sector. These requirements and GSA's lengthy and complex leasing process can affect federal leasing costs and competition for leases.
GAO was asked to review issues related to cost and competition for GSA leases with private sector lessors. This report examines: (1) lease requirements selected stakeholders identified as affecting cost and competition and steps GSA has taken to address stakeholders' concerns, and (2) how GSA has identified stakeholders' concerns and evaluated its simplified lease model. GAO reviewed pertinent federal statutes and regulations and GSA's contracting policy and leasing data from fiscal years 2016–2018. GAO conducted interviews with 20 GSA lessors, selected from GSA's data to represent a range of lease locations and costs, and with the six real estate brokers that work with GSA.
What GAO Found
Stakeholders, including 20 lessors (e.g., building owners) and the six real-estate brokers that negotiate federal government leases, identified several aspects of the General Services Administration's (GSA) leases that can affect cost and competition. For example, specific lease requirements such as early termination (see table) can lead lessors to increase their rent rates or decide not to bid on a lease—thereby increasing federal leasing costs or decreasing competition. According to GSA officials, many of these lease aspects reflect contracting policy rather than being required by law, regulation, or executive order. GSA has made some changes, such as lengthening the term of some leases, to address stakeholder concerns. Stakeholders also identified the time it takes to complete a lease and GSA's propensity for staying in a space beyond the term of a lease as increasing costs and making GSA leases less attractive to potential bidders.
[Table: Selected lease requirements stakeholders identified as affecting cost and competition. Source: GAO analysis of stakeholder information. | GAO-20-181]
GSA has undertaken initiatives to identify stakeholders' concerns to inform its reform efforts, but it lacks complete information to address concerns or evaluate its efforts. Specifically, GSA has not gathered information from a representative group of lessors because its recent outreach has involved two industry groups that focus primarily on organizations such as real estate brokers and investment trusts that are experts in GSA leasing. These organizations may not have the same concerns as smaller, less experienced, organizations. By obtaining information from a broad spectrum of stakeholders, GSA would be better positioned to know whether its leasing reforms are addressing stakeholders' concerns. Additionally, to expedite processing of lower-value leases, GSA developed a simplified lease model that excludes some requirements that stakeholders identified as challenging but may protect GSA, such as tenant substitution. GAO found that for fiscal years 2016 to 2018, GSA used the model for only about one-third of potentially eligible leases. GSA has proposed increasing use of the model, but it does not know whether the model as currently used is achieving the anticipated benefits, including reduced lease processing times, or the impact of financial or other risks from this model because GSA has not evaluated its use. Without such an assessment, GSA does not have the information needed to determine whether the simplified lease model is achieving its intended results, whether to make improvements, or how to mitigate any risks.
What GAO Recommends
GAO is making three recommendations, including that GSA: (1) expand its outreach as appropriate to obtain feedback from lessors that are representative of its entire lease portfolio, and (2) evaluate whether the simplified lease model is achieving its intended results. GSA agreed with the recommendations and said it believes there are additional opportunities to expand its outreach efforts and evaluate the simplified lease model. |
<1. Background> <1.1. Key Terms and Definitions> There are various statutes, regulations, and agency policies that set forth how DHS components are to make decisions about, or process, the family members they encounter. For the purposes of this report, we use the following key terms and definitions. Family. Federal immigration law does not specifically define the term "family" for the purposes of identifying family relationships that are to be documented at apprehension. DHS components and other federal agencies use the term "family" for individuals with a variety of relationships such as step-, half-, foster, or adoptive family members. Some family relationships, including parent-child, may be claimed upon apprehension, but CBP may determine that the relationship is invalid. For example, CBP may determine that (1) those claiming a familial relationship are not related or (2) their relationship does not meet the relevant component or agency's operational definition of family. For the purposes of this report, "family" refers generally to noncitizens with claimed familial relationships. Unaccompanied alien child (UAC). The Homeland Security Act of 2002 defines a UAC as a child under the age of 18, who has no lawful immigration status in the United States and who has no parent or legal guardian present in the United States, or if present, no parent or legal guardian available to provide care and physical custody for that child. Family unit. Federal immigration law does not specifically define the term "family unit." However, CBP and ICE policy and guidance documents generally define a family unit as the inverse of a UAC. In other words, a family unit includes a noncitizen child under the age of 18, who has no lawful immigration status in the United States, accompanied by a noncitizen parent or legal guardian who is able to provide care and physical custody. For the purposes of this report, "family unit" refers to this specific subset of family, as previously defined. Dependent. For a number of immigration benefit applications, including asylum, a spouse or child may be included as dependents on a principal's application and derive lawful immigration status from the principal applicant if the applicant is granted relief. Similarly, consistent with regulation, USCIS policy is to include a spouse or child in a principal applicant's positive credible fear determination if they arrived concurrently and the spouse or child wants to be included. In this context, "child" is generally defined in federal immigration law as an unmarried biological or legally adopted child under age 21. For the purposes of this report, we refer to principal applicants' spouses and unmarried children under age 21 as dependents. <1.2. Federal Agencies' Roles and Responsibilities> Family members who are apprehended together may encounter multiple federal agencies and components during their immigration proceedings, including DHS components, HHS's ORR, and the Department of Justice's Executive Office for Immigration Review (EOIR), as shown in figure 1. CBP documents the circumstances of noncitizens' apprehension. After Border Patrol agents or OFO officers apprehend noncitizens, including families, they are to interview each individual, using interpreters if needed, and collect personal information such as their names, countries of nationality, and age. Agents and officers also collect biometric information, such as photographs and fingerprints, from certain individuals.
Border Patrol agents and OFO officers use fingerprints to run records checks against federal government databases to determine if individuals have any previous immigration or criminal history. Agents and officers are to enter information about the individuals in the appropriate automated data system as soon as possible, in accordance with CBP policy. Border Patrol agents and OFO officers print copies of the information they enter into their data systems to create a paper file, known as an A-file, for each noncitizen they apprehend. One of the key required DHS forms in the A-file is Form I-213, Record of Deportable/ Inadmissible Alien. Among other things, this form captures biographic information and includes a narrative section for agents and officers to document the circumstances of the apprehension. According to CBP policy, Border Patrol agents and OFO officers are to determine the validity of family relationships among individuals they apprehend. To do so, for example, they are to review any available documentation, such as birth certificates; monitor interactions between adults and children; and use their law enforcement training, such as interview skills, to help assess the validity of family relationships. After making decisions about the validity of familial relationships, agents and officers are to decide whether and how family members will be detained together while in CBP custody. According to CBP s 2015 National Standards on Transport, Escort, Detention, and Search, CBP will maintain family unity to the greatest extent operationally feasible, absent a legal requirement or articulable safety or security concern that requires separation. According to CBP officials, if individuals are determined to be ineligible for admission into the United States, agents and officers must decide how to process them, which may include placing them into full or expedited immigration removal proceedings, consistent with the Immigration and Nationality Act. In full removal proceedings, individuals have the opportunity to present evidence to an immigration judge to challenge their removal from the United States and apply for various forms of relief or protection, including asylum. In expedited removal proceedings, the government can order individuals removed from the United States without further hearings before an immigration judge unless they indicate an intention to apply for asylum, a fear of persecution or torture, or a fear of return to their home country. Most arriving noncitizens are eligible to be placed into expedited removal proceedings, with certain exceptions, according to Border Patrol and OFO officials. Individuals placed in expedited removal proceedings and who express a fear of persecution or torture are generally subject to mandatory detention under the Immigration and Nationality Act pending a final determination of credible fear of persecution. Regarding family units, in particular, Border Patrol and OFO officials stated that Border Patrol agents and OFO officers typically determine whether ICE has available detention space in one of its family residential centers before placing family units into expedited removal proceedings. ICE and ORR detain or shelter noncitizens and share information about UAC. ICE, among other things, is responsible for detaining and removing noncitizens, including families, who are in the United States in violation of U.S. immigration law and subject to removal. 
ICE officers are to determine whether to detain, release, or remove such individuals based on a variety of factors, including statutory requirements, medical considerations, and the availability of detention space. ICE detains adults over age 18 in detention facilities that are segregated by gender. For family units placed in expedited removal, ICE officers have the authority to accept or deny a CBP referral for detention in one of ICE s family residential centers a decision that ICE officials stated is largely dependent upon available detention space. As of October 2019, ICE operated three family residential centers, with different population characteristics in each center: South Texas Family Residential Center (Dilley, TX), which has a maximum capacity of 2,400 beds for female adults and their male or female children. Karnes County Residential Center (Karnes, TX), which has a maximum capacity of 830 beds for male adults and their male children. Berks County Residential Center (Leesport, PA), which has a maximum capacity of 96 beds for male or female adults and their male or female children. When an individual is transferred from CBP to ICE custody, ICE officers are to enter information about that person in ICE s data system. The paper A-file is also transferred from CBP to ICE and, according to ICE officials, ICE officers generally review the A-file upon transfer to ensure that it is sufficiently complete. ICE s data system automatically pulls some information, such as basic biographic information, from CBP s data systems. ICE officers are to enter new information into ICE s data system, such as the location(s) where officers detained or released the individual and the documents officers served to the individual, among other things. If CBP or ICE officials determine that a child or children under the age of 18 and without lawful status in the United States arrived in the country without an accompanying parent or legal guardian, the child is classified as a UAC and is to be transferred to ORR custody. Additionally, if DHS determines that a child should be separated from their accompanying parent or parents, DHS then considers the child to be a UAC and transfers him or her to the custody of ORR. ORR provides interim care for UAC at its shelters and identifies qualified sponsors in the United States to take custody of the child while the child waits for his or her full immigration proceedings. CBP s data systems can share some information about UAC automatically with ORR, including biographic information such as name, date of birth, and alien number; and information about related UAC, such as siblings, who were apprehended together. To assess the suitability of potential sponsors, ORR staff collects information from potential sponsors, which may include parents or other family members, to establish and identify their relationship to the child. For example, ORR screening of potential sponsors includes various background checks. According to ORR officials, they are required to attempt to contact a child s parent, regardless of the parent s location, any time they place a child with a sponsor. According to ORR officials, ORR is also responsible for coordinating reunification of separated family units if DHS and HHS determine it is appropriate, or if the adult is later determined by a federal court to be a class member in the ongoing Ms. L v. ICE litigation, related to family separations. 
ORR officials said that they rely on ICE to gather additional information, such as detailed information from an adult or UAC s Form I-213, when that information is not available or shared at the time a UAC is transferred to ORR custody. USCIS and EOIR consider claims of relief from removal from the United States. USCIS screens individuals in expedited removal most of whom are in ICE detention facilities for credible fear if they indicate an intention to apply for asylum, a fear of persecution or torture, or a fear of returning to their home country. In this screening, an asylum officer is to review certain documentation from CBP and ICE; perform background checks using various automated databases; interview the individual to obtain more details on his or her fear claim, overall credibility, and the nature of any relationships with family members with whom he or she was apprehended; and determine whether there are any dependents who could potentially be included in the individual s fear determination. The regulation governing the credible fear process allows dependents specifically a spouse or unmarried child under the age of 21 of a principal applicant to be included in the applicant s credible fear determination, if the dependent (1) arrived in the United States concurrently with the principal applicant and (2) desires to be included in the principal applicant s determination. For cases in which USCIS concludes the screening with a positive determination, USCIS is to issue a Notice to Appear, thereby placing the individual into full removal proceedings before an immigration judge. Consistent with regulation, if a principal applicant receives a positive credible fear determination, it is USCIS policy that his or her dependents may be included in the positive determination and be placed into full removal proceedings if the dependent arrived concurrently with the principal applicant and wants to be included in the principal s credible fear determination. For cases in which the asylum officer concludes the screening with a negative determination, USCIS is to refer the individual to ICE for removal from the United States, unless he or she requests a review of the negative determination by an immigration judge. Those in full removal proceedings who apply for asylum before an immigration judge may include a spouse and/or unmarried children under age 21 in their asylum application. If the judge grants asylum to the principal applicant, his or her dependents may also be granted asylum. <1.3. Our Work on Fragmentation, Overlap, and Duplication of Federal Programs> In 2010, Public Law 111-139 included a provision for us to identify and report annually on programs, agencies, offices, and initiatives either within departments or government-wide with duplicative goals and activities. In our annual reports to Congress from 2011 through 2019 in fulfillment of this provision, we described areas in which we found evidence of fragmentation, overlap, and duplication among federal programs, including those managed by DHS. To supplement these reports, we developed a guide to identify options to reduce or better manage the negative effects of fragmentation, overlap, and duplication, and evaluate the potential trade-offs and unintended consequences of these options. 
In this report, we use the following definitions: Fragmentation occurs when more than one agency (or more than one organization within an agency) is involved in the same broad area of national interest and opportunities exist to improve service delivery. Overlap occurs when multiple programs have similar goals, engage in similar activities or strategies to achieve those goals, or target similar beneficiaries. Overlap may result from statutory or other limitations beyond the agency s control. Duplication occurs when two or more agencies or programs are engaged in the same activities or provide the same services to the same beneficiaries. <2. DHS s Processes to Identify, Collect, Document, and Share Information about Apprehended Family Members Are Fragmented> <2.1. DHS Has Not Identified the Information about Family Members Apprehended at the Border That Its Components Collectively Need> DHS has not identified the information about family members apprehended together that its components collectively need or communicated that information to relevant components across the department. Based on our analysis of agency documentation and interviews with agency officials, we determined that CBP, USCIS, and ICE require different information about family members who are apprehended together and each component collects such information that is relevant to its respective operational needs. Specifically, CBP, as the apprehending agency at the border, needs information about family members apprehended together for the purposes of, among other things, informing how family members are to be detained while in CBP custody. In addition, USCIS needs information on family members to identify individuals who may be eligible dependents for credible fear screening purposes. ICE needs information on family members to assist USCIS in identifying eligible dependents and to assist ORR in identifying individuals who may be eligible sponsors for UAC based on their family relationship. While each DHS component has identified the information needed to meet its own specific requirements regarding family members, DHS has not identified information needs regarding family members across its components, resulting in a lack of shared understanding of all components needs and fragmented information collection. For example, the information that CBP collects about family members is not aligned with the information that other components, or agencies that might subsequently encounter these family members, need to identify eligible dependents for credible fear purposes or suitable sponsors for UAC. CBP. Regarding family units, CBP (including Border Patrol and OFO) generally collects information about members of family units including parents and their children under age 18 who are apprehended together. CBP components assign a unique identifier to a family unit that allows members records to be linked. CBP components use the information they collect about members of family units to inform how they are to be detained while in CBP custody and to determine how their immigration proceedings are to proceed. In addition, CBP may collect information about certain other relationships among family members apprehended together because CBP and its components Border Patrol and OFO have policies that allow certain family members who are not defined as family units to be detained together while in CBP custody. 
For example, with regard to Border Patrol, family groups composed exclusively of children under the age of 18 such as siblings or a parent under age 18 and his or her child may be held together in CBP custody, according to Border Patrol guidance. As another example, family members who Border Patrol or OFO agents or officers determine need to be detained together, such as a parent and their child over age 18 with significant medical needs, may also be held together in CBP custody. Border Patrol and OFO have developed processes to collect information about the relationships between family members who are to be detained together, including Border Patrol assigning them a family group number in Border Patrol s data system and OFO documenting the relationship between a juvenile accompanied by a non-parent family member, to facilitate their detention together while in CBP custody. However, CBP generally does not collect information about certain family members such as spouses or children age 18 to 21 because CBP does not have a need to collect such information if, for example, those family members will not be detained together. Other components may require this information, as described below. USCIS. USCIS requires information about family members for credible fear screening and asylum eligibility purposes, consistent with immigration law. Based on our analysis of agency documentation and interviews with agency officials, this differs from the information that CBP collects about family members for its operational purposes. Specifically, spouses and unmarried children under age 21 may be included in their spouse or parent s credible fear screening if the family members arrived in the United States together. At the credible fear screening interview, USCIS is to document the name, country of nationality, and alien number, if known, for the spouse and name, date of birth, country of nationality, and alien number, if known, for the child or children of all individuals being screened for credible fear. In addition, consistent with regulation, it is USCIS policy to include any dependents who arrived concurrently with the principal applicant, such as a spouse or unmarried child under the age 21, on a principal applicant s positive credible fear determination if the dependent wants to be included. This results in both the principal applicant and any dependents being issued a Notice to Appear for full removal proceedings. In addition, USCIS s training on screening families for credible fear states that families do not need to be detained together to be included in a positive determination. In other words, a principal applicant in a credible fear screening may be detained at one of ICE s family residential centers and his or her dependent spouse or child between the ages of 18 and 21 may be detained separately at an adult detention facility. Specifically, since ICE s adult detention facilities are segregated by gender, a female might be detained in a separate adult detention facility from her male spouse. If a parent or spouse receives a positive credible fear screening, his or her dependent s case could be linked and both family members could receive a notice to appear in immigration court for full immigration proceedings. 
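The dependent-inclusion rule described above can be summarized as a simple check. The sketch below encodes only the conditions stated in this report (a spouse or an unmarried child under age 21 who arrived concurrently with the principal applicant and wants to be included); the data structure and function names are hypothetical, and the sketch omits the case-by-case judgments asylum officers make in practice.

```python
from dataclasses import dataclass

@dataclass
class Relative:
    """Hypothetical record of a family member claimed by a principal applicant."""
    relationship: str          # e.g., "spouse" or "child"
    age: int
    married: bool
    arrived_concurrently: bool
    wants_inclusion: bool

def may_be_included_as_dependent(rel: Relative) -> bool:
    """Return True if, under the policy as described in this report, the relative
    could be included in the principal's positive credible fear determination."""
    is_spouse = rel.relationship == "spouse"
    is_qualifying_child = rel.relationship == "child" and rel.age < 21 and not rel.married
    return (is_spouse or is_qualifying_child) and rel.arrived_concurrently and rel.wants_inclusion

# A 19-year-old unmarried child who arrived with the parent and wants inclusion
# qualifies, even if detained separately from the parent.
print(may_be_included_as_dependent(
    Relative(relationship="child", age=19, married=False,
             arrived_concurrently=True, wants_inclusion=True)))   # True
```

The check itself is straightforward; as the report describes, the practical difficulty lies in knowing that such a relative exists and locating him or her when the relationship was not documented at apprehension.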
According to USCIS headquarters officials, USCIS relies on information obtained during the credible fear screening interview to identify family members because the information that USCIS receives from CBP about the circumstances of an apprehension generally does not include details about spouses or children age 18-21. Further, USCIS officials said that family members over age 18 who are apprehended together may be detained in separate ICE facilities and referred to USCIS for fear screenings at different times, which makes it difficult for USCIS and ICE to locate such family members. In addition, USCIS officials said that ICE is often not aware of the family relationship between family members if they are detained separately. Specifically, although ICE is responsible for detaining noncitizens who express fear of returning to their home country before they are screened for such fear by USCIS, ICE officials responsible for detention management told us that (1) they are often not aware of family relationships between family members detained separately and (2) they treat anyone over age 18 as an adult and do not consider that a child age 18 to 21 or a spouse could be a dependent on a credible fear claim. ICE. In addition to assisting USCIS in identifying eligible dependents for credible fear screening purposes, ICE assists ORR in identifying qualified sponsors for UAC. According to ORR, qualified sponsors include, among others, and in order of preference: parent or legal guardian; an immediate relative who previously served as a primary caretaker of the child; an immediate relative who did not previously serve as a primary caretaker of the child; and other distant relatives or unrelated adults with a pre- established relationship with the child. When a child apprehended by CBP is classified as a UAC and transferred to ORR s custody, CBP is to provide ORR with information about family members with whom the UAC was apprehended. However, officials from ORR told us that they sometimes receive UAC referrals either through an automated system or via email from CBP with no information about family members with whom the child was apprehended, but subsequently learn from the child that the child was apprehended with a family member. According to ICE and ORR officials, when ORR has questions about potential sponsors for a child in their care, they coordinate with officials from ICE s juvenile and family management program to obtain additional information about the circumstances of the child s apprehension or family members with whom a child was traveling. ICE officials stated that CBP generally provides the information on family members traveling with UAC to ORR, if CBP is aware of such information; however, according to ICE officials, children may not share all relevant details about their family members with CBP agents and officers when they are apprehended, and they may be more comfortable sharing such details once they are in ORR custody. ICE officials said that they can search their data systems, including law enforcement records, for information about the circumstances of a child s apprehension, which ORR uses when evaluating potential sponsors for the child. ORR cannot access such law enforcement records. For example, ICE can use Border Patrol s event unique identifier to search for information about adults who Border Patrol apprehended at the same time as a child, and can use this information to attempt to identify if there are family relationships between an adult and unaccompanied child. 
ORR officials said that the lack of family member information they receive from CBP or ICE, or delays in receiving such information, can delay the release of a child from a shelter to a qualified sponsor. Our previous work on collaboration has shown that establishing compatible policies, procedures, and other means to operate across agency boundaries can enhance and sustain collaborative efforts and help ensure that fragmented efforts are being managed effectively. Further, leading practices of high-performing organizations include fostering collaboration both within and across organizational boundaries to achieve results. Moreover, federal programs contributing to the same or similar results should collaborate to ensure that program efforts are mutually reinforcing, and should clarify roles and responsibilities for their joint and individual efforts. Our interviews and analysis indicate that the information each DHS component collects about family members meets its own information needs, but does not consider the information needs of other components that might encounter those family members. Officials from CBP and ICE confirmed that they collect information about family members to meet their own operational needs. For example, CBP may not collect information about spouses apprehended together because CBP does not need such information for its operational purposes. Further, Border Patrol and OFO officials we spoke with told us that CBP components collect all relevant information needed for their operational purposes but that CBP is not responsible for collecting information that USCIS needs to identify eligible dependents, including spouses and children age 18 to 21. Without identifying and communicating department- wide information needs with respect to family members who have been apprehended together, DHS does not have reasonable assurance that its components are identifying all individuals who may be eligible for relief from removal from the United States based on their family relationships or that ICE can provide ORR with the information it needs to help evaluate the suitability of potential sponsors for UAC. <2.2. CBP Does Not Routinely Collect and Document Sufficient Information on Apprehended Family Members to Assist Other Agencies Decision- making> CBP s Border Patrol and OFO document the circumstances under which family members are apprehended at or between U.S. ports of entry and, as a result, are in the best position to collect information about their family relationships. However, our analysis of DHS documentation and interviews with officials indicate that CBP does not routinely collect all of the information about family members that is needed to (1) identify eligible dependents as part of the credible fear screening process and (2) evaluate family members for sponsorship placement for UAC. Further, Border Patrol agents and OFO officers do not routinely document that information on the record of apprehension. CBP s Border Patrol agents and OFO officers are to document the circumstances of an apprehension using the required Form I-213, Record of Deportable/ Inadmissible Alien (record of apprehension). The record of apprehension is a key form in the paper A-file and is the official record of an apprehension. Among other things, the record of apprehension captures biographic information about the apprehended individual and includes a narrative section for agents and officers to document details about the circumstances of the apprehension. 
Border Patrol and OFO s guidance indicates that the record of apprehension may be used as evidence in immigration or criminal courts and that omissions or mistakes on the form may have negative consequences. According to Border Patrol officials, the information captured on the record of apprehension varies and there is no requirement that it include information about family members apprehended together. However, USCIS, ICE, and ORR officials told us that they rely on the record of apprehension for such family information. As discussed below, since CBP does not routinely collect sufficient information about family members apprehended together or document such information on the record of apprehension, there are gaps in the information available to other DHS components about family members apprehended together. Information to identify eligible dependents as part of the credible fear screening process. CBP does not routinely collect sufficient information about relationships between family members apprehended together for USCIS and ICE to later identify if such individuals are eligible dependents as part of the credible fear screening process. As previously discussed, consistent with regulation, it is USCIS policy to include any dependents on a principal applicant s positive credible fear determination if the dependents arrived concurrently with the principal applicant and want to be included on the principal applicant s credible fear determination. However, CBP does not routinely collect information about relationships between all parents, children, and spouses apprehended together at the time of their apprehension or share that information with USCIS. Specifically, CBP does not require its agents and officers to collect information about or to document the relationships between certain family members apprehended together, such as spouses and children age 18 to 21. As a result, USCIS s ability to identify eligible dependents is limited. Asylum officers are to ask all individuals they screen for credible fear if they arrived in the United States with other family members. Asylum officers told us that, when CBP does not collect information about potentially eligible dependents especially spouses and children age 18 to 21 they face challenges in identifying and locating such dependents. Asylum officers also told us that when CBP agents and officers do not collect and document information about relationships at the time family members are apprehended, asylum officers must rely on the information that the applicant provides in the credible fear screening interview, rather than using the screening interview to corroborate family information already collected by CBP at the time of the apprehension. In addition, a USCIS official told us that it can be beneficial for USCIS to have information about relationships between all parents, children, and spouses who are apprehended together for other processes such as if one family member placed into expedited removal proceedings is subject to the reasonable fear process because information in one family member s claim can impact other family members ability to meet the threshold for a positive fear determination. Border Patrol, OFO, and ICE officials stated that, due to the volume of apprehensions at the southwest border, Border Patrol and OFO collect information to meet CBP s operational needs, but that the level of detail documented on the record of apprehension may vary. 
Specifically, according to one ICE official responsible for detention at a family residential center and an ICE headquarters official, information about family relationships, including that of spouses, is not consistently documented in the information ICE receives from CBP and shares with USCIS. Since USCIS does not receive consistent information about family members from CBP, USCIS officers must rely on the credible fear screening interview to identify potential eligible dependents. When asylum officers identify eligible dependents during the credible fear screening interview, officers attempt to locate these dependents to link them to their parent s or spouse s case. However, according to USCIS and ICE officials, it can be difficult to locate such dependents if they are not detained together. Specifically, because CBP officers and agents do not routinely collect information about the relationships between spouses or parents and children age 18 to 21 or document such information on the record of apprehension at the time they are apprehended, USCIS and ICE do not have the information about those family relationships that they need to locate and identify eligible dependents. Additionally, individuals may not know certain information such as the alien number of their spouse or child that would help USCIS or ICE locate them. ICE officials told us that they assist USCIS officials in locating spouses and children age 18 to 21 for the purposes of making them dependents on a spouse or parent s credible fear application on a case by case basis, but that tracking down such dependents can be difficult. Further, ICE and USCIS officials told us that because they do not have sufficient information about eligible dependents, it is possible that ICE could remove an eligible dependent from the United States while their spouse or parents credible fear claim was pending, or after their spouse or parent received a positive credible fear determination. Information to assist ORR in making placement decisions for children transferred to its custody. CBP does not collect all information about family members at the time of apprehension that is needed to assist ORR in making placement decisions for UAC transferred to its custody, according to ICE and ORR headquarters officials. When CBP refers a child for placement at an ORR shelter, CBP is to share some information with ORR, including the name, age, and alien number of the child, as well as information about any family members with whom the child was apprehended. ORR officials stated they use this information to assist in making placement decisions for the child. However, ORR officials stated that the information CBP provides when the child is referred may not include information about family members with whom the child was apprehended. Further, according to ORR officials, they do not typically receive the child s Form I-213 which documents the circumstances of the child s apprehension from CBP. ORR officials said that they sometimes receive UAC referrals from CBP without any information about other family members and they may subsequently learn from the child that he or she was apprehended with a family member. Additionally, if ORR officials have questions about a child in their custody, officials from ICE s Juvenile and Family Residential Management Unit told us that they are the liaison between DHS and ORR. ICE officials told us that the level of detail that CBP agents and officers collect for UAC apprehended with family members varies. 
According to an ICE official in ICE s Juvenile and Family Residential Management Unit, the more information that CBP agents and officers provide about the circumstances of a child s apprehension, the better equipped ICE is to answer ORR s questions about familial relationships and potential suitable sponsors for a particular child, as well as to investigate potentially fraudulent familial relationships or circumstances in which an adult apprehended with a child might not be a suitable sponsor. According to ORR officials, they also rely on ICE to provide information about the suitability of reunifying a parent and child where ORR determines that a UAC was separated from their parent or legal guardian with whom they arrived. As we reported in February 2020, DHS and HHS have developed interagency agreements for the transfer and placement of UAC between the two departments; however, information sharing gaps remain. Specifically, ORR headquarters officials stated that they have experienced delays in releasing a child to a sponsor due to missing information about a parent or the inability to notify a parent in ICE detention about sponsorship decisions. We recommended that DHS and HHS should collaborate to address information sharing gaps to ensure that ORR receives information needed to make decisions for UAC, including those apprehended with an adult. DHS and HHS concurred with the recommendations. Border Patrol and OFO developed their own requirements for what information they collect, if any, about family members apprehended together based on their operational needs. However, because CBP agents and officers collect information and document the circumstances of apprehensions when families first arrive in the United States, they are best positioned to identify those family members who were apprehended together and the relationships among them. Additionally, the information that CBP agents and officers collect may impact how family members are subsequently identified or processed by other federal agencies. CBP officials said that their components collect limited information about family members apprehended together because they do not have an operational need for such information and because collecting it is time intensive in an environment where agents and officers are managing a large volume of apprehensions. However, because CBP does not routinely collect sufficient information about family relationships at the time of apprehension, or document that information on the record of apprehension, DHS components do not have information necessary to identify potentially eligible dependents for credible fear purposes and ICE does not have sufficient information to assist ORR in making suitable sponsorship determinations. Further, while we recognize that the collection of additional information on family members can be time intensive for CBP, as the apprehending agency, CBP is best positioned to collect and document information on family members apprehended together. In addition, ICE, USCIS, and ORR may expend resources themselves trying to identify family relationships for their own operational purposes. As previously noted, our prior work on collaboration has shown that establishing compatible policies, procedures, and other means to operate across agency boundaries can enhance and sustain collaborative efforts and help ensure that fragmented efforts are being managed effectively. 
In October 2019, CBP officials acknowledged that it could be helpful to consider other agencies information needs when collecting information about apprehended families. Collecting information about the relationships between family members apprehended together and documenting that information on the Form I-213 could help address fragmentation among DHS components and improve the information available to other agencies, such as ORR, to ensure that relevant information is available to support decisions on individuals administrative immigration or other proceedings. <2.3. DHS Components Data Systems Have Fragmented Information about Family Members> DHS does not have a mechanism to link the records of family members apprehended together across its components. Specifically, CBP s data systems can assign unique family identifiers to link records of certain family members together, as appropriate, upon apprehension. CBP uses these unique identifiers to facilitate the detention of family members together in CBP custody. They also provide a mechanism for CBP to search for and identify family members that share a unique identifier. However, those identifiers are not readily accessible and usable to USCIS and ICE, which also have operational needs to identify and review records of family members apprehended together. Further, USCIS and ICE s data systems do not assign unique family identifiers. Because DHS s data systems do not have shared family identifiers to link family members, DHS components may not have access to all the information about family members they need to make effective and efficient operational decisions. CBP s data systems assign unique family identifiers. Regarding family units, CBP components have guidance on how Border Patrol agents and OFO officers are to enter information on family units in their respective data systems. CBP s data systems assign a unique identifier to each family unit and link their records, and agents and officers are to collect the following information about family units: Border Patrol guidance indicates that agents are to process adult parents and their children under age 18 who are apprehended together as members of a family unit, and the data system assigns each family unit a unique family unit identifier. This identifier links the records of the family unit members together, and allows agents to search for family unit members using that number. OFO is deploying a new data system and, as of October 2019, OFO officials said that they planned for the new system to be deployed along the southwest border on an ongoing basis as conditions allow. OFO documentation on the new system indicates, and OFO officials told us, that the new system will allow OFO officers to assign a unique family identifier to members of a family unit and will allow officers to document the familial relationship between members of family units. Border Patrol s data system can also assign a unique family group identifier to family members whom agents determine should be detained together for Border Patrol s operational purposes. According to Border Patrol guidance and officials, family group numbers may be used to link family members during Border Patrol detention. Further, these numbers may be documented on the record of apprehension and may be shared with ORR to, for example, link the records of two related UAC when Border Patrol transfers them to ORR custody. 
However, Border Patrol agents have discretion to determine whether family members apprehended together are to be assigned a unique family group identifier, according to agency documentation and our interviews with agency officials. CBP components do not have a mechanism to share their unique family unit or family group identifiers with ICE or USCIS in a way that is readily accessible and usable. CBP s data systems share limited information on apprehended family members with ICE s data system. When ICE receives custody of a family unit from CBP, ICE officers create a record for each family member in ICE s data system. ICE s data system pulls some information about each family member automatically from CBP s data systems. For example, ICE officers can find basic biographic information about individual family members apprehended by Border Patrol by searching using the individual s alien number, a DHS unique identifier assigned to individuals. In addition, ICE identified a need for more information to help identify family units in ICE custody and developed a mechanism to receive that information from CBP. As of August 2018, ICE s data system displays a family unit banner in the data records of those noncitizens CBP processed as a member of a family unit. This banner flags for ICE officers that the individual was identified by CBP as a family unit member, and ICE s data system displays the Border Patrol or OFO unique family unit identifier. ICE s family unit banner was a positive development and allows ICE to identify individuals in its custody that CBP processed as a member of a family unit. However, the family unit banner does not provide ICE all the information it needs to identify family members, according to ICE officials. Specifically, ICE can see that a particular individual was processed by CBP as a member of a family unit, but ICE cannot use the system to identify other members of that person s family because ICE s data system does not link or display alien numbers for individuals who share a family unit identifier. According to Border Patrol officials, because ICE and Border Patrol s data systems are both housed within ICE s Enforcement Integrated Database repository, ICE should have access to the family unit information collected by Border Patrol. However, ICE officials stated that ICE cannot use the information on family units that CBP s data system shares with ICE s data system to, for example, search for family unit members using Border Patrol s unique family unit identifier. According to ICE officials, ICE officers must use a time consuming and manual process to research potential family associations or identify family unit members using the information CBP provides to ICE. Further, ICE s data system cannot link the records of family unit members in its custody, although these family unit members are generally detained together in one of ICE s family residential centers. According to ICE guidance and ICE officials, ICE s data system only displays family unit information as entered by CBP and such information is not available for individuals identified as members of a family unit after entering ICE custody. As of November 2019, ICE headquarters officials stated that they are working with the ICE data unit to create a new module that would enhance ICE s ability to link and track family units in its data system, including expanding ICE s use of existing family unit information as entered by CBP. 
According to ICE officials, ICE has established a project team for this effort and hopes to deploy the updates in the fourth quarter of fiscal year 2020. However, ICE did not provide any documentation on this effort, such as a project plan with time frames for deploying these system updates, to verify these plans. Although ICE has taken steps to identify individuals in its custody that CBP documented as members of a family unit, ICE does not have a mechanism to link the records of family unit members together. In addition, ICE does not have a mechanism, such as a unique family group identifier, to link the records of other family members apprehended together. ICE needs information about these other family members to (1) assist USCIS in identifying eligible dependents for credible fear screening purposes and (2) assist ORR in identifying family members with whom a UAC was apprehended and assessing whether they might be suitable sponsors. According to ICE officials, ICE uses a manual process to identify family members apprehended together. Without a mechanism, such as a shared unique identifier, that ICE can use to access information CBP gathered about family members apprehended together, ICE cannot ensure that it has the information it needs to identify eligible dependents, or to answer ORR s questions about UAC with the best available information. As of November 2019, ICE is enhancing its data system s ability to link and track family unit members. However, it is too early to know if ICE s planned system enhancements will include a mechanism that will allow ICE officers to identify family members apprehended together. CBP and ICE s data systems do not share information on apprehended and detained family members with USCIS s data system. USCIS s data system does not receive information about family members (parents, spouses, and children) from CBP or ICE in an automated manner. According to USCIS officials, because CBP s and ICE s data systems do not have a mechanism such as a linked unique family identifier to share information about potential dependents with USCIS s data system automatically, the credible fear interview may be the only way for USCIS to determine that an individual being screened for credible fear was apprehended with other family members, especially if any members of the family are detained separately. For family members detained separately, according to USCIS officials, USCIS asylum officers attempt to locate spouses and children age 18 to 21 when they are made aware of such family relationships as part of the credible fear screening process. However, due to limitations in data sharing between CBP, ICE, and USCIS, USCIS may not be able to locate such spouses and children age 18 to 21 in some circumstances. In particular, USCIS officials told us that, if the spouse or child did not make his or her own claim of credible fear while in CBP or ICE custody, USCIS asylum officers use a time consuming and manual process to attempt to identify family members apprehended together, using data that ICE makes available to USCIS. ICE officials told us that they assist USCIS officials in locating spouses and children age 18 to 21 for the purposes of making them dependents on a spouse or parent s credible fear application on a case by case basis, but that tracking down such dependents can be difficult. 
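To illustrate the kind of lookup a shared identifier would make routine, the short sketch below is a minimal, hypothetical example only: it is not drawn from any CBP, ICE, or USCIS system, and every field name, identifier format, and helper function is an assumption made for illustration. It assumes a family identifier assigned at apprehension is carried in each component's record, and shows how a component could then retrieve everyone apprehended with a given individual and flag possible credible-fear dependents (a spouse or unmarried child under 21) for an officer to verify, rather than reconstructing those relationships manually.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ApprehensionRecord:
    alien_number: str          # individual identifier already used across DHS
    family_id: Optional[str]   # hypothetical shared identifier assigned at apprehension
    relationship: str          # e.g., "parent", "child", "spouse", as documented by the apprehending agent
    age: int
    married: bool = False

def family_members(records: List[ApprehensionRecord], family_id: str) -> List[ApprehensionRecord]:
    """Return every record that shares the given family identifier."""
    return [r for r in records if r.family_id == family_id]

def potential_dependents(records: List[ApprehensionRecord],
                         principal: ApprehensionRecord) -> List[ApprehensionRecord]:
    """Flag possible credible-fear dependents (spouse, or unmarried child under 21)
    apprehended with the principal, for an officer to verify."""
    if principal.family_id is None:
        return []
    members = family_members(records, principal.family_id)
    return [m for m in members
            if m.alien_number != principal.alien_number
            and (m.relationship == "spouse"
                 or (m.relationship == "child" and m.age < 21 and not m.married))]

# Example: a parent apprehended with a 19-year-old unmarried child.
records = [
    ApprehensionRecord("A001", "FAM-0001", "parent", 42),
    ApprehensionRecord("A002", "FAM-0001", "child", 19),
    ApprehensionRecord("A003", None, "unknown", 30),
]
print([m.alien_number for m in potential_dependents(records, records[0])])  # ['A002']
```

In this sketch the identifier does the linking work; a banner-style flag of the kind described above can indicate that someone was processed as part of a family unit but cannot, by itself, return the other members of that family.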
USCIS has developed a mechanism to link family members in its own data system, but this linkage is for USCIS s purposes and is unrelated to the unique family unit or family group identifier assigned by CBP components at the time family members are apprehended or to the family unit banner that ICE s data system displays for certain family units. Additionally, USCIS s data system does not assign a unique identifier to family members whose cases are linked for credible fear screening purposes and USCIS does not have access to CBP s family identifiers. A shared family member unique identifier could allow USCIS, CBP, and ICE access to more complete information about family members who were apprehended together and could give USCIS and ICE, in particular, greater assurance that they have complete information about family members apprehended together that they require for their operational needs. Our previous work on collaboration has shown that identifying and addressing needs by leveraging resources, such as information technology resources, can enhance and sustain collaborative efforts, and help ensure that fragmented efforts are being managed effectively. Border Patrol, OFO, ICE, and USCIS data systems were developed to meet each component s operational needs, leading to (1) data system integration limitations and (2) variation in the type of information that each component collects or requires. Components have implemented ways to share some information across their data systems such as ICE s family unit banner for members of family units processed by Border Patrol and USCIS s ability to access some information in ICE s data system to attempt to identify eligible dependents of individuals who have received a positive credible fear determination but such information sharing is limited, and the components do not have a unique shared identifier to identify family members apprehended together. Moreover, DHS and its components have not considered options to share information on family members across components in an automated manner, as each component has been focused on its own operational needs for such information. Evaluating options for developing a shared unique family member identifier across CBP, ICE, and USCIS that would allow each component access to certain information about family members apprehended together would help bridge the information gaps about family relationships between components caused by DHS s fragmented data systems. Further, it would give DHS greater assurance that its components can identify family members who were apprehended together, even after they leave CBP custody. It would also mitigate the risk that, lacking such information, DHS could remove individuals from the United States who may have been eligible for relief based on their family relationship. <3. Conclusions> Although CBP s apprehensions of family members have increased significantly in recent years, DHS has not taken steps to better manage fragmentation, including identifying, collecting, documenting, and sharing the information its components collectively need about family members apprehended together. The information each DHS component collects about family members apprehended together meets its own information needs. However, it does not consider the information needs of other components that might encounter those family members. 
Border Patrol and OFO officials we spoke with told us that CBP components collect all relevant information needed for their operational purposes but that CBP is not responsible for collecting information that USCIS needs to identify eligible dependents, including spouses and children age 18 to 21. Without identifying information needs with respect to family members who have been apprehended together and without communicating that information department-wide to relevant components DHS does not have reasonable assurance that its components are identifying all individuals who may be eligible for relief from removal from the United States based on their family relationships. In addition, as the component that apprehends individuals arriving at the border, CBP is best positioned to document the circumstances of an apprehension, including by collecting and documenting information about family members who arrive in the United States together. Collecting information about the relationships between family members apprehended together and documenting that information on the Form I- 213, the record of apprehension, would improve management of fragmentation among DHS components and improve the information available to other agencies, such as ORR, to ensure that relevant information is available to support decisions on individuals administrative immigration or other proceedings. Lastly, DHS components data systems were developed to meet each component s operational needs, leading to data system integration limitations and variation in the type of information that each component collects or requires. Components have implemented ways to share some information across their data systems, but such information sharing is limited. Evaluating options for developing a shared unique family member identifier across CBP, ICE, and USCIS that would allow each component access to certain information about family members apprehended together would help bridge the information gaps about family relationships between components caused by DHS s fragmented data systems. <4. Recommendations for Executive Action> We are making the following four recommendations to DHS: The Secretary of Homeland Security should identify the information about family members apprehended together that its components collectively need to process those family members and communicate that information to its components. (Recommendation 1) The Secretary of Homeland Security should ensure that, at the time of apprehension, CBP collects the information that DHS components collectively need to process family members apprehended together. (Recommendation 2) The Secretary of Homeland Security should ensure that CBP documents the information that DHS components collectively need to process family members apprehended together on the Form I-213. (Recommendation 3) The Secretary of Homeland Security should evaluate options for developing a unique identifier shared across DHS components data systems to link family members apprehended together. (Recommendation 4) <5. Agency Comments> We provided a draft of this report to DHS and HHS for their review and comment. DHS provided formal, written comments, which are reproduced in full in appendix I. DHS and HHS also provided technical comments on our draft report, which we incorporated, as appropriate. DHS concurred with our recommendations and described actions planned or underway to address them. 
For example, in response to our recommendation that DHS identify the information its components need about family members apprehended together, DHS stated that the DHS Office of Immigration Statistics within the DHS Office of Strategy, Policy, and Plans will work with CBP, ICE, USCIS, and interagency partners to establish a comprehensive set of information to collect on family members apprehended at the border. Further, in response to our recommendations that DHS collect and document the information its components collectively need about family members apprehended at the border, DHS stated that after DHS identifies the information about families apprehended together that its components collectively need, CBP will work with DHS's policy office to ensure all required information is collected at the time of apprehension on the Form I-213. In addition, Border Patrol and OFO will issue guidance to their agents and officers to ensure they document the information about family members apprehended together that DHS components collectively need. Regarding our recommendation that DHS evaluate options for developing a unique identifier shared across DHS components' data systems to link family members apprehended together, DHS stated that its policy office will work with CBP, ICE, and USCIS to develop a unique shared identifier linking family members apprehended together. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Acting Secretary of Homeland Security, and the Secretary of Health and Human Services. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II.
Appendix I: Comments from the Department of Homeland Security
Appendix II: GAO Contact and Staff Acknowledgments
<6. GAO Contact>
<7. Staff Acknowledgments>
In addition to the contact named above, Kathryn Bernet (Assistant Director), Mary Pitts (Analyst in Charge), Carissa Bryant, Miranda Cohen, Michael Harmond, Stephanie Heiken, Leslie Sarapu, Jessica Walker, Dominick Dale, Eric Hauswirth, Jan Montgomery, Heidi Nielson, and Michele Fejfar made key contributions to this work.
Why GAO Did This Study
In fiscal year 2019, CBP reported apprehending more than 527,000 noncitizen family unit members (children under 18 and their parents or legal guardians) at or between U.S. ports of entry along the southwest border—a 227 percent increase over fiscal year 2018. GAO was asked to review issues related to families—including family units—arriving at the southwest border.
This report examines the extent to which DHS has identified, collected, documented, and shared information its components need to inform processes for family members apprehended at the border. GAO analyzed DHS documents; interviewed DHS officials; and visited DHS locations in Arizona, California and Texas, where CBP apprehensions of family units increased in 2017. GAO compared the information gathered with leading practices in collaboration to evaluate DHS components' processes for apprehended family members.
What GAO Found
The Department of Homeland Security's (DHS) processes to identify, collect, document, and share information about family members apprehended at the southwest border are fragmented. DHS's U.S. Customs and Border Protection (CBP) apprehends family members and determines how information about each individual—and his or her relationship to other family members—will be collected and documented. Other DHS components, such as U.S. Immigration and Customs Enforcement (ICE), use information collected at the time of apprehension to inform how those who are members of a family, including children, will proceed through immigration proceedings. Family members apprehended at the border and placed into expedited removal that indicate an intention to apply for asylum, or a fear of persecution or torture or fear of return to their home country, are referred to DHS's U.S. Citizenship and Immigration Services (USCIS) for a credible fear screening. However,
DHS has not identified the information its components collectively need about apprehended family members. Each DHS component collects information to meet its own operational needs, and does not consider the information needs of other components. For example, the information about family members that CBP needs differs from the information about family members that USCIS needs. CBP officials told us they would not generally identify spouses and children age 18 to 21 apprehended with a parent as family members, although USCIS's definition of a dependent for credible fear screening purposes includes spouses and unmarried children under age 21.
CBP collects information about certain family members for its operational purposes, but does not collect and document information at the time of apprehension that other DHS components may later need. Specifically, CBP collects and documents information about parents and their children under age 18 who are apprehended together. However, consistent with regulation, USCIS policy is to include any dependents who arrived concurrently with the principal applicant, such as a spouse or unmarried child under age 21, on a principal applicant's positive credible fear determination if the dependent wants to be included. According to USCIS and ICE officials, it can be difficult to identify spouses and children age 18 to 21 because CBP does not regularly document such family relationships.
DHS does not have a mechanism to link the records of family members apprehended together across its components that need this information. As a result, DHS components may not have access to all the information about family members they need to make effective operational decisions.
Because DHS has not identified the information all of its components collectively need to process family members apprehended at the border, collected and documented that information at the time of apprehension, and evaluated options to share that information across components, consistent with leading practices in collaboration, DHS risks removing individuals from the United States who may have been eligible for relief or protection based on their family relationship.
What GAO Recommends
GAO is making four recommendations to DHS, including that DHS identify the information its components collectively need to process family members apprehended together, collect and document that information at the time of apprehension, and evaluate options for developing a unique identifier shared across DHS's data systems to link family members apprehended together. DHS concurred with the recommendations. |
<1. Background>
We previously reported that the Army began its modernization efforts, defined as efforts to enhance its capabilities and upgrade its weapon systems, in the fall of 2017. As a part of this effort, the Army identified six modernization priorities.
1. Long-Range Precision Fires, focused on improving the targeting, range, and lethality of, among other things, artillery and rockets.
2. Next Generation Combat Vehicle, focused on developing manned and unmanned combat vehicles with updated firepower, protection, mobility, and power generation.
3. Future Vertical Lift, focused on developing manned and unmanned aircraft capable of attack, lift, and reconnaissance missions.
4. Army Network, focused on developing a mobile system of hardware, software, and infrastructure for reliable and secure communications.
5. Air and Missile Defense, focused on improving capabilities for protection against modern and advanced air and missile threats.
6. Soldier Lethality, focused on improving capabilities, equipment, and training for all fundamentals of combat, including shooting, moving, communicating, protecting, and sustaining combat operations.
We also reported that, to fund these priorities, in 2017 the Army realigned over $1 billion in science and technology funding away from efforts that it determined did not align with these priorities. The Army subsequently announced plans to spend an additional $7.5 billion on these priorities over the next 5 years.
<1.1. Army Futures Command>
Army Futures Command was formed less than a year ago and has not finalized its structure. The Army established the Army Futures Command in June 2018 to consolidate its modernization efforts under one entity, and it began initial operations in July 2018. Army Futures Command selected Austin, Texas, as its headquarters location and began to integrate and align resources and personnel. The new command headquarters includes a number of administrative and functional offices that report directly to it, not all of which are co-located with the command in Austin. Specifically:
Administrative offices are responsible for providing contracting support, legal support, and small business engagement support to headquarters. These offices are located in Austin, Texas.
Army Applications Laboratory is responsible for coordinating outreach to businesses, including small businesses, for headquarters. The Army Applications Laboratory is located in Austin, Texas.
Cross-functional teams are the eight teams responsible for identifying capability needs and developing requirements associated with the Army's six priorities. The teams are located in different parts of the country in areas relevant to their capability focus.
Medical Research and Development Command is responsible for seeking and developing new medical technologies for use by the Army. This command is in the process of transferring from Army Medical Research and Materiel Command and is located at Fort Detrick, Maryland.
In addition to these organizations, the command has three major subordinate components, comprised of several existing requirements and technology development organizations. Specifically:
Futures and Concepts Center is responsible for identifying and prioritizing capability and development needs and opportunities. This organization subsumed the Army Capabilities Integration Center, formerly part of Army Training and Doctrine Command, on December 7, 2018, and is located at Fort Eustis, Virginia.
Combat Capabilities Development Command is responsible for conceptualizing and developing solutions for identified needs and opportunities. This organization subsumed the Research, Development, and Engineering Command formerly a part of Army Materiel Command on February 3, 2019 and is located at Aberdeen, Maryland. Combat Systems Directorate is responsible for refining, engineering, and producing new capabilities. The directorate is to communicate with the program executive offices and program management offices reporting to the Assistant Secretary of the Army for Acquisition, Logistics, and Technology. The command is in the process of establishing Combat Systems Directorate in Austin, Texas. Army Futures Command is expected to become fully operational in July 2019, when its headquarters and its subordinate components are fully staffed. Locations for components of the new command are shown in figure 1. According to Army Futures Command officials, as part of their modernization efforts, they plan to coordinate with other existing Army organizations. These include the Office of the Assistant Secretary of the Army for Acquisition, Logistics, and Technology the civilian authority responsible for the overall supervision of acquisition and contracting for the Army. They also plan to coordinate with Army Contracting Command, which is the principle buying agent and provider of contracting support for the Army and operates within Army Materiel Command. <1.2. Small Business Engagement> As we previously stated, others have reported that small businesses are a vital part of the defense industrial base and engaging with them can produce innovative capabilities and emerging technologies to support the warfighter. For the purposes of this report, engagement with small business is defined as a range of activities including: initial outreach to small businesses to identify companies that may have useful information or ideas, information sharing on the Army s capability needs, and formal engagement including processes to enter into business relationships, including contracts and other arrangements. The Small Business Act requires federal agencies to establish annual goals that provide small businesses with contracting opportunities to the maximum extent practicable. Pursuant to the Act, the Small Business Administration negotiates annual small business goals with federal agencies, including the Department of Defense. A portion of the overall goals for the Department of Defense is assigned to the various military components including the Army that have contracting authority. The Army Office of Small Business Programs, responsible for enhancing Army contracting opportunities for small businesses, then assigns portions of the Army s goal to its four major commands with contracting authority: Army Materiel Command, Army Medical Command, Army Corps of Engineers, and the National Guard Bureau. Army Materiel Command is the primary command responsible for the execution and oversight of contracts for Army Futures Command. Historically, the Army has engaged with small businesses in a variety of ways, including awarding contracts for various goods and services that support the warfighter. Federal contracts, including those awarded by the Army, are tracked in the Federal Procurement Data System-Next Generation database. 
Using data provided by the Army from this database, we identified over 4,500 contracts awarded to small businesses for research and development efforts in the 5 years prior to the establishment of Army Futures Command fiscal years 2013 through 2017. The number of contracts awarded during this time period is summarized in table 1. We identified almost $2.3 billion in obligations to small businesses for research and development from fiscal years 2013 through 2017, or about half of the total amount the Army obligated for all research and development contracts. The obligations for these Army contracts awarded to small businesses for research and development are summarized in table 2. These contract obligations for research and development went to 1,815 small businesses throughout the United States from fiscal years 2013 through 2017. Figure 2 shows this information for each state as well as the District of Columbia and Puerto Rico. About half of the Army contract awards and obligations to small businesses for research and development from fiscal years 2013 through 2017 supported two organizations Research, Development, and Engineering Command and Medical Research and Materiel Command which have transitioned, or are in the process of transitioning, to Army Futures Command. To support research and development efforts for these two organizations, the Army awarded 2,948 out of a total 4,514 small business contracts, and obligated about $1.3 billion out of $2.3 billion from fiscal years 2013 through 2017. In addition to the contracts discussed above, the Army can use other arrangements to engage with small businesses. These other arrangements include: agreements using other transaction authority for research and development activities and developing prototypes; financial assistance mechanisms including grants which are used when the principal purpose of the relationship is to transfer a thing of value to the recipient to carry out a public purpose authorized by law, and substantial involvement by the agency is not expected and cooperative agreements which are also used to transfer a thing of value to carry out a public purpose, but where substantial involvement by the agency is expected; and cooperative research and development agreements under which the government and nonfederal partners may share resources and increase the commercialization of federally developed technology. Unlike contracts, the Federal Procurement Data System-Next Generation database cannot be used to quantify engagement with small businesses using these other arrangements. For example, the financial assistance mechanisms, as well as cooperative research and development agreements, are not generally tracked in the Federal Procurement Data System-Next Generation database. In addition, while it is the Department of Defense s policy to report the use of other transaction authority for prototype projects in the Federal Procurement Data System- Next Generation, the data for this reporting does not distinguish business size. As a result, it cannot be used to quantify the Army s engagement with small businesses under this arrangement. <2. Army Did Not Conduct Analyses Specific to Small Business, but Army Futures Command Stated It Considers Small Business Engagement Important> The Army conducted several analyses related to its modernization efforts, including those directly focused on the creation of Army Futures Command. 
We identified the following key analyses the Army used to support its modernization efforts: In October 2017, Army reviewed its science and technology portfolio and determined which investments contributed to the Army s modernization priorities and which might be curtailed or eliminated to realign funding. According to Army officials, this review was focused on identifying solutions to known capability needs, not on how small businesses would be affected by the realignment of funds. In early 2018, Army analyzed several options for the roles, responsibilities, staffing, and organizational structure for the proposed Army Futures Command. This analysis did not include an assessment of how small business would be affected by its establishment. In April 2018, Army completed a report on its modernization strategy as mandated by the Congress. The report focused on warfighting challenges, risks, costs, and acquisition timelines for fielding future capabilities. It also included analyses of near-peer competitors, operational requirements, strategic portfolio analyses, and capability gaps. It did not include information on what role, if any, small businesses would have in developing or supplying the means to close capability gaps. Multiple Army officials explained that they did not specifically analyze the effect of modernization on small business as they anticipated continuing their current level of engagement with these entities and perhaps increasing it. Further, senior Army Futures Command officials stated that they consider engagement with small businesses to be critical to their modernization efforts as well as a key aspect of their mission. They also noted that the command s headquarters location in Austin, Texas was chosen, in part, because of its close proximity to science, technology, and engineering talent and small business start-ups that can provide innovative solutions. <3. Army Futures Command Is Taking Steps to Engage with Small Businesses, but Is Not Fully Leveraging Existing Relevant Army Expertise> <3.1. Army Futures Command Stated It Is Continuing Small Business Engagement Efforts of Subordinate Commands and Taking Initial Steps to Enhance Engagement> Senior Army Futures Command officials told us they intend to continue the small business engagement efforts undertaken by components being integrated into the new command. Command officials stated that organizations transitioning to Army Futures Command will continue engaging with small businesses as they have in the past. For example, organizations transitioning to Army Futures Command awarded about $1.3 billion to hundreds of small businesses from fiscal years 2013 through 2017. In addition, prior to transitioning to the new command, the Combat Capabilities Development Command Army Research Laboratory and the Medical Research and Materiel Command participated in outreach events, such as industry days and conferences focused on small businesses, to network with and identify small businesses for potential future awards. According to officials from these commands, these efforts have historically led to business relationships using a variety of arrangements, including contracts, agreements using other transaction authority, grants, cooperative agreements, and cooperative research and development agreements. Officials from Army Futures Command stated that the past efforts of its components aimed at small business engagement would continue. 
The command also plans to continue utilizing the Small Business Innovation Research and Small Business Technology Transfer programs to award contracts, grants, and cooperative agreements to small businesses. Army Futures Command also intends to use their cross-functional teams to enhance small business engagement. These teams identify capability needs and requirements derived from the Army s six modernization priorities. Officials told us that these cross-functional team efforts can serve as a way to focus small business engagement. For example, the cross-functional teams develop problem statements that describe the capabilities currently needed by the warfighter for a specific activity, such as a need for better communications and networking equipment. These problem statements can then be shared with small businesses as part of outreach efforts such as challenge competitions or industry days and lead to discussions about potential solutions. In addition, Army Futures Command officials told us the command intends to enhance its small business engagement through several initiatives some of which are underway and some of which are in development. Officials told us they were not certain how many of these initiatives have led to specific contracts or awards, but noted that they had in some cases. Command officials told us that they have undertaken four initiatives to engage with small businesses for research and development: Army Research Laboratory Open Campus 2.0 is based on an existing Army Research Laboratory program to transition scientific research from universities to Army technology concepts. It will work with the research communities within universities to develop these concepts and potentially commercialize them. This program is currently directed by the office of the Deputy Commanding General, which is located at the command s headquarters in Austin, Texas. Army Capability Accelerator is a new initiative that engages small businesses in developing and maturing concepts into prototypes and validating early-stage technologies. The accelerator is managed by the Army Applications Laboratory, which is located with the command s headquarters in Austin, Texas. It also provides the support and infrastructure needed to accelerate small businesses concepts into solutions for warfighter capability gaps. Army Capability Accelerator has offices in Austin, Texas, and New York City, New York, and Army Futures Command intends to establish additional offices across the country. Army Capability Accelerator has hosted or co-hosted events allowing small businesses to demonstrate their capabilities and engage with the command. For example, the Austin office hosted a challenge competition in September 2018 to develop a solution for countering a drone threat. Similarly, according to officials, the New York City office hosted a challenge competition in December 2018 where the command funded awards to small businesses for positioning, navigation, and timing capabilities. Army Strategic Capital is a proposed restructuring of a prior initiative intended to leverage venture capital to offset Army development costs through co-investment with existing Army Small Business Innovation Research and Small Business Technology Transfer programs. According to Army Futures Command officials, this initiative will be managed by the office of the Deputy Commanding General in Austin, Texas, but is in the planning stages and could involve legislative or policy changes to clarify or augment the authorities of the command. 
Halo is a new initiative intended to accelerate the adaptation and transition of commercial and startup-derived products to Army applications and programs. This initiative involves more mature technologies and focuses on the acceleration and integration of prototypes. Army officials stated that Army Applications Laboratory will manage this initiative and that it is under development. These four initiatives are described further in Figure 3 below: Army officials noted that many of their new initiatives address concerns raised by small businesses in working with the government, including the Army, on research and development activities. According to a representative involved with the capability accelerator office in Austin which involves a private company that works with small businesses to facilitate opportunities both across the private sector and, now, with the Army small businesses have expressed concerns about working with the government. Specifically, these representatives identified concerns related to barriers to entry, length of time to reach an award, and the complexity of the government contracting process, among others. Similarly, representatives from the capability accelerator office in New York City stated that the Army needs a way to increase its visibility to small businesses in order to attract the interest of these companies. Army Futures Command officials acknowledged these concerns and said that they are developing efforts to alleviate or overcome them. For example, as part of its Halo initiative, the Army created a program intended to guide small businesses through the government contracting processes. In addition, Halo also plans to use business arrangements designed to decrease the time between initial contact with small businesses and the award of contracts or other agreements. <3.2. Army Futures Command Has Not Fully Leveraged Army s Small Business Expertise but Is Working to Improve Coordination> In its initial efforts to enhance engagement with small businesses, Army Futures Command did not fully leverage the expertise of other Army organizations that previously facilitated small business engagement. Various Army officials have identified several early instances in which the command took steps to engage with small businesses without consulting other Army offices with relevant expertise. For example: Army Office of Small Business Programs According to Army Office of Small Business Programs officials, the command did not consult with them (1) before engaging with small businesses in Texas for research and development efforts; (2) when establishing its small business office, which is still ongoing; and (3) before announcing hiring positions for that office. Army Office of Small Business Programs is positioned to provide direct support to various commands on small business activities. In particular, we previously reported that small business offices are responsible for assisting agencies in increasing small business participation and provide advice on acquisition strategies and market research. Subordinate Commands According to Army officials, Army Futures Command has not fully engaged the organizations that transitioned, or are transitioning to, the command in terms of small business research and development efforts. Combat Capabilities Development Command and its subordinate command Army Research Laboratory, these organizations have years of experience working with small businesses on research and development efforts. 
Army Research Laboratory is the Army lead for the Small Business Technology Transfer program, and participates in the Small Business Innovation Research program along with other Combat Capabilities Development Command organizations; both of which are designed to stimulate technological innovation. Combat Capabilities Development Command officials stated they have had limited involvement with Army Futures Command headquarters on small business research and development issues. In addition, Medical Research and Development Command officials stated that Army Futures Command headquarters has not interacted with them on small business engagement beyond planning for the organization s transfer to Army Futures Command. Historically, Medical Research and Materiel Command participated in the Small Business Technology Transfer and the Small Business Innovation Research programs and conducted outreach to small businesses through various events, such as industry days and conferences focused on small businesses. Office of the Assistant Secretary of the Army for Acquisition, Logistics, and Technology We reported in January 2019 that it was not yet clear how Army Futures Command will coordinate its responsibilities with the Office of the Assistant Secretary of the Army for Acquisition, Logistics, and Technology. The office conducts outreach to small businesses, sponsors challenge competitions, and promotes small business participation in Army acquisitions. More recently, according to Army officials, the command is seeking to improve and formalize coordination roles and responsibilities related to research and development within and outside the command. For example, Although a formalized agreement between the command and Army Office of Small Business Programs does not yet exist, the command is now actively consulting with this office. According to Army small business officials, the command has been familiarizing small business staff with their office and its small business research and development efforts. The command has also been establishing its small business office with support from Army Office of Small Business Programs. In addition, Army small business officials stated that the office is assessing the command s small business needs to determine how to allocate workforce resources. However, the effort has not been finalized. The command is also working to formalize small business relationships within and among its components. As part of this, the command established a Directorate of Operations at headquarters to facilitate integration of command activities across components, which would include those related to small business research and development. However, the command has not yet assigned a permanent director for the new directorate. According to Army Futures Command officials, as well as Army documents, the command will continue to develop coordination procedures related to research and development with the Assistant Secretary of the Army for Acquisition, Logistics, and Technology. The command is also working with the Assistant Secretary s office on a challenge competition that aims to facilitate small business engagement with the Army and spur innovative technology. Army Futures Command does not have its own procurement authority, so the Army Contracting Command will provide it with contracting support. This support includes making awards to small businesses on behalf of Army Futures Command. 
Army Contracting Command officials told us they are also supporting the establishment of an Army Futures Command contracting office that would advise on contracting needs. For example, they sent temporary support staff to the headquarters of the new command and are helping with recruitment efforts for permanent personnel. Army Futures Command officials told us they had not prioritized coordinating with other Army organizations that have small business expertise because the command and its officials had other, more pressing priorities, such as establishing the command and engaging directly with small businesses as quickly as possible. Federal internal control standards state that during the establishment of an organizational structure management should consider how organizations across and outside of it interact in order to fulfill their overall responsibilities. This includes establishing reporting lines and roles and responsibilities within and outside the organization as they relate to small business engagement. With those coordination roles and responsibilities established, organizations are better able to communicate the quality information necessary to fulfill their overall small business engagement responsibilities. By taking actions to formally coordinate with and leverage other Army organizations expertise, such as coordinating outreach events, Army Futures Command could improve its opportunities to engage with small businesses and obtain access to the innovative research and development they could provide. Further, if the command does not formalize coordination roles and responsibilities, it risks potentially duplicating small business-related work and creating overlap and fragmentation. <4. Army Futures Command Has Not Yet Developed Tracking or Performance Measures for Small Business Engagement> <4.1. Army Futures Command Does Not Fully Track Small Business Engagement> As previously noted, Army Futures Command stated it is continuing the efforts of its subordinate commands to engage with small businesses and is taking additional steps to enhance engagement. However, command officials told us they do not systematically track the number and timing of outreach events, the number of participants at these events, and the extent to which these outreach efforts result in business arrangements such as contracts. As a result, Army Futures Command officials were uncertain of how often the command across all of its components was engaging with small businesses for research and development efforts. For example, Army Applications Laboratory officials were not able to identify the number and timing of challenge competitions the command has hosted or is planning to host in the future. Some organizations that have transitioned to Army Futures Command, such as Combat Capabilities Development Command, continue to track small business engagement activities for their component. However, Combat Capabilities Development Command officials told us that they were unsure if this data will be tracked at Army Futures Command headquarters. According to Army Futures Command officials, the command has not prioritized tracking small business activities because it focused instead on establishing the command and engaging with small businesses as quickly as possible to identify innovative solutions. Officials did not provide a specific plan for tracking such engagement. 
According to Federal Internal Control Standards, management should establish monitoring activities for its internal control system and evaluate the results to remediate any identified challenge on a timely basis. Further, management should use quality information from reliable sources in a timely manner to achieve the objectives of the command. By tracking its small business engagement activities, Army Futures Command would have a more comprehensive understanding of the various efforts underway across the command. This would provide opportunities to examine its overall small business engagement efforts. Tracking such information would also allow the command to make adjustments to those efforts to ensure it obtains the innovative input from small businesses the command has stated it needs to achieve its modernization goals. Tracking small business engagement across the command components could also help reduce inefficiencies including overlap, fragmentation, and duplication of its small business engagement efforts. <4.2. Army Futures Command Has Not Yet Established Performance Measures to Assess Small Business Engagement> While Army Futures Command officials told us they consider small businesses to be critical to their success and they have taken steps to engage with small businesses, the command has not yet established measures for evaluating the effectiveness of that engagement across the command nor has it developed a plan to systematically assess these efforts. Command officials told us that they are in the process of considering various measures to do so, but they have not yet determined which specific measures, if any, they will use. There is also no time frame to establish these measures. According to Army Futures Command officials, they would consider small business engagement successful if, for example, a Small Business Innovation Research award resulted in an innovation or a technology that was later transitioned to a weapon systems program or a product that would further support an Army weapon systems program. Command officials told us they have not formalized and implemented these measures because the command and its officials have prioritized focusing on establishing the new command. Components subsumed by Army Futures Command have historically used performance measures to assess their small business engagement. For example, officials from Combat Capabilities Development Command told us that they previously used several outcome-based measures, including the number of Small Business Innovation Research products incorporated into fielded Army acquisition programs, contracts awarded to small businesses, and total dollars obligated to small businesses for research and development. This previously collected information was then provided to management in various small business offices in semiannual reports. Officials told us they have continued to monitor this information since the transition to Army Futures Command. Officials from Medical Research and Development Command also reported that they have performance measures and that they use these measures to assess the success of their small business engagement. For example, they said that they develop summary reports after outreach events with small businesses. These reports describe the event, outcomes, and how participation at the event enhanced utilization of small businesses for research and development efforts. The reports are also used internally as market research for future opportunities. 
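Since the command has not yet settled on what it will track or measure, any illustration is necessarily hypothetical. The sketch below assumes a simple, invented event record (all field names and figures are made up) and shows how component-level outreach records could be rolled up into the kinds of measures discussed in this section, such as events held, participants reached, and awards and obligations traced back to those events.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class OutreachEvent:
    component: str                 # e.g., "Army Applications Laboratory"
    event_type: str                # e.g., "challenge competition", "industry day"
    participants: int
    resulting_awards: int = 0      # contracts or agreements traced back to the event
    dollars_obligated: float = 0.0

def summarize(events: List[OutreachEvent]) -> dict:
    """Roll up command-wide engagement measures from component-level records."""
    return {
        "events_held": len(events),
        "total_participants": sum(e.participants for e in events),
        "events_leading_to_awards": sum(1 for e in events if e.resulting_awards > 0),
        "total_awards": sum(e.resulting_awards for e in events),
        "dollars_obligated": sum(e.dollars_obligated for e in events),
    }

events = [
    OutreachEvent("Army Applications Laboratory", "challenge competition", 40, 3, 1.2e6),
    OutreachEvent("Combat Capabilities Development Command", "industry day", 120),
]
print(summarize(events))
```

Whatever form the command's actual measures take, the point of the sketch is only that consistent records kept by each component can be aggregated into command-wide figures that support the kind of monitoring described here.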
Internal control standards call for management to use quality information to make informed decisions and to define objectives in specific and measurable terms so that performance toward achieving those objectives can be assessed. Management should also determine whether performance measures for the objectives are appropriate for evaluating performance. Once performance measures are defined, management should then establish and operate monitoring activities that allow them to evaluate the effectiveness of the internal control system. Establishing performance measures and developing a plan to capture and monitor information on its small business engagement would help ensure Army Futures Command is not missing opportunities to make informed management and investment decisions for its research and development efforts. Establishing these measures and a plan to monitor how the command assesses small business engagement would also help it to evaluate the overall effectiveness of its small business engagement in providing support to the warfighter and identifying which small business efforts have been most effective. <5. Conclusions> The establishment of Army Futures Command represents a considerable change to how the Army develops new weapon systems and prepares for the future. While Army Futures Command is still finalizing how it will operate, it is already engaging with small businesses in various ways. However, the command could better manage these efforts. In particular, formalizing coordination roles and responsibilities with Army organizations that already have small business experience, such as the Army Office of Small Business Programs, would allow the command to leverage additional expertise as it pertains to small business engagement for research and development. In addition, Army Futures Command does not systematically track engagement across the command. By tracking this activity, the command could more effectively oversee and manage overall small business engagement. Finally, while Army Futures Command officials consider engaging with small businesses critical to the success of modernization, it has not yet developed performance measures to assess the effectiveness of its small business engagement nor has it developed a plan for systematically assessing its efforts. Establishing performance measures, and using them to assess small business engagement, would provide the command with information to evaluate, and potentially enhance, its engagement with small businesses to help accomplish its research and development efforts. <6. Recommendations for Executive Action> We are making three recommendations to the Secretary of the Army. The Secretary of the Army should direct the Commanding General of Army Futures Command to formalize coordination roles and responsibilities for small business engagement in support of research and development with relevant Army entities. (Recommendation 1) The Secretary of the Army should direct the Commanding General of Army Futures Command to systematically track its small business engagement in support of research and development across its subordinate organizations. (Recommendation 2) The Secretary of the Army should direct the Commanding General of Army Futures Command, in coordination with relevant Army entities, to establish command-wide performance measures and develop a plan to use these measures to systematically assess the effectiveness of small business engagement in support of research and development. (Recommendation 3) <7. 
Agency Comments> We provided a draft of this report to the Army for review and comment. In its written comments, reproduced in appendix II, the Army concurred with all three of our recommendations. The Army also provided technical comments which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees; the Acting Secretary of Defense; and the Acting Secretary of the Army. In addition, the report is available at no charge on the GAO Website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or LudwigsonJ@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology You asked us to examine how small businesses that support research and development efforts could be affected by the establishment of Army Futures Command. This report (1) describes what analyses, if any, the Army conducted to determine the effect of its modernization initiatives on small businesses; (2) describes how Army Futures Command is engaging with small businesses to support research and development efforts and assesses how it is coordinating with other relevant Army organizations; and (3) assesses how Army Futures Command plans to track and measure the performance of its engagement with small businesses to support research and development efforts. We analyzed research and development contract awards and obligations made during fiscal years 2013 through 2017 for the Army. The data are presented in the background as it is prior to the establishment of Army Futures Command in 2018. For the number of contracts, we used the number of new base contract awards for research and development. For the obligations, we analyzed both newly awarded base contracts and associated orders under indefinite-delivery contracts since funds would be obligated at the order level. The obligations in this analysis include only those made during the fiscal year the contract was awarded. To identify and analyze contracts awarded during that time period, we requested data in the Federal Procurement Data System-Next Generation database from the Army. The Army used the product and service codes for research and development to extract the relevant data for fiscal years 2013 through 2017. The data also included contracts awarded through the Small Business Innovation Research and Small Business Technology Transfer programs for that time period and business size and registered location. We excluded foreign military sales obligations. We did not include subcontractor data. We obtained the funding codes for organizations that are transitioning to Army Futures Command, which includes the former Army Research, Development, and Engineering Command and the Army Medical Research and Materiel Command, portions of which are transitioning to the new command. To determine the proportion of contracts and associated obligations that supported these organizations, we used their funding codes to identify the number of contracts and associated obligations during our selected time period. To assess the reliability of the Federal Procurement Data System-Next Generation data, we electronically tested for missing data, outliers, and inconsistent coding. 
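To make the counting and screening rules just described concrete, the following sketch is purely illustrative: it uses invented column names and values rather than actual Federal Procurement Data System-Next Generation data elements, and it omits many fields used in the analysis. It applies the same logic described above: count new base awards, sum obligations on base contracts and associated orders only for the fiscal year of award, exclude foreign military sales, and run simple electronic checks for missing data, outliers, and inconsistent coding.

```python
import pandas as pd

# Hypothetical extract of contract actions; real data element names differ.
actions = pd.DataFrame({
    "contract_id":   ["C1", "C1", "C2", "C3"],
    "award_type":    ["base", "order", "base", "base"],
    "award_fy":      [2015, 2015, 2016, 2016],   # fiscal year the contract was awarded
    "obligation_fy": [2015, 2015, 2016, 2017],   # fiscal year of the obligation
    "business_size": ["small", "small", "other", "small"],
    "foreign_military_sale": [False, False, False, False],
    "obligated":     [250_000.0, 100_000.0, 1_000_000.0, 300_000.0],
})

# Simple reliability checks of the kind described: missing values, inconsistent coding.
assert actions["obligated"].notna().all(), "missing obligation amounts"
assert actions["business_size"].isin(["small", "other"]).all(), "inconsistent size coding"
# Flag potential outliers for follow-up rather than dropping them.
outliers = actions[actions["obligated"] > 1e9]

small = actions[(actions["business_size"] == "small")
                & ~actions["foreign_military_sale"]]   # exclude foreign military sales

# Contract counts: new base awards to small businesses, by fiscal year.
base_counts = (small[small["award_type"] == "base"]
               .drop_duplicates("contract_id")
               .groupby("award_fy")["contract_id"].count())

# Obligations: base contracts and orders, limited to the fiscal year of award.
obligations = (small[small["obligation_fy"] == small["award_fy"]]
               .groupby("award_fy")["obligated"].sum())

print(base_counts)
print(obligations)
```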
Based on these steps, we determined the data were sufficiently reliable for identifying and analyzing Army contracts awarded from fiscal years 2013 through 2017 for research and development efforts and their obligations. We obtained data on grants, cooperative agreements, and other types of agreements using the Defense Assistance Awards Data System. We conducted initial analysis on the data and discussed reliability and validity of the data with agency officials. As a result, we determined that the data were not sufficiently reliable for the purpose of this engagement and we excluded them from our review. To describe analyses the Army conducted on the potential effect modernization efforts could have on small businesses, we collected and reviewed available studies and analyses the Army conducted. We reviewed the Army s science and technology portfolio analysis, studies related to the establishment and future organizational structure of Army Futures Command, and the Army s modernization strategy to determine if the Army analyzed how small businesses could be affected. To describe how Army Futures Command is engaging with small businesses to support research and development efforts, we reviewed policies, procedures, and guidance from the Department of Defense, Department of the Army, Army Futures Command, and other relevant Army organizations on small business engagement. We also reviewed relevant sections of the Federal Acquisition Regulation, as well as Defense and Army supplements to the Federal Acquisition Regulation, to understand the framework for small business participation in support of research and development efforts. We also reviewed relevant statutes, regulations, and policies regarding research and development and small business programs. We collected and analyzed documentation on how Army Futures Command engages with small businesses, including its roles and responsibilities, outreach efforts, and award documentation as well as those of its subordinate components. To assess how Army Futures Command coordinates with other Army organizations, we reviewed policy documentation, such as a memorandum of understanding on coordinating contract support and for small business engagement, in addition to operational orders outlining roles and responsibilities. We assessed the information we collected against Federal Standards for Internal Control related to organizational structure, reporting lines, roles and responsibilities, and using quality information. To assess how Army Futures Command plans to track and measure its engagement with small businesses, we reviewed policies from the Department of Defense and Army on engagement with small businesses. To understand how Army Futures Command plans to track its small business engagement, we reviewed policy documentation from the command, operational orders, briefs and memoranda. We also reviewed documentation on how organizations tracked this data prior to transitioning to Army Futures Command. In order to assess any performance measures Army Futures Command plans to use to evaluate its small business engagement, we reviewed available documentation on the establishment of the command. We also reviewed documentation from organizations transitioning to Army Futures Command to determine how these organizations previously monitored and evaluated their small business engagement. 
In addition, we assessed the information we collected against Federal Standards for Internal Control related to establishing monitoring activities, using quality information, defining objectives, and evaluating results. To more completely understand the small business engagement efforts of the new command, we interviewed officials from various Army offices, including the Office of the Under Secretary of the Army, Army Futures Command, organizations transitioning to the new command, Army Office of Small Business Programs, members of the Office of the Assistant Secretary of the Army for Acquisition, Logistics, and Technology, and Army Contracting Command. We also met with two private sector entities the Army has coordinated with for outreach to small businesses. These entities have experience in engaging small businesses both in the private sector and for government programs and discussed with us the concerns and challenges small businesses have in working with the government. These views are not generalizable but provide perspective on matters relevant to the Army s efforts to engage with small businesses. We conducted this performance audit from September 2018 to July 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of Army Appendix III: GAO Contact and Staff Acknowledgments <8. GAO Contact> Jon Ludwigson at (202) 512-4841 or LudwigsonJ@gao.gov. <9. Staff Acknowledgments> In addition to the contact named above, J. Kristopher Keener (Assistant Director), Andrea C. Evans (Analyst-in-Charge), Hilary Benedict, Emily Bond, Frederick K. Childers, Matthew T. Crosby, Lori A. Fields, Julia Kennon, Jean McSween, Monique Nasrallah, Anh Nguyen, Kevin O Neill, William Shear, and Anne Stevens made contributions to this report. | Why GAO Did This Study
The Army is modernizing its weapon systems to improve its ability to face near-peer adversaries. To consolidate and oversee these efforts, the Army established Army Futures Command. The command plans to work with small businesses to develop innovative capabilities through research and development activities.
GAO was asked how the establishment of Army Futures Command could affect small businesses that support research and development efforts. This report examines, among other objectives, how the command (1) engages with small businesses and coordinates with other Army organizations and (2) plans to track and measure the effectiveness of that engagement.
GAO reviewed the Army's internal analyses of its own modernization efforts; reviewed and analyzed policies and procedures on the command's small business engagement; and interviewed Army officials engaged in modernization efforts as well as two private companies selected because they facilitate Army's work with small businesses.
What GAO Found
Army Futures Command, established in June 2018 by combining several existing Army organizations and expected to be fully operational in July 2019, is engaging with small businesses. The command considers small business engagement critical to its success, and officials reported it intends to continue the engagement activities of the organizations that are moving into it, such as conducting outreach and awarding contracts. The Army recognizes the importance of small businesses and has awarded $2.3 billion to hundreds of small businesses from fiscal year 2013 through 2017. The command is also taking initial steps to enhance small business engagement (see figure). Army officials noted that these new efforts are intended to address concerns raised by small businesses in working with the government, such as delays between initial outreach and entering into contracts.
However, the command has not fully leveraged other Army organizations that work with small businesses, such as the Army Office of Small Business Programs. According to command officials, they prioritized setting up the command structure and engaging with small businesses quickly, instead of focusing on coordination. The command has recently been working to improve coordination, but it has not formally coordinated with other Army organizations that have small business expertise, such as by establishing agreements with them. Doing so would help Army Futures Command leverage this past experience and avoid missing opportunities to engage with these companies and access innovative research and development.
The command does not track how frequently or in what ways it engages with small businesses for research and development across all command components. Similarly, command officials stated they have considered performance measures to assess the effectiveness of their engagement efforts, but have not yet developed command-wide measures or a plan to assess effectiveness. Tracking and measuring engagement would help ensure the command obtains quality information that may help the Army evaluate, and potentially enhance, its small business engagement.
What GAO Recommends
GAO is making three recommendations, including that the Army Futures Command coordinate with relevant Army organizations on small business engagement efforts for research and development; systematically track its small business engagement; and develop command-wide performance measures and a plan to use them to assess the effectiveness of its small business engagement. The Army concurred with all three recommendations.
<1. Background> <1.1. International Safety Management (ISM) Code and Safety Management System (SMS) Requirements> The ISM Code was established to provide an international standard for the safe management and operation of ships and for pollution prevention. The code establishes safety management objectives, such as preventing human injury or loss of life, and identifies a framework of key elements required to be considered for inclusion in an SMS. According to the ISM Code, each vessel operator should develop, implement, and maintain an SMS that is to include functional requirements, such as procedures to prepare for and respond to emergency situations. An SMS is typically not a single plan and can take different forms. It is up to the vessel operator to determine how best to operationalize these requirements. The SMS plan documents generally contain proprietary information and are not retained by the Coast Guard or the ROs performing services on the Coast Guard's behalf. <2. Key Entities Involved in Vessel SMS Activities> There are three key entities involved in vessel SMS activities: vessel operators, ROs, and the U.S. Coast Guard. These entities' SMS responsibilities are described below. <2.1. Vessel Operators> Vessel operators are responsible for developing an SMS in accordance with ISM Code requirements if they operate U.S.-flagged vessels that are subject to the ISM Code, such as a vessel engaged in a foreign voyage that is carrying more than 12 passengers, or a tanker or freight vessel of at least 500 gross tons, among other vessel types. Vessel operators are required to perform an internal audit of their company's SMS each year to ensure it is being implemented effectively. Vessel operators are also responsible for obtaining the requisite evidence that the company and each of its applicable vessels are in compliance with the ISM Code. In practice, this means that the vessel operators obtain certification from ROs, which are described below. According to the Coast Guard, there were approximately 1,170 U.S.-flagged vessels that maintained SMS certifications in 2019. <2.2. Recognized Organizations> An RO refers to an international classification society authorized by the Coast Guard to conduct applicable vessel oversight and certification services on its behalf. The Coast Guard has authorized several ROs to conduct SMS audits and issue applicable certificates, but over 95 percent of these vessel oversight and compliance activities are conducted by a single RO, the American Bureau of Shipping. ROs have to meet specific requirements for authorization, such as making information about vessel class and inspections available to the Coast Guard. In order to be authorized, the RO needs to have been an international classification society for 30 years and have a history of taking appropriate corrective actions in addressing, among other things, vessel deficiencies.
ROs are to conduct the following SMS activities on the Coast Guard s behalf: review SMS documents and conduct initial company and vessel audits to verify compliance with the ISM Code and applicable national and international requirements; issue a Document of Compliance to the vessel operator and a Safety Management Certificate for the vessel, which is valid for up to 5 years; conduct annual SMS compliance audits of the vessel operator; conduct an intermediate SMS compliance audit for the vessel at least once during the 5-year period; and conduct renewal SMS compliance audits of vessel operator and vessel(s) prior to expiration of the 5-year certificate. <2.3. U.S. Coast Guard> The U.S. Coast Guard is ultimately responsible for guaranteeing the effectiveness of SMS compliance activities and audits that ROs perform on its behalf. The Coast Guard s oversight activities of ROs are conducted by the Office of Commercial Vessel Compliance. This office oversees a range of different activities to help ensure SMS compliance with the ISM Code and applicable federal regulations. Such activities include managing the commercial vessel inspection program, developing related guidance, and overseeing SMS audits and related activities performed by ROs. In addition to oversight provided by officials at Coast Guard headquarters, marine inspectors within local Coast Guard field units are also responsible for conducting vessel inspections, which routinely include assessing SMS effectiveness for applicable vessels. The Coast Guard Verifies SMS Compliance through Recurrent Vessel Inspections and Has Initiated Additional Oversight of Third Parties <3. The Coast Guard Verifies SMS Compliance through Recurrent Inspections of Applicable U.S.-Flagged Vessels> The Coast Guard verifies SMS compliance as part of its overall vessel compliance activities, such as conducting annual inspections of applicable U.S.-flagged vessels. According to the Coast Guard, recurrent vessel inspections are important opportunities for its marine inspectors to verify the effectiveness of the vessels SMS, even if SMS oversight is not the primary purpose of the vessel inspections. When conducting an annual vessel inspection, Coast Guard marine inspectors are to look for material deficiencies, such as poor condition of vessel structures, missing or defective equipment, or hazardous conditions that could indicate a potential SMS nonconformity. According to Coast Guard officials, marine inspectors routinely review the Coast Guard s internal database for a record of any past deficiencies and are to inspect the vessel s SMS documentation to determine if the Safety Management Certificate is up- to-date and the drill logs are current, among other things. The Coast Guard advises vessel operators to self-report or, in other words, proactively manage their vessels and report any deficiencies identified by the vessel s crew and report them at the beginning of any Coast Guard inspection. When conducting an annual vessel inspection, Coast Guard marine inspectors are to follow a five-step process to identify any SMS-related deficiencies, determine if there are clear grounds for an expanded vessel inspection, and specify any applicable compliance options. The process requires distinguishing between normal wear and tear to the vessel and deficiencies that could be the result of failures to implement an effective SMS. (See appendix II for further details on this five-step process.) 
A more in-depth inspection, if warranted, may include a review of maintenance schedules and records, crew training records and certifications, emergency procedures, and associated interviews with the vessel master and crew. Marine inspectors are to record any identified deficiencies on a Form 835V, which specifies the time frames and procedures required to address the identified deficiencies. See figure 1 for a blank copy of the Form 835V. The Coast Guard uses a range of options for addressing SMS-related deficiencies. Some deficiencies, such as improperly secured wiring or missing documentation, can sometimes be corrected by the vessel s crew during the course of a Coast Guard inspection. According to Coast Guard guidance, if marine inspectors identify serious deficiencies that could indicate broader SMS failures, such as an absence of required equipment or failure by the company to notify the Coast Guard of reportable marine casualties and hazards, the inspectors record an SMS-related deficiency and require an internal SMS audit. An internal SMS audit is for technical or operational deficiencies that individually or collectively do not warrant the detention of the vessel but indicate a failure or lack of effectiveness of the SMS. The internal SMS audit and any corrective actions are to be completed by the vessel operator within three months from the date of the Coast Guard vessel inspection. If during the course of a vessel inspection Coast Guard inspectors observe more serious deficiencies or failures, such as defective or missing fire-fighting or life-saving equipment, the vessel is to be detained and an external audit is to be performed by the RO prior to the vessel being released from detention. Figure 2 shows the Coast Guard s process for ensuring SMS compliance during vessel inspections. <3.1. The Coast Guard Conducts Additional SMS Oversight of Vessels Designated as Higher Risk> In addition to the annual vessel inspections it conducts, the Coast Guard also maintains a list of vessels that require additional oversight, referred to as the fleet risk index. The Coast Guard Office of Commercial Vessel Compliance evaluates vessels enrolled in the Alternate Compliance Program and the Maritime Security Program to develop the fleet risk index using modeling that considers and weighs multiple risk factors to assign each vessel a risk score. This list is used internally by Coast Guard inspectors when prioritizing vessels for additional oversight and more frequent inspections. Assessed risk factors include vessel detentions, marine violations/enforcement actions, vessel deficiencies, vessel type, and vessel age, among others. According to Coast Guard officials, the Coast Guard uses the fleet risk index to identify approximately 50 vessels each year that are subject to inspections every 6 months rather than annually. In 2018, the Coast Guard stipulated that traveling inspectors would accompany the local inspection team to conduct all inspections aboard vessels designated for additional oversight. According to Coast Guard officials, traveling inspectors have additional training and inspection expertise, including supplemental coursework in auditing and quality management systems, and they routinely conduct additional background research on these vessels prior to participating in the inspections. <3.2. 
Results of the Coast Guard s Vessel SMS Compliance Activities for 2018 and 2019> Based, in part, on recommendations in the EL FARO investigative report, in 2018 the Coast Guard took steps to improve its management of the Alternate Compliance Program, including efforts to improve data reporting. For example, the Coast Guard revised its form for documenting deficiencies during annual vessel inspections. In particular, since March 2018, the Form 835V has included a checkbox to indicate if a deficiency is related to an SMS. According to the Coast Guard, this revision will allow for enhanced annual reporting of safety-related deficiencies identified during compliance activities. The Coast Guard reported it conducts approximately 1,200 inspections each year of vessels that are either required to maintain a Safety Management Certificate, or do so voluntarily. According to the Coast Guard, in calendar year 2018, the Coast Guard issued between 70 and 130 SMS-related deficiencies (reporting available for April through December only), and for calendar year 2019, the Coast Guard issued between 183 and 212 SMS-related deficiencies. Given the limited data and time frames available, we were not able to identify any trends regarding SMS deficiencies. However, we noted that the highest number of safety-related deficiencies cited in 2019 were related to maintenance of vessels and equipment 43 of the 212 annual deficiencies. The second-highest number of deficiencies addressed issues related to emergency preparedness 37 of the 212 annual deficiencies. Some specific examples in this category relate to the posting of applicable emergency instructions and providing updated records of emergency drills. According to Coast Guard headquarters officials, the Coast Guard plans to review and assess the SMS deficiency data to provide feedback to inspectors, vessel operators, and ROs. The officials also stated that SMS deficiencies will be included in future risk-based vessel inspection programs, including the fleet risk index discussed earlier. <4. The Coast Guard Has Initiated Efforts to Enhance Its Oversight of ROs Since 2018> Following the investigative reports of the EL FARO sinking, the Coast Guard initiated several efforts in 2018 to enhance oversight of the ROs that perform SMS-related services and certifications on its behalf. These efforts were largely driven by actions identified by the Commandant of the Coast Guard in December 2017 in response to EL FARO investigative report recommendations. In particular, the Coast Guard established a new group to monitor ROs, developed new SMS-related guidance and associated work instructions, increased direct observations of ROs, developed key performance indicators, and developed guidance to request internal investigations for certain RO deficiencies. It is too early for us to assess the overall effectiveness of these Coast Guard efforts; however, we believe they are positive steps toward enhancing oversight of ROs. Further information on each of these efforts is provided in the sections that follow. Established a new group within the Office of Commercial Vessel Compliance. The Coast Guard established a new group within its Office of Commercial Vessel Compliance in 2018 to help monitor the global performance of the U.S.-flagged fleet, provide enhanced oversight of ROs performing vessel safety management functions, and implement any necessary changes to related roles and responsibilities. Developed SMS-related guidance and work instructions. 
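As a rough illustration of how deficiency counts like those above translate into category shares, the short sketch below tallies a set of records by category. The record format and the "other" grouping are assumptions; only the 43 maintenance deficiencies, 37 emergency-preparedness deficiencies, and 212 total deficiencies come from the Coast Guard figures cited above.

```python
from collections import Counter

# Hypothetical 2019 SMS deficiency records tagged by category, mirroring the cited totals.
deficiencies_2019 = (["maintenance of vessels and equipment"] * 43
                     + ["emergency preparedness"] * 37
                     + ["other"] * 132)

counts = Counter(deficiencies_2019)
total = sum(counts.values())
for category, count in counts.most_common():
    print(f"{category}: {count} ({count / total:.0%} of {total})")
```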
The Office of Commercial Vessel Compliance developed several new work instructions to help inform mariners, the public, the Coast Guard, and other federal and state regulators in applying SMS-related statutory and regulatory requirements. The following are examples of applicable guidance issued since 2018: CVC-WI-003(1): USCG Oversight of Safety Management Systems on U.S. Flag Vessels (March 23, 2018). This document contains guidance for assessing the effectiveness of the SMS on U.S.-flagged vessels, including directions for evaluating potential deficiencies and compliance options during the course of a vessel inspection. CVC-WI-004(1): U.S. Flag Interpretations on the ISM Code (April 16, 2018). This document provides guidance regarding the Coast Guard s interpretations on the application and implementation of the ISM Code. Increased the number of Coast Guard direct observations of ROs performing vessel and company audits. The Coast Guard reported it has increased the number of direct observations of ROs conducting vessel and company SMS audits since 2018. According to the Coast Guard, audit observations aboard vessels are routinely performed by traveling inspectors. Additionally, staff from the new Commercial Vessel Compliance group are observing an increased number of company audits. This group has eight staff available for direct observations of ROs, all of whom have received training in international auditing and safety management standards. The Coast Guard reported that the number of audit observations attended by the Commercial Vessel Compliance staff increased from three in 2018 to 21 in 2019. According to the Coast Guard, these additional observations serve as a mechanism to provide increased oversight of the ROs and the companies or vessels being audited, as well as to verify that the services provided by ROs are effectively executed in accordance with established requirements. Developed key performance indicators for assessing ROs. In mid- 2018, Coast Guard officials identified 10 key performance indicators to be used to evaluate the performance of ROs. Due, in part, to challenges with collecting and synthesizing the requested data from the different ROs, the Coast Guard reported on limited performance information in the 2018 Domestic Annual Report. According to Coast Guard officials, the Coast Guard is working with each of the ROs and the International Association of Classification Societies to standardize the key performance indicator data to better integrate the data into the Coast Guard s data system. The Coast Guard said that it plans to include a subset of the key performance indicators in its 2019 annual report, which is scheduled for issuance in April 2020. See appendix III for more information on these key performance indicators. Developed guidance for ROs on quality cases. In May 2018, the Coast Guard also issued guidance that describes a new oversight mechanism, referred to as a quality case. If a Coast Guard marine inspector observes evidence during the course of a vessel inspection that an RO is not adequately performing its required SMS-related functions, the Coast Guard can request that the RO conduct a root-cause analysis to help identify the underlying issue(s). This analysis would generally involve the RO evaluating its quality management system and reporting findings and corrective actions to the Coast Guard. From May 2018 to November 2019, the Coast Guard reported it initiated 13 quality cases; one of which was SMS-related. 
Vessel SMS Plans Address Some of the Potential Shipboard Emergencies and Response Procedures Proposed by Coast Guard Guidance Each of the 12 SMS plans (or plan excerpts) for U.S.-flagged vessels that we reviewed identify potential shipboard emergencies and applicable response procedures, but they do not address the full range of emergency scenarios included in Coast Guard guidance. While the 12 SMS plans do not address all potential emergencies included in Coast Guard guidance, the plans do address the broad, functional requirement to identify potential shipboard emergencies and applicable response procedures to address them, as required by the ISM Code and applicable federal regulations. In reviewing the 12 SMS plans, we also found variation among the specific scope and formats of the emergency preparedness sections. Four of the 12 SMS plans are large documents spanning hundreds of pages that incorporate various component manuals. For example, one vessel operator provided a comprehensive SMS plan document of nearly 600 pages that includes six different procedural manuals covering the following issues: Management, Vessel, Safety, Environmental, Cargo Operations, and Emergency Response. For the other eight SMS plans we reviewed, the vessel operators provided us with either a stand-alone manual specifically addressing shipboard emergency preparedness and response procedures, or individual chapters and excerpts that included this information. According to Coast Guard and RO officials, the ISM Code does not require a specific format or level of detail for SMS plans and, rather, allows vessel operators flexibility to choose how they will implement and document SMS requirements based on their specific operations and business processes. In addition to reviewing the SMS plans for content and format, we also reviewed each of the 12 SMS plans (or excerpts) to determine the extent to which they address 21 different potential shipboard emergencies identified in 2018 Coast Guard guidance related to the application and implementation of the ISM Code (see table 1). The number of unique, potential shipboard emergency scenarios addressed in the SMS plan documents we reviewed generally range from five to 16. Ship routing procedures related to heavy weather, which is an emergency scenario highlighted in the EL FARO investigative report, is clearly identified in five of the 12 SMS plans reviewed. However, one additional SMS plan makes reference to a separate heavy weather plan that was not included in the primary SMS plan documents that we reviewed. The most frequently addressed shipboard emergency scenarios that are addressed in at least 10 of the 12 SMS plans we reviewed are Fire, Collision, Grounding, Abandon Ship, and Man Overboard. In addition, 10 of the 12 SMS plans we reviewed also identify additional potential emergency shipboard scenarios not included in the 2018 Coast Guard guidance, such as breakaway from dock, emergency towing, or confined space rescue. While none of the SMS plans that we reviewed specifically address all 21 potential shipboard emergencies identified in the 2018 Coast Guard guidance, the guidance states that it is not a substitute for applicable legal requirements, nor is it itself a rule. According to officials from the two ROs with whom we discussed this program, their auditors are provided the 2018 Coast Guard guidance to use as part of their SMS audit criteria. 
The officials noted, however, that their auditors may be limited to issuing an observation to the vessel operator if any potential shipboard emergency listed in Coast Guard guidance is not addressed in SMS plan documents. Under the ISM Code, an observation is not the same as an SMS nonconformity, which would require specific corrective action. Officials from one RO noted that any nonconformities identified would need to be based on specified mandatory requirements, such as ISM Code provisions, U.S. statutes, or applicable U.S. or international regulations, and not solely on the 2018 Coast Guard guidance. In addition to the fact that the emergencies listed in the guidance are not required to be included in SMS plans, there are other factors to explain why the SMS plans we reviewed may not address all 21 potential shipboard emergency scenarios identified in the 2018 Coast Guard guidance. Such factors include the following: Size and nature of vessel operations. According to RO and Coast Guard officials, not all of the 21 potential shipboard emergency scenarios contained in the 2018 Coast Guard guidance are applicable for each type of vessel or for all geographical operating areas. For example, specific emergency procedures related to piracy or terrorism, cargo-related accidents, helicopter rescue operations, or loss of key personnel may not be necessary for towing vessels, given the nature of their operations, their limited size, and the reduced number of crew required to operate that type of vessel. Similarly, icing conditions would not be expected to be included in the SMS plans for those vessels that operate solely in temperate waters. Additional time may be needed to incorporate expanded potential shipboard emergency scenarios into existing SMS plans. Although the Coast Guard guidance identifying the 21 potential shipboard emergency scenarios was issued in April 2018, vessel operators may still be in the process of revising their SMS plans to include additional potential shipboard emergency scenarios and applicable emergency response procedures. For example, we observed that six of the 21 scenarios included in the 2018 Coast Guard guidance are not listed in related guidance provided by the International Association of Classification Societies. These six scenarios are among those observed with the lowest frequency during our review of SMS plans. It is feasible that information related to these scenarios such as loss of key personnel, or loss of communications with a vessel may exist elsewhere in vessel operators SMS documents or in other vessel plans, but not incorporated as potential shipboard emergency response scenarios as proposed in the 2018 Coast Guard guidance. Along these lines, officials from the ROs with whom we spoke also noted that, in accordance with the ISM Code, they routinely use a sampling approach when conducting annual company SMS audits, and would generally not review the entire scope of an SMS plan each year. As a result of the sampling process, the annual audits occurring since April 2018 may not have addressed any potential observations related to the expanded scope of potential shipboard emergencies included in the Coast Guard guidance for SMS plans. As noted previously, the ISM Code and corresponding U.S. regulations and Coast Guard guidance allow vessel operators flexibility in how they address SMS functional requirements, including the documentation of potential shipboard emergencies and applicable response procedures in their SMS plans. 
Following the EL FARO incident, in 2018 the Coast Guard developed guidance to help inform vessel operators and ROs of potential shipboard emergency scenarios to consider. However, similar to the SMS-compliance and oversight practices used by comparable agencies in other developed countries, we found that the Coast Guard does not have a direct role in reviewing or approving vessel SMS plan documents, including response procedures for potential shipboard emergency scenarios. Rather, as described earlier, the Coast Guard relies on periodic vessel inspections and oversight of ROs that perform more rigorous ISM audits on the Coast Guard s behalf. Although the Coast Guard has taken positive steps since 2018 to develop additional guidance and increase the number of observations of RO audits and inspections, the extent to which these efforts will result in any specific changes to the content of SMS plans by vessel operators in the future is yet to be determined. Agency Comments We requested comments on a draft of this report from DHS and the Coast Guard. Officials from the Coast Guard provided technical comments, which we incorporated into the report as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, the U.S. Coast Guard, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (206) 287-4804 or AndersonN@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Appendix I: Key Roles and Responsibilities of Recognized Organizations Related to Safety Management Systems Federal regulations allow the Commandant of the Coast Guard to delegate certain functions to authorized classification societies. In order for a classification society to be recognized by the Coast Guard and receive statutory authority to carry out delegated functions as a Recognized Organization (RO), the classification society must meet certain requirements, including having functioned as an international classification society for at least 30 years and having established a history of appropriate corrective actions in addressing vessel casualties and deficiencies, among other things. With respect to safety management systems (SMS), ROs once authorized by the Coast Guard are able to perform SMS-related audits and issue SMS-related certifications and documentation. The following information summarizes the key roles and responsibilities of ROs related to International Safety Management (ISM) Code certification services and the key activities that ROs perform to fulfill their delegated SMS compliance functions on behalf of the Coast Guard. Interim verification. When a new company (i.e., vessel owner/operator) is established, or an existing company wants to add a new vessel type to its current Document of Compliance, the RO is to first verify that the company has an SMS that complies with ISM Code requirements. If the RO determines that the company is in compliance, it issues the company an interim Document of Compliance (which applies to the entire company) that is valid for up to 12 months. Initial verification. 
After receiving an interim Document of Compliance, a company applies for ISM Code certification, and an RO conducts an SMS audit of the company s shoreside management system that is to include a visit to the company s physical offices. Following the satisfactory completion of the audit and verification that the company s SMS has been in operation for at least 3 months, the RO would issue the company a Document of Compliance that is valid for 5 years. After the RO issues the Document of Compliance, the RO is to verify that the company s SMS has been functioning effectively for at least 3 months for each of the vessels for which the company is seeking a Safety Management Certificate. A Safety Management Certificate is vessel- specific and may only be issued to a vessel if the company holds a valid Document of Compliance. To perform the initial verification, the RO is to assess each vessel to determine if the company s SMS is being employed effectively on that vessel. Annual or intermediate verification. The RO is responsible for verifying a company s Document of Compliance every year and for verifying the company s Safety Management Certificates at least once during the 5- year period covered by the issued certificates. ROs generally verify Safety Management Certificates between 2 and 3 years after their issuance. Annual and intermediate verifications are opportunities for the RO to verify whether the company has taken appropriate actions to sufficiently address any deficiencies the RO may have identified during previous audits. Renewal verification. Up to 3 months before a company s Document of Compliance or a vessel s Safety Management Certificate expires, the RO is to conduct a renewal verification. The renewal verification is to address all elements of the SMS, including activities required under the ISM code. Additional Verification. The Coast Guard may also require additional verification to ensure that an SMS is functioning effectively for example, to make sure that the company has sufficiently implemented appropriate corrective actions to address any identified deficiencies. Appendix II: Coast Guard s Process for Evaluating Safety Management System Deficiencies and Corrective Action Options This appendix provides summary information on the Coast Guard s process for evaluating safety management system (SMS) deficiencies and corrective action options if a Coast Guard marine inspector identifies any SMS-related deficiencies during a vessel inspection. Appendix III: Key Performance Indicators for Assessing Recognized Organizations In mid-2018, Coast Guard officials identified 10 key performance indicators to be used to evaluate the performance of Recognized Organizations (RO). Information on these 10 performance indicators is summarized below. 1: Number of RO-issued statutory findings divided by the number of statutory surveys conducted (e.g., 100 findings / 10 surveys = 10 Key Performance Indicators). 2: Number of RO Safety Management Certificate audit findings divided by the number of Safety Management Certificate audits conducted 3: Number of RO Document of Compliance audit findings divided by the number of Document of Compliance audits conducted (includes all types of Document of Compliance audits). 4: Number of RO associations to Port State Control Detentions under the Paris and Tokyo Memoranda of Understanding, and Coast Guard Port State Control programs. 
5: Number of International Association of Classification Societies Procedural Requirement-17s (IACS PR-17) issued divided by the total number of RO applicable surveys conducted. 6: Total number of U.S. commercial vessel casualties divided by the total number of commercial vessels in the U.S. fleet of responsibility. 7: Total number of RO nonconformities issued by the Coast Guard divided by the number of statutory surveys and International Safety Management (ISM) audits conducted. 8: Total number of Coast Guard-issued deficiencies related to statutory certificates divided by the total number of Coast Guard inspections conducted. 9: Total number of RO-associated Flag State Detentions divided by the total number of statutory surveys and audits performed. 10: Number of Coast Guard-issued ISM-related deficiencies divided by the total number of Coast Guard inspections completed. Appendix IV: GAO Contact and Staff Acknowledgments <5. GAO Contact Acknowledgments> Nathan Anderson, (206) 287-4804 or AndersonN@gao.gov In addition to the contact named above, Christopher Conrad (Assistant Director), Ryan Lambert (Analyst-in-Charge), Ben Nelson, Elizabeth Dretsch, Tracey King, Kevin Reeves, and Benjamin Crossley made key contributions to this report. | Why GAO Did This Study
In October 2015, the U.S. cargo vessel EL FARO sank after encountering heavy seas and winds from Hurricane Joaquin, killing all 33 crew members. Subsequent investigations cited deficiencies in the vessel's SMS plans as a factor that may have contributed to the vessel's sinking. Some in Congress have raised questions about the effectiveness of vessel SMS plans and the Coast Guard's oversight of third parties responsible for ensuring vessels comply with international standards and federal regulations.
The Hamm Alert Maritime Safety Act of 2018 included a provision for GAO to review Coast Guard oversight and enforcement of vessel SMS plans. Accordingly, this report addresses (1) how the Coast Guard (a) verifies domestic commercial vessels' SMS plans comply with federal regulations and (b) conducts oversight of ROs, and (2) the extent to which domestic vessels' SMS plans identify potential shipboard emergencies and include applicable response procedures.
To address these objectives, GAO reviewed Coast Guard regulations and guidance, accompanied marine inspectors on vessel inspections and audits, and analyzed available data on identified vessel deficiencies. GAO also reviewed the format and content of a nongeneralizable sample of 12 SMS plans representing various types of vessels and interviewed relevant Coast Guard and RO officials.
What GAO Found
The Coast Guard verifies that domestic commercial vessels comply with safety management system (SMS) requirements through activities that include conducting annual inspections of applicable U.S.-flagged vessels. In practice, the Coast Guard delegates primary vessel SMS compliance activities to third party entities, called Recognized Organizations (ROs). Among their responsibilities, ROs coordinate with vessel operators to review SMS plans, issue applicable vessel certificates, and conduct SMS compliance audits at the company level and aboard each vessel. Because the Coast Guard relies on ROs to perform SMS certification services on its behalf, it has initiated a series of efforts to enhance its oversight of ROs since 2018. The efforts include:
establishing a new group within the Coast Guard to monitor ROs,
developing new SMS-related guidance and work instructions,
increasing direct observations of ROs performing SMS audits,
developing key performance indicators for assessing ROs, and
requesting internal investigations for certain RO deficiencies.
It is too soon to assess the effectiveness of these efforts; however, GAO believes these are positive steps toward enhancing the Coast Guard's oversight of ROs.
Each of the 12 domestic vessel SMS plans GAO reviewed includes potential shipboard emergencies and applicable response procedures to address them. None of the plans address all 21 potential shipboard emergencies included in 2018 Coast Guard guidance. However, these 21 potential emergencies are not required to be included in SMS plans; rather, they are suggested as part of the 2018 guidance. Further, GAO found that the SMS plans may not address all potential shipboard emergencies because not all emergency scenarios are applicable for each type of vessel or geographical operating area. Also, vessel operators may still be in the process of revising their SMS plans to include additional emergency scenarios and applicable response procedures.
<1. Background> <1.1. Legacy Retirement System> The military retirement system is a government-funded benefit system that has historically been considered a significant incentive in recruiting and retaining a voluntary, career military force. Until recently, almost all active-duty servicemembers were enrolled in the High-3 (legacy) retirement system. In this system, servicemembers who served at least 20 years earned a DB annuity. Those who were eligible earned 2.5 percentage points per year of service multiplied by the average of their highest 36 months of basic pay, with payments beginning upon retirement from the military and adjusted annually for inflation. Servicemembers also had the option to contribute a portion of their basic pay to a personal TSP account, but DOD provided no contributions. A previous GAO report found that active-duty servicemembers' rate of reaching 20 years of service varied substantially among the military service branches (see fig. 1). For example, for active-duty servicemembers entering military service in 1992, the estimated probability of reaching 20 years of service was almost 15 percentage points higher and more than three times higher for the Air Force than the Marine Corps. Federal law established the Military Compensation and Retirement Modernization Commission (MCRMC) in the NDAA for Fiscal Year 2013 to study the military's compensation system in detail and make recommendations to modernize servicemembers' pay and benefits. The MCRMC's final report, released in January 2015, recommended that Congress revise the military retirement system so DOD could help more servicemembers save for retirement earlier in their careers, leverage the retention power of the legacy retirement system, give the services greater flexibility to retain quality people in demanding career fields, and promote servicemembers' financial literacy, among other things. <1.2. Blended Retirement System (BRS)> The NDAA for Fiscal Year 2016 established BRS to replace the legacy retirement system. As with the legacy retirement system, servicemembers in BRS must serve 20 years to receive a DB annuity. Under BRS, eligible retirees receive a DB monthly benefit equal to 2 percentage points per year of service multiplied by the average of a servicemember's highest 36 months of basic pay, a multiplier lower than the 2.5 percentage points used under the legacy retirement system. BRS also provides servicemembers with DC benefits through an employer contribution, which did not exist in the legacy retirement system. For servicemembers who began their service on or after January 1, 2018, DOD automatically contributes 1 percent of a servicemember's basic pay into the individual's TSP account after 60 days of service and, after 2 years of service, matches a servicemember's contributions up to 4 percent of their basic pay, for a maximum military contribution of 5 percent of a servicemember's basic pay. These servicemembers are automatically enrolled in BRS at a 3 percent default contribution rate. DOD estimates that with automatic enrollment in TSP and the automatic government contribution, 85 percent of new servicemembers covered by BRS will receive at least some retirement benefits when they leave military service.
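The benefit formulas described above lend themselves to a simple illustration. The sketch below compares the monthly DB annuity under the legacy and BRS multipliers and estimates DOD's monthly TSP contribution under BRS. The pay figures are hypothetical, the match is simplified to "up to 4 percent" as stated above (the detailed match schedule is not spelled out here), and the sketch ignores TSP investment growth, inflation adjustments, and continuation pay.

```python
def high3_annuity(years_of_service, high3_avg_monthly_pay, multiplier):
    """Monthly DB annuity: multiplier per year of service times the high-3 average monthly pay."""
    return years_of_service * multiplier * high3_avg_monthly_pay

def brs_tsp_government_contribution(monthly_basic_pay, member_rate):
    """Monthly DOD TSP contribution under BRS: 1 percent automatic plus a match of up to
    4 percent of basic pay (simplified; the exact match schedule is not given in the report)."""
    automatic = 0.01 * monthly_basic_pay
    match = min(member_rate, 0.04) * monthly_basic_pay
    return automatic + match

# Example: 20 years of service, $5,000 high-3 average monthly basic pay,
# and a member contributing the 3 percent default rate.
legacy = high3_annuity(20, 5000, 0.025)                 # 20 * 2.5% * 5,000 = $2,500/month
brs = high3_annuity(20, 5000, 0.020)                    # 20 * 2.0% * 5,000 = $2,000/month
dod_tsp = brs_tsp_government_contribution(5000, 0.03)   # $50 automatic + $150 match = $200/month
```

Under these assumptions, a retiree with 20 years of service and a $5,000 high-3 average would receive $2,500 per month under the legacy system versus $2,000 per month plus accumulated TSP contributions under BRS.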
Servicemembers under BRS are eligible for a one-time continuation payment as a retention incentive at the servicemember s mid-career point, between 8 and 12 years of service. Servicemembers who accept the continuation benefit incur an additional service obligation. BRS also offers servicemembers who serve 20 years or more the option to convert the present-value equivalent of either 25 or 50 percent of their DB annuity payments for the period from their date of retirement until the date they reach their Social Security full retirement age (FRA) to a lump-sum payment upon retirement from the military. Taking this lump-sum payment would reduce the retiree s annuity payments only until he or she reaches FRA, after which the annuity payments would revert to the full benefit level (see fig. 2). Active-duty servicemembers with fewer than 12 years of service as of December 31, 2017 were eligible to enroll in BRS until December 31, 2018. The decision to opt in to BRS or remain in the legacy retirement system was irrevocable. <1.3. Financial Literacy Education Training> Compared to the legacy retirement system, which provided only a DB plan, the BRS s enhanced DC benefit and reduced DB annuity shifts more of the responsibility for managing servicemembers retirement security from DOD to servicemembers. To help ensure that servicemembers have the financial literacy to make sound financial decisions, the NDAA for Fiscal Year 2016 added a requirement for DOD to provide servicemembers ongoing financial literacy training at various career and life stages, including at initial entry, promotions, vesting in the TSP, eligibility for continuation pay, marriage, divorce, and the birth of a first child. GAO s prior work on financial literacy training compiled testimony from experts from the private sector, federal government agencies, nongovernmental organizations, and academic institutions to: define financial literacy as the ability to use knowledge and skills to manage financial resources effectively for a lifetime of well-being; identify the workplace as a particularly effective venue for providing financial education and helping individuals improve their financial decision making; and summarize the effectiveness of various interventions and how to address the needs of workplace populations traditionally underserved by financial education. <2. DOD Used a Multi- Faceted Approach to Implement BRS Training and Outreach Campaigns and Is Developing Continuing Education on Saving for Retirement> <2.1. DOD Administered BRS Education and Outreach Campaigns for Eligible Servicemembers> DOD developed three courses to help servicemembers make informed decisions about whether to opt in to BRS or remain in the legacy retirement system. The BRS Opt-In Course was available as a 2-hour online or in-person course that servicemembers had to attest they had completed before opting into the new retirement system. DOD reported that 91 percent of an estimated 1.7 million eligible servicemembers attested that they had completed the training during the BRS opt-in period. The course included information on (1) the importance of saving for retirement, (2) the differences between the legacy retirement system and BRS, (3) factors for servicemembers to consider in choosing between the two retirement systems, and (4) tools and resources for servicemembers to consult when making their opt-in decision. 
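The lump-sum option described in the Background above converts the present-value equivalent of 25 or 50 percent of the annuity payments between military retirement and FRA into an upfront payment, and it is one of the quantities a servicemember weighing BRS would need to understand. A minimal sketch of that present-value arithmetic follows; the discount rate is an illustrative assumption, as the report does not state the rate DOD uses, and the calculation ignores cost-of-living adjustments.

```python
def lump_sum_estimate(monthly_annuity, months_until_fra, fraction, annual_discount_rate=0.06):
    """Rough present value of `fraction` (0.25 or 0.50) of the annuity payments
    between military retirement and Social Security full retirement age.
    The discount rate is an illustrative assumption, not DOD's published rate."""
    r = annual_discount_rate / 12.0  # approximate monthly discount rate
    pv = sum(monthly_annuity / (1 + r) ** m for m in range(1, months_until_fra + 1))
    return fraction * pv

# Example: $2,000/month BRS annuity, 300 months (25 years) until FRA, 50 percent option.
estimate = lump_sum_estimate(2000, 300, 0.50)
```

After FRA, the annuity would revert to its full level regardless of which lump-sum fraction was taken, as described above.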
DOD developed two additional BRS trainings for key military personnel in an effort to expand the network of in-person resources available to servicemembers eligible to opt into BRS. One course provided installation-level financial management professionals Personal Financial Managers (PFMs) and Personal Financial Counselors (PFCs) with more detailed information to reinforce the BRS Opt-In Course curriculum for servicemembers and answer their specific questions about BRS. The other course provided optional training to military supervisors regardless of their eligibility to opt into BRS. DOD officials said it was important to educate military supervisors on BRS since many junior servicemembers discuss personal financial information with their direct supervisors. DOD officials said that the agency released both of these trainings in advance of the BRS Opt-In Course so that PFMs and supervisors would have time to understand the new system and prepare for questions from servicemembers. DOD also developed the BRS New Accession Course for servicemembers who entered the military on or after January 1, 2018 and who are automatically enrolled in BRS. (See fig. 3.) Servicemembers take this course when entering service as part of their mandatory basic training ( boot camp ) or at the first school they attend after basic training. This course explains BRS s key components, identifies the tools and resources available to help servicemembers save for retirement, and encourages servicemembers to actively manage their TSP accounts. DOD officials said that the New Accession Course is very similar in content to the BRS Opt-In Course but without comparisons to the legacy retirement system. The course facilitator leads servicemembers through a series of short videos on BRS, asks questions at the end of each of the course sections, and is available to answer servicemembers questions throughout the course. DOD publicized BRS by creating a central website that links to outreach material in a variety of media formats, including videos available on YouTube, social media content, an interactive online comparison calculator, webinars, and external websites such as Military OneSource and https://www.tsp.gov. For example, DOD s central BRS website links to its BRS Fact or Fiction video series, which addressed various BRS misconceptions through 20 brief videos. In the video series, DOD introduced the #BlendedRetirement hashtag, then distributed supplementary BRS infographics with this hashtag to link back to additional resources on social media sites. DOD officials said they also are developing a mobile app to provide servicemembers easy access to financial readiness information through tools like calculators and games. Additionally, DOD s interactive online BRS calculator allowed servicemembers to enter personal financial information, such as their military grade, estimated date of military separation or retirement, and TSP contribution percentage, so those who were eligible to opt into BRS could compare how their retirement savings outcomes might differ under BRS and under the legacy retirement system. DOD s Office of Financial Readiness also trained financial counselors across the service branches to supplement the information in its BRS trainings as well as to provide servicemembers in-person financial literacy education. DOD officials said that the agency employs at least one PFM at most military installations or uses PFCs, who are government contractors. 
DOD officials said that PFMs and PFCs travel as needed to provide support at multiple installations. One PFM we interviewed estimated that PFMs provide as many as 10 group presentations per week on retirement issues that they tailor to fit their audiences needs. Another said one-on-one counseling sessions allowed servicemembers to share their personal financial situations, receive information germane to their unique circumstances, and explore available tools and resources. DOD officials said that, as outlined in federal statute, the role of PFMs and PFCs is to educate servicemembers about financial options available to them and not to provide financial advice. In addition to the centralized trainings and resources DOD created, the service branches used their internal communication systems for BRS outreach campaigns and created additional training tailored to the needs of their servicemembers (see fig. 4). For example, according to Navy officials, during the final 6 months of the BRS opt-in period, the Navy posted approximately 80 Facebook and Twitter posts to its accounts, with many of these reminding servicemembers of their opt-in choice. The posts linked to additional resources and advertised outreach like the Navy s Facebook Live event, which utilized social media to provide servicemembers online access to financial experts who could answer their retirement-related questions. Military supervisors also said that most of the service branches sent targeted communications to supervisors to remind eligible servicemembers at regular meetings to complete the BRS Opt-In Course. The service branches also created supplemental BRS trainings tailored to meet their servicemembers needs. For example, the Marine Corps developed a classroom-based BRS training that included specific instructions on how to use the Marines data systems to make BRS decisions, as well as statistics on the average percentage of Marines that complete 20 years of service. <2.2. DOD Is Developing Continuing Financial Literacy Education for Servicemembers on BRS and on Saving for Retirement> With all incoming servicemembers automatically enrolled in BRS as of January 1, 2018, DOD officials said the agency has shifted its continuing financial literacy training from the opt-in decision to saving for retirement. As with the BRS training, the military provides continuing financial literacy education through both DOD and the service branches. DOD s Office of Financial Readiness provides policy, education, advocacy, and program oversight to promote servicemembers financial readiness. While DOD developed the BRS trainings and conducted outreach, DOD officials said that the service branches have the primary responsibility for developing and providing servicemembers continuing financial literacy education, including on saving for retirement, based on their own resources and their servicemembers needs. The service branches use a variety of formats (see fig. 5). DOD is also developing a plan to provide continuing financial literacy education to servicemembers at various career and life stages. DOD officials said the agency plans to improve the consistency of the continuing financial literacy education provided by the service branches and consolidate it so it is delivered at the career and life stages specified by the NDAA for Fiscal Year 2016. 
DOD's Office of Financial Readiness released guidance in August 2019 to provide the service branches a common set of learning objectives for financial literacy education aligned with these specific career and life stages. DOD officials told us that the service branches are responsible for delivering the continuing financial literacy education to servicemembers at these stages according to their schedules and resources. <3. DOD Training Reflected Many Financial Literacy Effective Practices, but Servicemembers' Challenges Can Inform Future Training Efforts> <3.1. BRS Training Met Many Financial Literacy Effective Practices, but DOD Did Not Use Course Assessments to Improve Content> We found that DOD's Blended Retirement System (BRS) trainings met many established financial literacy training effective practices (see sidebar and table 1). However, lack of assessments of some courses affected DOD's ability to measure how well the courses helped participants and to make any needed changes. Financial education experts have found that financial literacy trainings that meet effective practices can improve employees' overall financial wellness. These experts identified the workplace as a particularly effective venue for providing financial education and helping individuals improve their financial decision making because employers have the potential to reach large numbers of adults in a cost-effective manner at a place where they make important financial decisions.
Effective Financial Literacy Training Practices
Information is unbiased: Employers' financial literacy education programs should provide financial information that avoids even the appearance of conflicts of interest.
Links to one-on-one financial help: Programs should provide access to one-on-one financial coaches who can help employees understand and take action on their priorities.
Leverages trusted messengers: Programs should use trusted coworkers and other peers to provide or facilitate assistance on financial matters.
Assesses employees' financial literacy to provide assistance and help set priorities: Programs should periodically assess employees' financial situations and goals to pinpoint how best to provide assistance and help employees set priorities.
Enables employees to take action directly from the course: Programs should provide employees the means, for example, through direct links or forms provided in the course, to convert knowledge to financial action.
According to DOD officials, servicemembers will make more financial decisions that may impact their ability to successfully save for retirement under BRS than under the legacy retirement system, which makes providing effective financial literacy training to servicemembers particularly important. We found that all of DOD's BRS trainings met the applicable financial literacy effective practices of presenting unbiased information, directing servicemembers to options for one-on-one financial help, and employing trusted messengers such as military peers and Personal Financial Managers (PFMs) to deliver the course information. For example, each of the applicable BRS trainings encouraged servicemembers to work with PFMs to understand how their personal financial circumstances impact saving for retirement.
While the BRS trainings met many of the financial literacy effective practices we selected, two of the trainings fell short in assessing servicemembers financial literacy, which could allow DOD to better pinpoint how to provide assistance and help servicemembers set priorities. Servicemembers were required to pass a test to complete the BRS Opt-In Course; DOD data show that only 32 percent of servicemembers passed on their first attempt. However, DOD did not revise course material to provide additional information in topic areas where post-test results indicated servicemembers may have needed further training. DOD officials said that the agency consciously avoided making significant changes to its BRS trainings to ensure consistency and course stability throughout the opt-in enrollment period. DOD officials also told us that they were not surprised by the initial low pass rate because they designed the test to be difficult so that servicemembers could demonstrate mastery of the material. DOD s New Accession Course does not assess individual servicemembers understanding of the material, which is information DOD would need to improve its training to provide assistance and help servicemembers set priorities. The course includes a series of knowledge checks, but because the questions are administered to the group as a whole, DOD cannot assess individual servicemembers understanding and use this information to revise the course material or to provide servicemembers with additional assistance. DOD officials told us that the agency views the course as successful because it gets students to engage in discussion regarding the basics of BRS and financial readiness. DOD does not have a plan to assess individual servicemembers understanding of course material going forward. While servicemember engagement is important, it is not an assessment of their understanding of course material. Servicemembers who do not understand BRS concepts may not save enough for a secure retirement under BRS. Additionally, the BRS Opt-In Course did not meet the financial literacy training effective practice of enabling servicemembers to act on course information directly from the training. For example, the BRS Opt-In Course suggested servicemembers contact PFMs and PFCs if they had further questions about BRS, but the course did not provide direct links for servicemembers to do so. Further, the course did not include forms for servicemembers to enroll in and make contributions to TSP accounts. This standard is considered an effective practice for financial literacy training because research has found that employees who can directly convert their knowledge to immediate action have improved overall financial wellness. DOD addressed this issue in its most recently released training, the BRS New Accession Course, which enables servicemembers to make immediate decisions, such as assigning their initial TSP contribution rates, by providing servicemembers the relevant form within the training. The NDAA for Fiscal Year 2016 included a requirement for DOD to add questions on servicemembers financial literacy to its annual survey and use the results as a benchmark to evaluate and update the continuing financial literacy training DOD will provide to servicemembers in the future. The NDAA for Fiscal Year 2016 also requires DOD to develop ongoing financial literacy training for servicemembers to take at key career and life stages. 
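One way to act on individual assessment results, once they are collected, is to aggregate them by topic and flag the areas where comprehension is weakest. The sketch below is illustrative only; the topic names, data layout, and 70 percent threshold are our assumptions, not features of DOD's courses.

# Illustrative aggregation of individual post-test results by topic area.
from collections import defaultdict

def topic_pass_rates(responses):
    # responses: iterable of (topic, answered_correctly) pairs, one per question.
    totals = defaultdict(int)
    correct = defaultdict(int)
    for topic, answered_correctly in responses:
        totals[topic] += 1
        if answered_correctly:
            correct[topic] += 1
    return {topic: correct[topic] / totals[topic] for topic in totals}

def weak_topics(responses, threshold=0.70):
    # Topics falling below the threshold could be candidates for revised course content.
    return sorted(t for t, rate in topic_pass_rates(responses).items() if rate < threshold)

sample = [("TSP matching", True), ("TSP matching", False),
          ("Lump-sum option", False), ("Lump-sum option", False),
          ("Continuation pay", True), ("Continuation pay", True)]
print(weak_topics(sample))  # ['Lump-sum option', 'TSP matching']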
DOD has the opportunity to ensure that individual knowledge assessments are included in the guidance it provides the service branches on the key objectives that must be met in these trainings. <3.2. DOD Can Learn from Servicemembers' Challenges Taking the BRS Opt-In Course to Improve Its Ongoing Training> Military personnel cited multiple challenges described by servicemembers in taking the BRS Opt-In Course and seeking financial literacy support. In our interviews at five military installations, military supervisors and financial counselors said they believed servicemembers had difficulty (1) understanding the training due to their low financial literacy; (2) taking, and relating to, optional financial literacy training due to mission and short-term life goals; and (3) setting up online access to their TSP accounts. <3.2.1. Servicemembers' Financial Literacy>
DOD officials said one challenge to getting servicemembers to seek out more personalized one-on-one financial help is the perception that servicemembers seek PFMs primarily after facing financial hardship. These officials said they are working to shift the military culture so servicemembers seek out PFMs for financial planning purposes similar to how civilians use financial counselors. <3.2.2. Balancing Financial Literacy Education with Competing Priorities> Military supervisors and PFMs told us that servicemembers had difficulty seeking out financial literacy support because of demanding operational schedules and a focus on short-term life and mission goals. This was especially true for junior servicemembers, who may be uncomfortable requesting time away from their mission duties. Further, some military supervisors said junior servicemembers tended not to recognize the importance of saving for retirement when faced with other, more immediate, financial priorities, such as purchasing a car. One group of military supervisors said that since most junior servicemembers do not seek out retirement advice, they try to find opportunities to weave the topic into other discussions, for example, about how taking out a car loan can impact a junior servicemember s saving for retirement. <3.2.3. Setting Up TSP Online Account Access> Servicemembers can manage their TSP accounts online by viewing current plan information and making or changing contribution allocations; however, setting up an online account depends on servicemembers having a stable mailing address. The Federal Retirement Thrift Investment Board (FRTIB), which administers the TSP, mails participants a time-sensitive TSP password required to access their TSP accounts online. Some military supervisors said that servicemembers reported difficulty receiving their initial TSP password because they relocate often and may lack a permanent mailing address. FRTIB officials acknowledged that this fraud prevention measure might make it more difficult for participants to access their TSP accounts, but noted that they must balance security with ease of use and have not yet found any viable options to address this issue. Federal government internal controls standards state that entities should use appropriate methods to communicate so that information is readily available when needed. <4. Additional Information Explaining BRS Lump-Sum Payment Options Needed for Servicemembers to Make Informed Choices> <4.1. BRS Offers a Time- Limited, Partial Lump-Sum Payment Using a Single Discount Rate for All Servicemembers> Under the Blended Retirement System (BRS), military retirees with 20 or more years of service may choose, when they retire, to convert part of their monthly annuity into a lump sum payment, in exchange for a temporarily lower monthly benefit. The lump-sum payment is partial in two ways: 1) servicemembers may convert either 25 or 50 percent of their annuity payments to a lump-sum payment, and 2) the lump-sum conversion only applies to annuity payments payable prior to the servicemember s Social Security full retirement age (FRA) age 67 for those born in 1960 or later. After the service member reaches FRA, the annuity payments revert to the full monthly pension. (See fig. 6.) In its final report, the Military Compensation and Retirement Modernization Commission recommended that the new military retirement system should offer a lump-sum payment option to increase flexibility for retiring servicemembers and remain fiscally sustainable. 
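The mechanics of the partial conversion described above can be shown with a brief illustration. The sketch below uses a hypothetical monthly annuity amount; the size of the lump-sum payment itself depends on the discount rate discussed later in this section.

# Illustration of the 25 or 50 percent election: the monthly annuity is reduced by the
# elected fraction until Social Security full retirement age (FRA), then reverts in full.
# The $2,000 monthly annuity is a hypothetical figure.

def monthly_annuity_before_fra(full_monthly, election_fraction):
    return full_monthly * (1 - election_fraction)

full_monthly = 2000.0
for fraction in (0.25, 0.50):
    reduced = monthly_annuity_before_fra(full_monthly, fraction)
    print(f"{int(fraction * 100)}% lump-sum election: ${reduced:,.0f} per month until FRA, "
          f"then ${full_monthly:,.0f} per month thereafter")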
Since many servicemembers retire from the military at a younger age than most civilians in the workplace, DOD officials said that some military retirees might prefer a lump-sum payment to start a business or buy a house.
Personal Discount Rates (sidebar): Personal discount rates can be derived from individuals' behavior when faced with intertemporal monetary choices. In contrast, more traditional approaches to pension discount rates are based on financial market data or expectations rather than on individual preferences or behavior. In theory, personal discount rates reflect individuals' valuation of money received today versus in the future. However, behavioral economic research has shown that people do not always make rational choices related to foregoing current benefits for future payoff.
To calculate a lump-sum amount, DOD applies a discount rate to determine the present value of the given stream of converted pension payments. The NDAA for Fiscal Year 2016 directed the Secretary of Defense to choose a discount rate for BRS lump sums that (1) uses average personal discount rates that take into consideration applicable and reputable studies of personal discount rates for military personnel and past actuarial experience in the calculation of personal discount rates, and (2) is in accordance with generally accepted actuarial principles and practices. Researchers have sought to quantify personal discount rates by studying personal choices in a variety of contexts involving the tradeoff of payoffs at different times (see sidebar). Two such studies involved military personnel being offered lump-sum payments in lieu of annuity payments. According to the Institute for Defense Analyses (IDA), the studies computed an estimated average personal discount rate for servicemembers who were presented with the offer, based on the choices by servicemembers to either elect the lump-sum payment or the annuity. DOD officials told us that, to comply with the requirements of the NDAA for Fiscal Year 2016, they considered several factors to set the discount rate for BRS lump-sum calculations. DOD officials said they first contracted with a research organization to estimate a range of personal discount rates based on past studies. They said they then adjusted that range based on differences between the specific features of past lump-sum offers and those of BRS lump sums. They also considered how a lump-sum offer could impact the retention of military personnel, since DOD relies on a percentage of experienced servicemembers to continue serving beyond 20 years. DOD officials told us they wanted to reduce the likelihood that a lump-sum payment would lead more people to retire earlier than they would otherwise. Finally, even though past studies had found higher personal discount rates (resulting in smaller lump-sum amounts) for enlisted servicemembers than officers, DOD officials told us it would go against core values of military compensation if the agency did not apply the same discount rate to all lump-sum payments, regardless of the servicemember's rank. Considering all of these factors, DOD devised a formula for setting what it termed the "Government Discount Rate" (GDR) that would be used in calculating BRS lump-sum amounts. DOD constructed the GDR by starting with a market index of high-quality corporate bond rates and then adding an adjustment factor so that the GDR fell within the range of observed personal discount rates. According to DOD, this method for setting the rate will be reexamined at least every 4 years.
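The present-value calculation implied by this approach can be sketched as follows. This is a simplified illustration, not DOD's actuarial method: it discounts the elected share of a level monthly annuity from retirement to FRA at a real discount rate, assumes monthly compounding, applies no mortality discount, and uses hypothetical inputs.

# Simplified present-value sketch: the elected share of the monthly annuity payable
# from retirement until FRA, discounted at a real rate. All inputs are hypothetical.

def brs_lump_sum(monthly_annuity, election_fraction, retire_age, fra_age, real_rate):
    months = int((fra_age - retire_age) * 12)
    monthly_rate = (1 + real_rate) ** (1 / 12) - 1
    payment = monthly_annuity * election_fraction
    return sum(payment / (1 + monthly_rate) ** m for m in range(1, months + 1))

# A 50 percent election on a hypothetical $2,000 monthly annuity, retirement at age 40,
# FRA at 67, discounted at an illustrative real rate of 6.81 percent (the 2019 GDR
# discussed below).
print(f"${brs_lump_sum(2000.0, 0.50, 40, 67, 0.0681):,.0f}")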
The GDR for 2019 is 6.81 percent, which is a real interest rate that does not include an inflation component. To compare the GDR to more common nominal interest rates, an inflation adjustment must be added. For example, if inflation were assumed to be 2.4 percent per year, a GDR of 6.81 percent would be approximately equivalent to a nominal discount rate of 9.37 percent. <4.2. BRS Lump Sums Are Calculated Using a Higher Discount Rate than Private-Sector Pension Plans, Leading to Smaller Lump-Sum Payments by Comparison> The method used to determine BRS lump-sum payment amounts is likely to result in a discount rate that is higher based on recent interest rates, roughly double than that used to calculate minimum lump-sum distributions from private-sector pension plans, when all other factors are equal. The discount rates for determining minimum lump-sum amounts for private-sector pension plans that offer them are governed by ERISA. The Internal Revenue Service (IRS) publishes the discount rates applicable to minimum lump-sum determinations each month, based on ERISA provisions. For 2018, these rates generally fell in the range of 2.5 to 4.9 percent, on a nominal basis, compared to the GDR, which was about 9 percent, on a nominal basis (depending on assumed inflation). We found, based on recent interest rates, holding age and monthly annuity amounts constant, the higher discount rate applied to BRS lump- sum calculations would significantly reduce servicemembers lump-sum payment amounts. Additionally, we found that the percentage difference would be the largest at younger retirement ages, since the difference in discount rates would have an impact over a longer period of time. For a servicemember retiring at age 40, for example, we found BRS lump-sum payments to be about 40 percent smaller, based on recent interest rates, than if calculated following the requirements under ERISA. (See fig. 7.) For more information on ERISA and our methodology for calculating lump-sum payments, as well as sensitivity testing and factors that can affect this comparison at different points in time, please see appendix I. DOD officials told us that the discount rate used for BRS lump-sum payments was different than the rate used in private-sector pension plans for some key reasons. DOD officials said the NDAA for Fiscal Year 2016 required that the BRS discount rate be based on average personal discount rates, which is a different approach to discount rates than that used under ERISA. DOD officials also said the agency relies on maintaining a certain percentage of servicemembers with 20 or more years of service and did not want the offer of a lump-sum offer to entice too large a percentage of servicemembers to leave military service. However, knowledgeable stakeholders expressed some concerns with the higher discount rate used to determine BRS lump-sum payment amounts. For example, the DOD Board of Actuaries stated that a relatively high discount rate, and the lower lump-sum payments that would result, could be perceived as taking advantage of servicemembers. Additionally, the American Academy of Actuaries said those who accept lump-sum payments using higher discount rates are likely to either not understand the financial value of their annuity benefits or have an immediate financial need. On the other hand, stakeholders we interviewed noted that BRS s lump-sum feature was intended to provide options to servicemembers, which was a central component of implementing BRS. <4.3. 
Servicemembers Could Benefit From More Information on Lump-Sum Distributions> Although current active-duty servicemembers eligible to choose a lump- sum payment are not scheduled to retire until 2026, at the earliest, DOD can take certain steps to help them better understand the tradeoffs associated with the lump-sum option. Decisions about lump-sum options are complicated, and stakeholders knowledgeable about financial literacy have pointed out the importance of providing sufficient information about the tradeoffs involved for those making such decisions. In a 2015 report, we identified key information to help individuals in the private sector make an informed decision when considering a lump-sum payment versus an annuity. (See table 2.) Without this key information, service personnel will be unable to prudently weigh the advantages and disadvantages of the lump-sum option in their retirement decisions. DOD officials said they posted a training video on the BRS lump-sum option to the BRS website in July 2019. Servicemembers also have access to other descriptive material on the BRS website, such as a fact sheet on the BRS lump sum, and the BRS calculator to estimate their lump-sum payment with some assumptions about future pay. <5. Conclusions> The shift from the legacy retirement system to BRS marks a significant change in retirement benefits for an estimated 1.7 million military servicemembers. While more servicemembers will receive retirement benefits under BRS than under the legacy retirement system, BRS will require servicemembers to more actively and continuously manage their retirement decisions throughout their military career and in retirement. As an employer, DOD is well positioned to provide financial literacy training and support to servicemembers as they make retirement decisions. DOD has designed a multi-faceted approach to provide resources over time and in a variety of formats, increasing the likelihood that servicemembers will be able to find guidance when they need it. DOD completed a large undertaking in educating servicemembers about the choice they faced in deciding whether to opt into BRS, but this was only the first step in educating servicemembers about how to maximize and manage their retirement savings under BRS. In educating servicemembers about saving for retirement, DOD would benefit from applying the financial literacy training effective practices identified by experts, especially periodically assessing employees financial understanding and using these assessments to revise and tailor ongoing training. Given that young servicemembers are often stationed in multiple locations for short amounts of time and that BRS places increased responsibility on servicemembers to save for retirement through TSP contributions, it is important that servicemembers receive the necessary information to access their TSP accounts online in a timely manner. The current TSP password process has limited some servicemembers ability to manage their accounts. It is important for FRTIB to expeditiously address this issue. Of additional concern is how DOD will ensure that servicemembers understand the tradeoffs associated with BRS s lump-sum feature. BRS lump-sum payments are calculated using a higher discount rate than private-sector pension plans, which results in lower lump-sum payments, by comparison. 
While the BRS lump sum is limited, and the full annuity amount would resume at servicemembers Social Security full retirement age, the reduced annuity paid until then could still have a significant impact on some servicemembers financial security. A fundamental element of BRS is the greater responsibility and choice placed on individuals. To work well, such a system requires that sufficient, clear, and accurate information be provided so that servicemembers can make the prudent choices best suited to their personal financial situations. Consistent with this principle, DOD should ensure that the information and tools that it provides to eligible servicemembers about the lump sum clearly lay out the tradeoffs of this decision and allows those eligible to make a well-informed prudent choice that best meets their individual financial circumstances. <6. Recommendations for Executive Action> The Secretary of Defense should evaluate the results of its financial literacy training assessments to determine where gaps in servicemembers financial knowledge exist and revise future trainings to address these gaps. (Recommendation 1) The Secretary of Defense should provide servicemembers disclosures that explain key pieces of information about the lump-sum payment, including some measure of its relative value, the potential positive and negative financial ramifications of choosing the lump-sum payment option, and a description of how it was calculated. (Recommendation 2) The Executive Director of the Federal Retirement Thrift Investment Board should work with the Secretary of Defense to explore alternative options (including online resources) for servicemembers to receive their initial Thrift Savings Plan password so that servicemembers can access and manage their online accounts without added delays. (Recommendation 3) <7. Agency Comments> We provided a draft of this report to the Secretary of Defense and the Executive Director of the Federal Retirement Thrift Investment Board for review and comment. In its letter, which is reproduced in appendix II, DOD concurred with the report s recommendations and offered comments on some of our findings. For recommendation 1, regarding the evaluation of its financial literacy training assessments, DOD stated that in 2017 it added questions to its annual Status of Forces Survey to assess the military population s understanding of basic financial concepts. While these survey results will allow DOD to respond to identified gaps in servicemembers financial literacy, Status of Forces survey results have taken years to compile in the past. Assessing servicemembers financial literacy as part of mandatory trainings will allow DOD to more promptly identify gaps in servicemembers knowledge and adjust trainings to address those gaps. For recommendation 2, regarding the provision of information on the BRS s lump-sum payment options, DOD stated that it has developed a training course, published information to help educate servicemembers on the BRS s lump-sum option, and included a lump-sum section in its BRS calculator. While we are encouraged by DOD s efforts to develop various tools for educating servicemembers on the BRS s lump-sum option, in this report we identified additional information that is important to include in lump sum disclosures. In its letter, DOD expressed concern that the title of our report focused only on one aspect of our findings. We believe that the title accurately reflects our report s key findings, conclusions, and recommendations. 
DOD also said that the agency did not intend for the BRS Opt-In Course to be financial literacy training, and thus were concerned that we evaluated this training based on the effective practice identified in prior GAO work of assessing employees financial literacy to provide assistance and help set priorities. However, we believe that our use of this effective practice to evaluate the BRS Opt-in Course is consistent with our prior findings that employers are well-suited to provide financial education and help individuals improve their financial decision making. We compared the BRS Opt-In Course to this effective practice because the course provided DOD an opportunity to assess whether servicemembers understood key aspects of BRS, undoubtedly a key aspect of servicemembers financial well-being. In addition, DOD stated that the agency viewed servicemembers initial low pass-rate of the BRS Opt-In Course as a positive result because they designed the course to be rigorous and it forced servicemembers to retake the parts of the training where they were failing to comprehend the course material. DOD also stated that revising the training during the 2017 training period was not practical because it would have resulted in some servicemembers receiving disparate training formats and materials. We understand DOD s concerns; however as DOD continues to develop additional financial literacy training we encourage the agency to consider that low pass rates on post-training tests often indicate a gap in knowledge and a possible need to revise the training. In its final comment, DOD agreed with us that there is a lack of reliable data for comparing the BRS lump-sum feature with those provisions offered by state and local government pension plans. DOD also stated that the BRS lump-sum feature was unique and therefore not comparable to private-sector pension plans governed by ERISA. Although there are differences between BRS and ERISA, the BRS and ERISA lump-sum provisions are the only defined benefit lump sum conversion provisions that are specified under federal law. Further, the lump-sum provisions for both reflect a participant choice that can have important consequences for a participant s financial security. Our recommendation is premised on the principle that regardless of which particular features a pension plan offers, participants need clear, complete, and accurate information to make prudent decisions regarding their retirement security. The FRTIB also provided comments, reproduced in appendix III, and generally agreed with the report s findings and conclusions. The FRTIB also concurred with our recommendation regarding the provision of TSP passwords to military personnel and said that they will continue to explore avenues to address how servicemembers receive their initial TSP password while continuing to emphasize the need for security. DOD and FRTIB provided technical comments, which we incorporated into the report as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the report date. We are sending copies of this report to the Secretary of Defense, the Executive Director of the Federal Retirement Thrift Investment Board, the Director of the Consumer Financial Protection Bureau, and other interested parties. This report is also available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact Charles Jeszeck at (202) 512-7215 or jeszeckc@gao.gov or Frank Todisco at (202) 512-2700 or todiscof@gao.gov. Mr. Todisco meets the qualification standards of the American Academy of Actuaries to address the actuarial issues contained in this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology This appendix discusses in detail our methodology for addressing (1) what actions the Department of Defense (DOD) has taken to help servicemembers understand the Blended Retirement System (BRS) and, more generally, educate servicemembers on saving for retirement; (2) what DOD can learn from financial literacy training effective practices and the implementation of BRS training to continue supporting servicemembers in saving for retirement; and (3) how lump-sum payment amounts are determined under BRS and how they compare to the methods used for private-sector pension plans that offer them. To answer all of these questions, we interviewed officials at DOD, the Federal Retirement Thrift Investment Board (FRTIB), the Consumer Financial Protection Bureau (CFPB), and other organizational stakeholders knowledgeable about the military and retirement. We also reviewed relevant agency documents and federal laws and regulations. To understand how DOD helped servicemembers understand BRS, we reviewed DOD s centralized training and outreach material. We also conducted group interviews with senior officers and enlisted servicemembers on military installations to learn about some of the informal training and mentorship provided by military leaders. We used the following criteria to select military installations to visit: 1. Sufficient number of BRS-eligible personnel available to participate 2. High number of active-duty servicemembers stationed at the 3. Availability of a Personal Financial Manager (PFM) on the installation 5. Mix of single service versus joint bases 6. Proximity to an urban center 7. Primary mission of the installation is operational (versus training) We selected five military installations to visit: Camp Pendleton (Marine Corps), Fort Sam Houston (Army), Naval Base San Diego (Navy), and Randolph Air Force Base and Scott Air Force Base (Air Force). At each installation, we met with separate groups of 8 to 12 senior enlisted servicemembers and senior officers. These senior servicemembers supervise junior servicemembers who, as a group, were most likely to have had to make a decision on whether to opt into BRS. We also met with the groups installation-level financial management professionals Personal Financial Managers (PFM), Personal Financial Counselors (PFC), or Command Financial Specialists (CFS) who provide servicemembers additional financial literacy training and one-on-one financial counseling. We asked questions of all group interview participants related to: 1. Information provided to servicemembers about BRS 2. Common needs of servicemembers in making decisions about BRS 3. Common questions servicemembers had about BRS 4. Challenges experienced in providing training and/or support 5. 
Anticipated future needs for training and/or support These interviews provided insights into senior officers and enlisted servicemembers experiences facilitating the rollout of BRS training to junior servicemembers, but did not yield information that was generalizable to all senior officers and enlisted servicemembers. We also reviewed and compared DOD s financial literacy trainings to financial literacy training effective practices. To identify financial literacy effective practices, we reviewed published articles and reports on the topic. Our review included a March 17, 2015 forum GAO convened with 20 financial literacy leaders and experts focusing on financial education in the workplace, and the subsequent report, Financial Literacy: The Role of the Workplace, GAO-15-639SP (Washington, D.C.: July 2015). The report provided the best single compilation of financial literacy effective practices from a diverse set of experts from the private, non-profit, governmental, and academic sectors. The report summarizes forum participants discussions across seven topic areas. Of these seven, we selected two that were most germane to DOD s BRS training: (1) Employers should address the needs of traditionally underserved workplace populations, and (2) Effective practices can include automatic enrollment in retirement plans, financial health checks, and personalization. Across these two topic areas, we selected the five financial literacy training effective practices that were most relevant to the type of trainings DOD developed for BRS. Specifically, we determined if BRS trainings (1) contain unbiased information, (2) contain links to one- on-one financial help, (3) leverage trusted messengers, (4) assess participants financial literacy so DOD can provide assistance and help set priorities, and (5) enable participants to take action directly from the course. To understand how BRS lump-sum payments are determined, we interviewed DOD officials to learn about the issues they considered when designing BRS s lump-sum feature, how DOD determines the discount rate it uses for lump-sum payments, and how the BRS discount rate used to calculate lump sums relates to personal discount rates. To understand discount rate issues applicable to lump-sum payments in other pension plans, we interviewed stakeholders knowledgeable about other pension plans, consulted with our internal actuarial experts, and reviewed relevant prior work. We also consulted with actuaries at DOD to clarify our technical understanding of the calculation of lump-sum amounts under BRS. We created a lump-sum payment calculator to run simulations of various lump-sum calculations including those used in private-sector pension plans to show the effect that varying certain calculation methods and assumptions can have on the value of the lump-sum payment. We calculated and compared illustrative lump-sum amounts under BRS to what those lump-sum amounts would have been under federal laws and regulations applicable to private-sector pension plans. We did not do a similar comparison to public-sector pension plans because of a lack of reliable, generalizable data on the prevalence of lump sums offered by the many state and local government plans and the applicable discount rates used. 
Some lump-sum options under state and local government plans do not require a discount rate at all because they return employee contributions with interest or are a deferred retirement option provision (DROP) rather than lump sums that involve discounting future promised payments. Different state or local governments might set their own rules regarding any lump sums. In contrast, the lump-sum provisions applicable to both BRS and private-sector pension plans under the Employee Retirement Income Security Act of 1974, as amended (ERISA), are set in federal law. The following section provides additional technical detail regarding the methods used to determine the lump-sum discount rate (the Government Discount Rate, or GDR) under BRS; the methods used to determine discount rates for determining minimum lump-sum amounts under ERISA; a discussion of key differences between BRS and ERISA approaches; and the methods and assumptions we used to compare BRS lump-sum amounts to minimum lump sums under ERISA, along with a discussion of how the comparison could vary over time. <8. Comparison of Lump-Sum Amounts under BRS and Private-Sector Pension Plans> DOD's construction of the GDR begins with a 7-year average of estimated high-quality corporate bond real interest rates for maturities of about 23 years, and then adds an add-on factor to bring the discount rate up to a level consistent with applicable studies of personal discount rates, subject to possible adjustments for DOD concerns about retention of servicemembers. DOD officials told us that the 23-year maturity was intended to reflect the average time between a servicemember's retirement from the military until Social Security full retirement age (FRA). The 7-year averaging is for the purpose of smoothing out short-term fluctuations in interest rates. The add-on for 2018 and 2019 is 4.28 percentage points. The GDR for 2019 is 6.81 percent, which is a real discount rate that does not include an inflation component. Interest rates are often regarded, economically, as consisting of two components: a portion to cover expected inflation (the inflation component), plus a portion to provide a return in excess of inflation (the real return component). For example, if inflation expectations are 2.50 percent per year, and the interest rate on a bond is 4.50 percent, then the bond is expected to provide a real return (in excess of inflation) of approximately 1.95 percent ((1.0450 / 1.0250 - 1) x 100). In this case, 4.50 percent would be referred to as the nominal interest rate and 1.95 percent would be referred to as the real interest rate. In order to convert the GDR into an equivalent nominal discount rate (for comparison to ERISA discount rates), an inflation assumption is needed. We used an inflation assumption of 2.40 percent per year, which is the inflation assumption used by the Congressional Budget Office (CBO) in its 2019 long-term budget outlook. As a result, with this inflation assumption, the nominal discount rate equivalent to the GDR of 6.81 percent is 9.37 percent ((1.0681 x 1.0240 - 1) x 100). Military pensions (both under legacy and BRS) are increased each year to fully keep up with inflation. The lump-sum equivalent of such a benefit could be calculated in one of two ways, which mathematically would produce the same result: (1) applying the nominal discount rate (in this example, 9.37 percent) to the projected increasing series of monthly annuity benefits, or (2) applying the real discount rate (in this example, 6.81 percent) to a fixed (not inflation-indexed) monthly annuity.
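The two relationships described in this paragraph can be verified numerically. The short check below uses the figures above and, for simplicity, annual rather than monthly payments; the payment amount and horizon are arbitrary.

# Check of the real-to-nominal conversion and of the equivalence of the two
# calculation approaches for an inflation-indexed annuity.
real_gdr = 0.0681
inflation = 0.0240
nominal = (1 + real_gdr) * (1 + inflation) - 1
print(f"Nominal equivalent of the GDR: {nominal:.2%}")  # about 9.37%

payment, years = 12000.0, 27  # arbitrary illustration
pv_nominal = sum(payment * (1 + inflation) ** t / (1 + nominal) ** t for t in range(1, years + 1))
pv_real = sum(payment / (1 + real_gdr) ** t for t in range(1, years + 1))
print(round(pv_nominal) == round(pv_real))  # True: the two approaches agree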
For determining minimum lump sums under ERISA, the discount rate is actually a combination of three segment rates that reflect bond yields at different maturities: a short-term rate to discount future payments due in the next 5 years, a medium-term rate to discount future payments due between 5 and 20 years out, and a long-term rate to discount future payments due beyond 20 years. These are nominal rates. These rates are published monthly by the Internal Revenue Service (IRS) and are based on an average of high-quality corporate bond rates for the month. Private-sector pension plan sponsors have some flexibility in selecting a method for determining which monthly averages would be used to calculate lump sums offered in a particular plan year. As a result, for a lump sum payable in a particular month, the applicable ERISA segment rates could be those for a month up to 16 months prior to the month of the lump-sum payment, depending on the provisions of the plan. Minimum lump sums under ERISA also include a mortality discount, which means that the lump sum is reduced to reflect the fact that for any future scheduled pension payment, there is a probability that the retiree will no longer be alive to receive it. We included this mortality discount in our ERISA calculations. DOD decided not to include a mortality discount in the BRS lump-sum methodology. DOD officials told us that mortality rates from age 44 to age 67 are relatively small, such that the impact of including mortality would be overwhelmed by minor changes in the discount rate. As a result, for simplicity, they decided not to include a mortality discount. Not including a mortality discount has the effect of making the BRS lump sum somewhat more generous than it would be if it included a mortality discount. Thus, key differences in the determination of lump-sum amounts under BRS and for ERISA minimums include the following: The development of the GDR starts with corporate bond rates for a 23-year maturity, whereas the ERISA segment rates are based on corporate bond rates for many maturities that are summarized into three segment rates for three different ranges of maturities. Thus, the comparison at any point in time will be affected by the shape of the yield curve. The development of the GDR starts with a 7-year average of corporate bond rates, whereas the ERISA segment rates are based on more current corporate bond rates. Thus, the comparison at any point in time will be affected by movements in interest rates in the prior 7 years. The GDR includes an add-on, currently 4.28 percentage points, to bring the GDR in line with applicable studies of personal discount rates. According to DOD, the add-on also takes into account considerations of retention of military personnel. Thus, the comparison at any point in time will be affected by any changes DOD makes to the magnitude of the GDR add-on. The determination of the minimum lump sum under ERISA includes a mortality discount; the determination of lump sums under BRS does not. The GDR applies over an entire calendar year, whereas the segment rates change month to month, and the segment rates applicable to a particular month s lump sum could be the published rates for up to 16 months prior, depending on the plan provisions. For our comparison, we assumed a lump sum payable in June 2019. As noted earlier, the applicable GDR for 2019 is 6.81 percent, and the nominal equivalent rate, based on our inflation assumption of 2.40 percent, is 9.37 percent. 
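A simplified version of this comparison can be sketched in a few lines. The sketch below uses annual payments growing with the 2.40 percent inflation assumption, applies the May 2019 segment rates cited later in this appendix, and omits the ERISA mortality discount, so its result is only illustrative and does not exactly reproduce the percentage differences reported in this appendix.

# Simplified BRS-versus-ERISA discounting comparison for an age-40 retiree with
# payments to FRA at 67. Annual payments, no mortality discount; illustrative only.

INFLATION = 0.024
YEARS = 27
BASE_PAYMENT = 12000.0  # hypothetical first-year converted annuity amount

def erisa_segment_rate(year):
    # Three-segment structure: short-, medium-, and long-term rates (May 2019 values).
    if year <= 5:
        return 0.0272
    if year <= 20:
        return 0.0376
    return 0.0433

def present_value(rate_for_year):
    return sum(BASE_PAYMENT * (1 + INFLATION) ** t / (1 + rate_for_year(t)) ** t
               for t in range(1, YEARS + 1))

brs_pv = present_value(lambda t: 0.0937)      # nominal equivalent of the GDR
erisa_pv = present_value(erisa_segment_rate)
print(f"BRS-style amount is about {1 - brs_pv / erisa_pv:.0%} smaller than the ERISA-style amount")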
For the ERISA minimum lump sum, we used the May 2019 segment rates published by IRS, which are 2.72 percent for the first 5 years scheduled payments, 3.76 percent for the next 15 years payments, and 4.33 percent for the scheduled payments beyond 20 years. We also included the mortality discount in the ERISA calculation. As noted in the body of this report, the result was that the BRS lump sum was 42 percent smaller than it would have been under ERISA rules for an age-40 retirement, and 32 percent smaller for an age-50 retirement. We also looked at the range of ERISA segment rates over the 16-month period from February 2018 through May 2019 to determine the range of potential results depending on which month s ERISA rates might apply for a particular plan. The BRS lump sum ranged from 38 percent smaller to 42 percent smaller than on an ERISA basis for an age-40 retirement and from 28 percent smaller to 32 percent smaller for an age-50 retirement. We also calculated sensitivities from varying the inflation assumption. As noted earlier, we used an inflation assumption of 2.4 percent, the inflation assumption used by the CBO in its 2019 long-term budget outlook. If instead we used an inflation assumption of 2.0 percent (and the May 2019 ERISA segment rates), the BRS lump sum would have been 39 percent smaller than on an ERISA basis for an age-40 retirement, and 30 percent smaller for an age-50 retirement. The other key differences, noted earlier, in the determination of lump-sum amounts under BRS and for ERISA minimums could also affect the comparison at any point in time. However, we believe the comparisons presented in this report are a reasonable representation of the general magnitude of the differences in lump-sum amounts under BRS compared to the minimum amount required under ERISA. We conducted this performance audit from March 2018 to September 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of Defense Appendix III: Comments from the Federal Retirement Thrift Investment Board Appendix IV: GAO Contacts and Staff Acknowledgments <9. GAO Contacts> <10. Staff Acknowledgments> In addition to the contacts named above, Mark M. Glickman (Assistant Director), Anjali Tekchandani (Analyst-in-Charge), Cynthia Nelson, and Stephen C. Yoder made key contributions to this report. Also contributing to this report were Vincent Balloon, Alicia Cackley, Virginia Chanley, Sheila R. McCoy, Mimi Nguyen, Stacy Ouellette, Joseph Silvestri, Adam Wendel, and Seyda Wentworth. | Why GAO Did This Study
DOD's new retirement system, BRS, provides automatic and matching DOD contributions to servicemembers' individual Thrift Savings Plan accounts but reduces the retirement annuity paid to those who serve at least 20 years. BRS also offers servicemembers the option of taking part of their retirement annuity as a lump-sum payment.
GAO was asked to describe DOD's financial education efforts under BRS. This report examines (1) actions DOD has taken to help servicemembers understand BRS and saving for retirement, (2) what DOD can learn from financial literacy training effective practices and its implementation of BRS training to continue supporting servicemembers in saving for retirement, and (3) how BRS lump-sum payment amounts are determined.
GAO reviewed DOD's efforts to educate servicemembers on retirement decisions, conducted group interviews with senior officers and enlisted servicemembers at five military installations on facilitating the rollout of BRS training to junior servicemembers, and created a lump-sum payment calculator to compare different calculation methods and assumptions on the value of the lump-sum payment.
What GAO Found
In 2016, the Department of Defense (DOD), along with the military service branches, began a multi-year effort to provide training to help servicemembers make informed decisions about saving for retirement through DOD's new retirement system, the Blended Retirement System (BRS). DOD provided computer-based training to help military supervisors, financial counselors, and eligible servicemembers understand the new retirement system, implemented in 2018, and its impact on saving for retirement. DOD trained financial counselors to provide servicemembers in-person, one-on-one financial counseling and classroom courses on BRS and related topics. In addition, DOD prepared ongoing financial literacy training that servicemembers will take upon reaching specific career and life stages.
BRS trainings met many of the effective practices for financial literacy training identified in prior GAO work, but some DOD trainings do not incorporate the practice of assessing servicemembers' financial literacy. DOD could use such assessments to modify course material to bolster training in areas where servicemembers' comprehension was weaker. Without assessing whether its financial literacy training is effectively conveying course information, DOD may be missing opportunities to better support servicemembers' retirement decisions. Servicemembers also reported challenges in taking the Opt-In Course for BRS that may inform ongoing and future DOD training.
Examples of Servicemembers' Financial Literacy Challenges on Retirement
DOD determines BRS lump-sum payment amounts at retirement by applying an interest rate (or discount rate) to calculate the present value of annuity payments servicemembers forego by taking a lump sum. The BRS discount rate exceeds the rate used by private-sector pension plans, resulting in a lower lump sum than if private-sector rates applied. DOD can take certain steps to help servicemembers understand how to compare the BRS lump-sum payment option with the full annuity option. Without this information, servicemembers may not make informed decisions and potentially risk their retirement savings.
What GAO Recommends
GAO recommends 1) DOD assess its course evaluations to improve its financial literacy training on retirement for servicemembers, 2) DOD provide key information on the calculation of retirement lump-sum payments, and 3) Federal Retirement Thrift Investment Board explore alternatives for servicemembers to receive their TSP passwords. Both agencies agreed with their respective recommendations. |
<1. Key Federal Actions to Respond to and Recover from COVID-19 and Our Recommendations for Executive Action and Matters for Legislative Action> In response to the national public health and economic threats caused by COVID-19, four relief laws were enacted as of June 2020, including the CARES Act in March 2020. These laws have appropriated $2.6 trillion across the government. Six areas (the Paycheck Protection Program (PPP), Economic Stabilization and Assistance to Distressed Sectors, unemployment insurance, economic impact payments, the Public Health and Social Services Emergency Fund, and the Coronavirus Relief Fund) account for 86 percent of the appropriations (see fig. 1). Total federal spending data are not planned to be readily available until July 2020. It is unfortunate that the public will have waited more than 4 months since the enactment of the CARES Act for access to comprehensive obligation and expenditure information published by federal agencies about the programs funded through these relief laws. In the absence of comprehensive data, we collected obligation (government financial commitments) and expenditure data from agencies, to the extent practicable, as of May 31, 2020. For the six largest spending areas, we found that obligations totaled $1.3 trillion and expenditures totaled $643 billion. The majority of the difference was due to PPP, for which the Small Business Administration (SBA) obligated $521 billion. The amounts for loan guarantees are not considered expenditures until it is known whether the loans will be forgiven and, for those that are not forgiven, whether they are timely repaid. We also collected spending data on other programs affected by the federal response. For example, we found that the Department of Health and Human Services (HHS) has provided $7 billion in COVID-19 Medicaid funding related to a temporary increase in the Federal Medical Assistance Percentage (FMAP), the statutory formula the federal government uses to match states' Medicaid spending. Based on the information we collected, government-wide spending totaled at least $677 billion as of May 31, 2020. Given the sweeping and evolving public health and economic crisis, agencies from across the federal government were called on for immediate assistance, requiring an unprecedented level of dedication and agility among the federal workforce, including those serving on the front lines, to quickly establish services for those infected with the virus. Consistent with the urgency of responding to serious and widespread health issues and economic disruptions, agencies have given priority to moving swiftly where possible to distribute funds and implement new programs. In moving quickly, however, agencies made trade-offs, and they have made only limited progress so far in achieving transparency and accountability goals. In particular, we identified several challenges related to the federal response to the crisis, as well as recommendations to help address these challenges, including the following: Viral testing. The Centers for Disease Control and Prevention (CDC) reported incomplete and inconsistent data from state and jurisdictional health departments on the amount of viral testing occurring nationwide, making it more difficult to track and know the number of infections, mitigate their effects, and inform decisions on reopening communities. However, HHS issued guidance on June 4, 2020, to laboratories that identifies required data elements to collect and how to report them to CDC.
Distribution of supplies. The nationwide need for critical supplies to respond to COVID-19 quickly exceeded the quantity of supplies contained in the Strategic National Stockpile, which is designed to supplement state and local supplies during public health emergencies. HHS has worked with the Federal Emergency Management Agency and the Department of Defense to increase the availability of supplies. However, concerns remain about the distribution, acquisition, and adequacy of supplies. Paycheck Protection Program. As of June 12, 2020, the Small Business Administration (SBA) had rapidly processed over $512 billion in 4.6 million guaranteed loans through private lenders to small businesses and other organizations adversely affected by COVID-19. As of May 31, 2020, SBA had expended about $2 billion in lender fees. SBA moved quickly to establish a new nationwide program, but the pace contributed to confusion and questions about the program and raised program integrity concerns. First, borrowers and lenders raised a number of questions about the program and eligibility criteria. To address these concerns, SBA and the Department of the Treasury (Treasury) issued a number of interim final rules and several versions of responses to frequently asked questions. However, questions and confusion remained. In June 2020, Congress passed, and the President signed into law, the Paycheck Protection Program Flexibility Act of 2020, which modified key program components. Second, to help quickly disburse funds, SBA allowed lenders to rely on borrower certifications to determine borrowers eligibility, raising the potential for fraud. We recommend that SBA develop and implement plans to identify and respond to risks in PPP to ensure program integrity, achieve program effectiveness, and address potential fraud. SBA neither agreed nor disagreed, but we believe implementing this recommendation is essential. Economic impact payments. The Internal Revenue Service (IRS) and Treasury moved quickly to disburse 160.4 million payments worth $269 billion. The agencies faced difficulties delivering payments to some individuals, and they face additional risks related to making improper payments to ineligible individuals, such as decedents, and fraud. For example, according to the Treasury Inspector General for Tax Administration, as of April 30, almost 1.1 million payments totaling nearly $1.4 billion had gone to decedents. We recommend that IRS consider cost-effective options for notifying ineligible recipients how to return payments. IRS agreed. Unemployment insurance (UI). States are implementing three new, federally funded UI programs created by the CARES Act and, as of May 2020, states had received 42 million UI claims. The Department of Labor (DOL) has taken steps to help states manage demand, but DOL is developing its approach to overseeing the new UI programs. We will be evaluating DOL s monitoring efforts in future reports. Further, the UI program is generally intended to provide benefits to individuals who have lost their jobs; under PPP, employers are generally required to retain or rehire employees. According to DOL, no mechanism currently exists that could capture information in real time about UI claimants who may receive wages paid from PPP loan proceeds. We recommend that DOL, in consultation with SBA and Treasury, immediately provide help to state unemployment agencies that specifically addresses PPP loans and the risk of improper payments associated with these loans. 
DOL neither agreed nor disagreed with the recommendation, but noted it was planning forthcoming guidance. Contract obligations. Government-wide contract obligations in response to the COVID-19 pandemic totaled about $17 billion as of May 31, 2020. Goods procured include ventilators, and services contracted for include vaccine development. In addition, the CARES Act provided $1 billion for Defense Production Act purchases $76 million of which was awarded to increase production of N95 respirators. <2. Evolving Lessons Learned from Initial COVID-19 Response and Past Emergencies Highlight Areas for Continued Attention> The nation has made some progress in fighting COVID-19. However, the virus continues to pose risks to all Americans, and there is a concern of another wave of infection this fall. This could coincide with the seasonal influenza and hurricane season further straining federal agencies responsible for responding to these events, as well as the health care system. Additionally, the nation s initial response to COVID-19 highlights the challenges presented by an inherent fragmentation across responsibilities and capabilities in the federal biodefense response and health care system, which includes private, public (local, state, and federal governments), and nonprofit entities. Lessons from the initial response, as well as experience from past economic crises, disasters, and emergencies, highlight areas where continued attention and oversight are needed with the focus on improving ongoing response efforts and preparing for potential additional waves of infection. These lessons include the following: Establishing clear goals and defining roles and responsibilities for the wide range of federal departments and other key players are critically important actions when preparing for pandemics and addressing an unforeseen emergency with a whole-of-government response. Providing clear, consistent communication in the midst of a national emergency among all levels of government, with health care providers, and to the public is key. Collecting and analyzing adequate and reliable data can inform decision-making and future preparedness and allow for midcourse changes in response to early findings. Establishing transparency and accountability mechanisms early on provides greater safeguards and reasonable assurance that federal funds reach the intended people and are used for the intended purposes. Such mechanisms also help ensure program integrity and address fraud risks. While Congress has taken a number of actions to help address the pandemic, it continues to consider additional actions both to improve ongoing efforts and implement new ones and develop plans for congressional oversight of the nation s response to and recovery from COVID-19. Congressional oversight plays a vital role in spurring agency progress on matters of national importance. On the basis of our work on past large-scale government responses to economic downturns and other crises, we recommend Congress consider taking legislative action in the following areas: Aviation-preparedness plan. In 2015, we recommended that the Department of Transportation (DOT) work with federal partners to develop a national aviation-preparedness plan for communicable disease outbreaks. DOT agreed, but as of May 2020, maintained that HHS and DHS should lead the effort. Thus far, no plan exists. 
We recommend that Congress take legislative action to require DOT to work with relevant agencies and stakeholders to develop a national aviation-preparedness plan to ensure safeguards are in place to limit the spread of communicable disease threats from abroad while at the same time minimizing any unnecessary interference with travel and trade. Full access to death data. The number of economic impact payments made to decedents highlights the importance of consistently using key safeguards in providing government assistance to individuals. IRS has access to the Social Security Administration s full set of death records, but Treasury and its Bureau of the Fiscal Service, which distribute payments, do not. We recommend that Congress provide Treasury with access to the Social Security Administration s full set of death records and require that Treasury consistently use it to help reduce similar types of improper payments. Medicaid. We previously found that during economic downturns when Medicaid enrollment can rise and state economies weaken the FMAP formula does not reflect current state economic conditions. We previously developed a formula that offers an option for providing temporary automatic, timely, and targeted assistance. We recommend that Congress use this formula for any future changes to the FMAP during the current or any future economic downturn to help ensure that the federal funding is targeted and timely. In the report we issued yesterday, we also describe potential indicators that could be used to monitor public health and economic recovery. The report also contains 41 enclosures that contain information about a wide range of federal programs or initiatives that were created, expanded, or funded in the COVID-19 relief laws. In conclusion, both Congress and the administration have acted to respond to public health and economic threats posed by COVID-19. Federal agencies and personnel acted quickly to stand up new programs or expand existing programs to, among other things, aid individuals, states, and businesses. But much work remains in protecting the health and well-being of Americans, both today and in the coming months, as the nation may be forced to simultaneously confront new waves of COVID-19 infections and seasonal influenza. In our initial report we make recommendations to help improve the effectiveness of the federal government s ongoing response. Our ongoing oversight will continue to focus on improving the government s response and recovery efforts as well as the nation s preparedness for future outbreaks. Chairman Clyburn, Ranking Member Scalise, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time. <3. GAO Contact> For further information about this testimony, please contact A. Nicole Clowers, Managing Director, Health Care, at (202) 512-7114 or clowersa@gao.gov; Katherine Siggerud, Chief Operating Officer, at (202) 512-5600 or siggerudk@gao.gov; or Orice Williams Brown, Managing Director, Congressional Relations, at (202) 512-4400 or williamso@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study
The outbreak of COVID-19 quickly spread around the globe. As of June 17, 2020, the United States had over 2 million reported cases of COVID-19, and over 100,000 reported deaths, according to federal agencies. Parts of the nation have seen severely strained health care systems. The country has also experienced a significant and rapid downturn in the economy. Four relief laws, including the CARES Act, were enacted as of June 2020 to provide appropriations to address the public health and economic threats posed by COVID-19. In addition, the administration created the White House Coronavirus Task Force.
The CARES Act includes a provision for GAO to report regularly on its ongoing monitoring and oversight efforts related to the COVID-19 pandemic. Yesterday, GAO issued its first report (GAO-20-625).
Like the report, this testimony focuses on key actions the federal government has taken to address the COVID-19 pandemic, GAO recommendations for improvement, and evolving lessons learned relevant to the nation’s response to pandemics, among other things. GAO reviewed data and documents from federal agencies about their activities and interviewed federal and state officials as well as industry representatives. GAO also reviewed available economic, health, and budgetary data.
What GAO Found
In response to the national public health and economic threats caused by COVID-19, four relief laws were enacted as of June 2020 that appropriated $2.6 trillion. This funding provided support to individuals, health care providers, businesses, and state and local government.
While complete government-wide data will not be available until July, GAO determined that as of May 31, 2020, a total of about $1.2 trillion of assistance has been provided—close to $700 billion in expenditures and over $500 billion in loan guarantees. Consistent with the urgency of responding to widespread health issues and economic disruptions, agencies have worked hard to give priority to moving swiftly. In moving quickly, however, agencies made trade-offs; thus, only limited progress has been made so far in achieving transparency and accountability goals.
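The roughly $1.2 trillion figure can be reconciled with the obligation and expenditure data cited in this statement. The sketch below is a rough arithmetic check using only the rounded figures reported as of May 31, 2020; it is illustrative, not an official GAO or Treasury accounting, and the variable names are ours.

```python
# Rough reconciliation of COVID-19 relief assistance as of May 31, 2020,
# using only the rounded figures cited in this statement (in billions of dollars).
# Illustrative arithmetic only -- not an official GAO or Treasury accounting.

expenditures_total = 677        # government-wide expenditures identified by GAO
ppp_obligations = 521           # PPP loan guarantees obligated by SBA
obligations_six_areas = 1_300   # obligations for the six largest spending areas
expenditures_six_areas = 643    # expenditures for the six largest spending areas

# Assistance "provided" combines expenditures with PPP loan guarantees, which are
# not recorded as expenditures until loans are forgiven or, if not forgiven, repaid.
assistance_provided = expenditures_total + ppp_obligations        # 1,198
print(f"Assistance provided: about ${assistance_provided / 1000:.1f} trillion")

# PPP also explains most of the gap between obligations and expenditures
# for the six largest spending areas.
gap = obligations_six_areas - expenditures_six_areas              # 657
print(f"PPP share of the obligation-expenditure gap: {ppp_obligations / gap:.0%}")  # ~79%
```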
GAO also identified challenges with the federal response to the crisis, including:
Paycheck Protection Program (PPP). The Small Business Administration (SBA) moved quickly to establish a new nationwide program, but the pace contributed to confusion and questions and raised program integrity concerns. GAO recommends that SBA develop and implement plans to identify and respond to risks in PPP to better ensure program integrity. SBA neither agreed nor disagreed. Implementing GAO’s recommendation is essential.
Economic impact payments. The Internal Revenue Service (IRS) and the Department of the Treasury (Treasury) faced difficulties delivering payments to some individuals, and made some payments to ineligible individuals, such as decedents. GAO recommends that IRS consider cost-effective options for notifying ineligible recipients how to return payments. IRS agreed.
Unemployment Insurance (UI). The program could have an unintentional overlap with benefits provided under PPP. GAO recommends that the Department of Labor (DOL) immediately provide help to state unemployment agencies that specifically addresses PPP loans and the risk of improper payments associated with these loans. DOL is planning additional guidance.
Aviation-preparedness plan. In 2015, GAO recommended that the Department of Transportation (DOT) work with federal partners to develop a national aviation-preparedness plan for communicable disease outbreaks. Thus far, no plan exists. GAO recommends Congress require DOT to produce a plan.
Full access to death data. It is important to consistently use safeguards when providing assistance to individuals. Treasury and its Bureau of the Fiscal Service, which distribute payments, do not have access to the Social Security Administration’s full set of death records. GAO recommends that Congress give Treasury that access and require that Treasury consistently use it.
Medicaid. GAO previously found that during economic downturns, the Federal Medical Assistance Percentage (FMAP) formula does not reflect current state economic conditions. GAO recommends that Congress use a formula, such as the one GAO previously developed, to provide timely and targeted assistance during the current or any future economic downturn.
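For reference, the regular statutory FMAP is computed from a state's per capita income relative to the national average. The sketch below shows that general form and how a temporary percentage-point increase is layered on top; the 6.2-percentage-point COVID-19 increase shown is a detail not stated in this testimony, and GAO's proposed countercyclical formula is not detailed here, so it is not reproduced.

```python
# General form of the regular statutory FMAP (Social Security Act sec. 1905(b)):
# a state's matching rate falls as its per capita income rises relative to the
# U.S. average, bounded by a 50 percent floor and an 83 percent ceiling.
# Illustrative only; it does not reproduce GAO's proposed countercyclical formula.

def regular_fmap(state_pci: float, us_pci: float) -> float:
    fmap = 1.0 - 0.45 * (state_pci / us_pci) ** 2
    return min(max(fmap, 0.50), 0.83)

def covid_fmap(state_pci: float, us_pci: float, temporary_bump: float = 0.062) -> float:
    # Assumes the temporary 6.2-percentage-point increase enacted for COVID-19
    # is simply added to the state's regular FMAP.
    return regular_fmap(state_pci, us_pci) + temporary_bump

# Example: a state with per capita income 10 percent below the U.S. average.
print(f"Regular FMAP: {regular_fmap(0.9, 1.0):.1%}")          # about 63.5%
print(f"With temporary increase: {covid_fmap(0.9, 1.0):.1%}")  # about 69.7%
```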
What GAO Recommends
In the report, GAO makes three new recommendations for agencies and three matters for consideration for Congress that address these issues.
gao_GAO-19-456T | gao_GAO-19-456T_0 | <1. F-35 Modernization, Reliability, and Sustainment and Supply Chain Efforts Face Risks and Challenges> The F-35 plays a key role in DOD s modernization efforts. However, it faces concerns in several areas that will inform the program s cost and performance in the future. These include the risk in its modernization efforts, its aircraft not meeting all reliability targets, and sustainment and supply chain challenges. Specifically, the F-35 program plans to award Block 4 development contracts before it has key business case documents that would normally inform this decision. Also, the program is not meeting all of its Reliability and Maintainability (R&M) targets. Finally, the F-35 program s sustainment costs are rising as it also faces significant supply chain challenges. <1.1. The F-35 Program Will Start Block 4 Development without a Full Business Case> The F-35 baseline aircraft program completed development in April 2018. It started formal operational testing of the baseline aircraft in December 2018 after a 3-month delay. This testing was delayed for two main reasons: (1) to resolve critical deficiencies identified in developmental testing, and (2) to accommodate an unexpected grounding following the crash of an F-35B in September 2018. According to a test official, the program expects to complete testing in December 2019, about 3 months later than planned due to delays with the simulator that is used for more complex testing. Until that testing is complete, there is still a risk that additional deficiencies may be identified. With the program wrapping up development of the baseline program, it is transitioning to early development and testing activities for the Block 4 modernization efforts, which the F-35 Joint Program Office estimates will cost about $10.5 billion. With Block 4, DOD plans to add new capabilities and modernize the F-35 aircraft to address evolving threats. In April 2019, we found that DOD will not have a complete business case for Block 4 before it plans to award development contracts in 2019. Section 224 of the National Defense Authorization Act for Fiscal Year 2017 required DOD to submit a report containing certain elements of an acquisition program baseline in essence, a business case to include cost, schedule, and performance information and independent estimates for Block 4. In 2018, we found that DOD s report to Congress was incomplete. In its report, DOD stated that the acquisition program baseline would continue to be refined over the next year. DOD officials stated that the updated F-35 program baseline, with the Block 4 efforts included, will be released in April 2019. Over the past year, the program has already invested over $1.4 billion, in part to gain the knowledge it needs to develop that business case, such as a preliminary design review, as well as to establish Block 4 testing facilities and support early capabilities development. The program incorporated some Block 4 activities into its acquisition strategy, which was approved in October 2018. However, we found that three key Block 4 business case documents will not be ready before the program s planned development contract awards in May 2019: Independent Technology Readiness Assessment: A Technology Readiness Assessment is a systematic, evidence-based process that evaluates the maturity of hardware and software technologies critical to the performance of a larger system or the fulfillment of the key objectives of an acquisition program. 
The program office plans to conduct a partial independent assessment of initial capabilities sometime between October and December 2019 with additional assessments to follow. A program official stated that technologies will not be integrated into the aircraft until they are adequately mature. However, without a complete independent Technology Readiness Assessment, the program will not have identified potential critical technology elements and, as a result, may be at risk of delaying the delivery of new capabilities. Test and Evaluation Master Plan: Although the F-35 program has begun testing Block 4 capabilities, it does not have an approved Test and Evaluation Master Plan. The Test and Evaluation Master Plan documents the overall structure, strategy, and objectives, as well as the associated resources needed for execution. Developmental and operational test officials have expressed concerns about the lack of an approved test plan, uncertain funding, the number of test aircraft available, and the draft test schedule, among other things. An approved, properly resourced test plan is essential for planning and preparing for adequate testing of the Block 4 capabilities. According to these officials, without an approved plan, the F-35 program is providing the test authorities with capabilities to be tested without giving them the necessary direction on how to adequately prepare to conduct the tests, making it difficult to execute testing. While this is still a concern, F-35 program officials explained that over the past 3 months they have been providing the test authorities with the direction needed to conduct testing. Independent Cost Estimate: The Block 4 Independent Cost Estimate, which details the program s total estimated life cycle cost, is not complete. In August 2017, we reported that DOD estimated the development funding needed for the first phase of Block 4 was projected to be over $3.9 billion through 2022. Since then, the program incorporated more fidelity and specific Block 4 efforts that were not in the original estimate into its Block 4 cost estimate. Based on the program office s latest estimate, the cost of Block 4 capabilities is expected to be $10.5 billion through 2024. According to OSD s Cost Assessment and Program Evaluation office, it will provide the Independent Cost Estimate between October and December 2019 to support the F-35 program s pending full-rate production decision, but this would occur several months after the program plans to award the Block 4 development contracts. According to the GAO Cost Guide, an Independent Cost Estimate is considered one of the best and most reliable estimate validation methods as it provides an independent view of expected program costs that tests the program office s estimate for reasonableness. Without an Independent Cost Estimate, Congress does not have insight into the full potential cost of Block 4. The expected completion dates for these documents are between October and December 2019, at the earliest. Figure 1 shows key Block 4 dates such as the Block 4 re-plan, which included revising the cost estimate for Block 4 that DOD established in 2017, the planned development contract awards, and planned completion dates for the three remaining critical business case documents. As seen in figure 1, the program office plans to award Block 4 development contracts in May 2019, at least five months before any of the critical business case documents will be available. 
Based on best practices identified by GAO, without an independent Technology Readiness Assessment, Test and Evaluation Master Plan, or an Independent Cost Estimate, program officials cannot have a high level of confidence that the requirements are firm and that risk has been adequately reduced before beginning efforts estimated to cost $10.5 billion in funding to develop Block 4. If program officials move ahead with Block 4 contracts without gaining the knowledge that a full business case would provide, Block 4 modernization efforts will be at risk of experiencing the same kind of cost and schedule growth the baseline development program experienced. To address this risk, in April 2019, we recommended to the DOD that it should ensure the F-35 program office complete its business case, to include the three documents discussed above, at least for the initial Block 4 capabilities under development before initiating additional development work. DOD did not concur with this recommendation. In its comments, DOD stated that the F-35 program office has adequate knowledge to begin Block 4 development. We maintain, however, that completing its business case before awarding its Block 4 development contracts would put DOD and the program in a better position to effectively and successfully develop Block 4 capabilities. <1.2. The F-35 Program Is Still Not Meeting All Reliability and Maintainability Targets> As we reported in April 2019, the program has made slow, consistent progress in improving the F-35 s R&M metrics performance but half of the metrics are not achieving targets. All F-35 variants are generally performing near or above targets for four of the eight R&M metrics, while still falling short for the other four. Each F-35 aircraft variant is measured against eight R&M metrics, four of which are in part of the contract. All eight R&M metrics are described in the program s Operational Requirements Document (ORD) the document that outlines the targeted performance levels for these metrics that DOD and the military services agreed the F-35 should meet in 2000. Based on our analysis, while the program is on track to meet half of the targets, the program office has not taken adequate steps to ensure the others will be met. Additionally, in December 2018, the Director, Operational Test & Evaluation reported that, although performance for the four under-performing metrics has shown slow growth over the years, none of these metrics are meeting interim goals needed to reach requirements at each variant s maturity. Each F-35 variants R&M performance against these metrics is shown in table 1. Since the program began tracking R&M performance in 2009, it has seen small, annual improvements. Over the past year, all variants showed a slight improvement in targeted performance levels for one metric, the mean flight hours between failure design controlled, but saw little or no discernable improvement for the four metrics not meeting targets. However, based on current performance, the program does not expect to meet those targets by full aircraft maturity. According to F-35 program officials, the ORD R&M metrics should be re-evaluated to determine more realistic R&M performance metrics, but the program has not yet taken actions to do so. Until the program office does so, it remains accountable for ensuring those ORD R&M metrics are achieved. 
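The statement does not define how each of the eight R&M metrics is calculated, but metrics of the "mean flight hours between failure" type generally take the form of accumulated operating hours divided by the number of qualifying failure events. The sketch below is a generic illustration of that form with made-up numbers; it is not the program's actual metric definition or data.

```python
# Generic form of a "mean flight hours between failure" style reliability metric:
# accumulated flight hours divided by the number of qualifying failure events.
# Hypothetical numbers for illustration only -- not actual F-35 program data.

def mean_flight_hours_between_failure(flight_hours: float, failures: int) -> float:
    if failures == 0:
        return float("inf")  # no qualifying failures observed in the period
    return flight_hours / failures

observed = mean_flight_hours_between_failure(flight_hours=12_000, failures=150)  # 80 hours
target = 90.0  # hypothetical ORD target for this variant and metric

print(f"Observed: {observed:.0f} hours; target: {target:.0f} hours; "
      f"meets target: {observed >= target}")
```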
In June 2018, we recommended that the F-35 program identify steps it needs to take to ensure the F-35 aircraft meet R&M requirements before each variant reaches maturity and update its R&M Improvement Program (RMIP) DOD s action plan for improving R&M with these steps. DOD concurred with our recommendation but has yet to take substantive actions to address it. DOD did, however, complete 16 improvement projects since we last reported on this. Despite completing these projects, there were not significant gains in the R&M metrics not meeting targets. Program officials advised, however, that measurable improvements in R&M can take time to manifest. To speed this process, the program is accelerating planned upgrades to older aircraft where appropriate, which officials stated should translate to an overall improvement in the program s R&M performance. <1.3. The F-35 Program Office s Improvement Plan Does Not Address Under- Performing Targets> The F-35 program office has estimated that implementing all of the identified improvement projects currently contained in its RMIP could result in potential life cycle cost savings of over $9.2 billion by improving the F-35 s R&M. However, we found that, as of December 2018, the guidance the F-35 program office has used to implement the RMIP does not define specific, measurable objectives for what the desired goals for the F-35 s R&M performance should be or align improvement projects with R&M goals. Furthermore, the RMIP has not been a funding priority. Federal internal control standards state that programs should define objectives when implementing programs such as the RMIP. Although the F-35 program RMIP s guidance has a general goal of improving R&M, it does not identify achieving the eight R&M targets listed in the ORD as an objective. Program officials acknowledged that the RMIP s guidance does not include such an objective. Instead, officials stated they are using the RMIP to prioritize and fund projects that will improve aircraft availability and mission capability neither of which are included in the eight R&M metrics, but are necessary and important initiatives. The program is focusing on these two areas in part because a September 2018 memorandum from the Secretary of Defense to the Secretaries of the military departments included a goal for the F-35 fleet to attain a mission capable rate of 80 percent by the end of fiscal year 2019. According to program officials, improving these two areas will translate into improvements in the F-35 overall R&M. However, we found that the RMIP s guidance does not discuss these priorities or align how any improvement projects would ensure targets under all eight R&M targets will be met. In our prior work on weapon system acquisitions, we have identified a number of best practices for improving program outcomes if implemented, such as clearly establishing well-defined requirements and securing stable funding that matches resources to requirements. We found that the program office has not prioritized or dedicated funding in its budget to improve R&M, in part because program officials explained that they were focused on initiatives intended to lower the cost of the aircraft. In addition, any current funding for R&M improvement projects comes from the program s operation and maintenance funds, which are only available for one fiscal year. Officials explained that, if the funding runs out or is used by the program for other efforts, then R&M projects will go unfunded or be suspended until new funding is available. 
In fiscal year 2018, for example, while some projects were completed, several other projects were suspended when that year’s funding ran out. As of December 2018, according to a contractor representative, all of the identified improvement projects currently unfunded in the program’s RMIP would cost about $30 million to implement, but were not funded. Program officials also stated that they are in the process of revising the RMIP and have considered including more specific objectives in addition to improving aircraft availability and mission capability, such as more focus on improving R&M performance where ORD R&M targets are not currently being met. According to the program, any revisions to the RMIP and changes to how it will be funded, however, will not be complete until April 2019. By not defining objectives in its RMIP guidance for meeting all eight R&M metrics, aligning which improvement projects will ensure those metrics are met, and prioritizing funding for those projects, the program is at risk of not fully meeting its R&M targets. As a result, the warfighter may accept aircraft that are less reliable than originally planned, and whose operation and sustainment costs may raise affordability questions. In addition, the military services recently identified the need to cut sustainment costs (by 43 percent in the case of the Air Force) to improve the F-35’s affordability in sustainment. Increasing costs from less reliable aircraft will add strain to an already unaffordable program. To address these issues, in April 2019, we recommended to DOD that it should ensure that the F-35 program office 1. assess whether the ORD R&M targets are still feasible and revise the targets accordingly; 2. as it revises its RMIP, identify specific and measurable R&M objectives in its RMIP guidance; 3. as it revises its RMIP, identify and document which RMIP projects will achieve the identified objectives of the RMIP guidance; and 4. prioritize funding for the RMIP. DOD concurred with these recommendations and stated that it will take actions to address them. <1.4. Continued Concerns with F-35 Sustainment Costs, Supply Chain, and Logistics System> We have previously reported on the F-35 program’s rising estimated sustainment costs and challenges maintaining an expanding fleet. In October 2017, we reported that estimated F-35 life-cycle sustainment costs increased by 24 percent from fiscal years 2012 through 2016 due to an increase in projected flying hours and other factors. We also reported that sustainment costs were not fully transparent to the military services. For example, the Marine Corps received an initial funding requirement for fiscal year 2017 sustainment of $293 million, which then increased to $364 million in execution without a full explanation from the program office. We recommended that DOD take steps to improve communication with the services and provide more information about how the F-35 sustainment costs they are being charged relate to the capabilities received. DOD concurred with the recommendation and has begun taking actions to address it.
Specifically, F-35A aircraft were mission capable only 52 percent of the time from May through November 2018 far short of the 80 percent target set by the former Secretary of Defense. This lower-than-desired aircraft performance is due largely to F-35 spare parts shortages and limited part repair capabilities. For example, during this time period, F-35 aircraft were unable to fly about 30 percent of the time due to spare parts shortages. Additionally, DOD s capabilities to repair F-35 spare parts at its depots are years behind schedule, which has resulted in an average of 188 days to repair an F-35 part and a backlog of about 4,300 spare parts awaiting repair at military depots or manufacturers. We also reported that DOD faces challenges managing, moving and maintaining accountability of F- 35 parts within the supply chain. We made eight recommendations to DOD, including that DOD determine what actions are needed to close the gap between warfighter requirements for aircraft performance and F-35 supply chain capabilities. DOD concurred with the recommendations and identified actions that it was taking or planned in response. Finally, the F-35 s Autonomic Logistics Information System (ALIS) has the potential to lead to increased costs for the program if key issues are not addressed. ALIS is the F-35 s central logistics system intended to support operations, mission planning, supply-chain management, maintenance, and other processes. In April 2016, we identified several risks, including that ALIS (1) was not initially designed to be deployable, (2) lacked redundant infrastructure, (3) did not communicate well with legacy aircraft systems, (4) had data accuracy and accessibility issues, and (5) had security risks. In addition, DOD had not included certain analyses and information, such as historical cost data, to increase the credibility and accuracy of ALIS s estimated costs. Further, a 2013 DOD-commissioned study found that schedule slippage and functionality problems with ALIS could lead to between $20 billion and $100 billion in additional costs. We have made several recommendations to DOD to improve ALIS planning and cost estimates, and to develop a performance measurement process for ALIS to better address problems based on actual system performance and user requirements. DOD generally concurred with our recommendations and has taken some actions, including developing a plan that identifies and prioritizes key ALIS risks. However, more work remains. We are currently conducting a review examining DOD s progress in implementing our ALIS-related recommendations, addressing concerns from ALIS users, identifying emergent financial and operational risks associated with ALIS, taking near-term actions to improve ALIS functionality, and assessing DOD s actions regarding the long-term viability of ALIS to ensure capable sustainment of the F-35 fleet. We plan to issue a report based on our current work later in 2019. <2. Air Force s Advanced Battle Management System Acquisition Strategy Is in the Early Planning Stages> Based on our ongoing work, ABMS is early in the acquisition process, as the specific capabilities and overarching acquisition strategy are still to be determined by the Air Force. As a result, the Air Force has not yet established a cost and technical baseline for ABMS. 
When ABMS planning began in 2017, program officials stated that the intent of the program was to replace and modernize the capabilities of the AWACS system which provides the warfighter with the capability to detect, identify, and track airborne and maritime threats. But changes in Air Force expectations for how it would fight during future conflicts led the department to assess options for developing a more robust and survivable air, land, and sea battle management system that can operate in contested environments. In July 2018, the ABMS Initial Capabilities Document which describes capability needs and associated gaps was approved by the DOD Joint Requirements Oversight Council. Our ongoing work also found that, in December 2018, the Air Force determined it would not continue its planned JSTARS Recapitalization program which was intended to provide surveillance and information on moving ground targets well into the future, as initially expected. As a result of a recent study, the Air Force has extended the estimated service life of the JSTARS fleet, and will incorporate its capabilities into the ABMS in the short term, and retire JSTARS in the 2030s. Our preliminary observations indicate that the details about ABMS are still to be determined. The Air Force expects to fully define ABMS through an Analysis of Alternatives (AOA) that it plans to complete by the summer of 2019, as shown in figure 2. The ABMS AOA, led by the Air Force s Air Combat Command, will assess how ABMS will deliver air-centric capabilities, such as those currently provided by AWACS. Air Force officials explained that they plan to utilize an existing AOA completed for the JSTARS Recapitalization program, approved in May 2012, to identify and assess ABMS s potential ground target tracking capabilities. Originally planned as a 9-month study, Air Force officials stated that the ABMS AOA was shortened to a 6-month effort. As a result, the Air Force received conditional approval to reduce the number of alternatives studied from five to three. Our ongoing work indicates that the Air Force plans to develop ABMS over three phases. The first phase began in fiscal year 2018 and goes through 2023. In this phase, the Air Force plans to integrate existing sensors, improve battle management systems, and upgrade communication networks across 10 existing acquisition programs. Table 2 includes information on three existing programs the Air Force plans to enhance during the first phase of ABMS. According to an Air Force acquisition official, the technologies associated with the first phase are considered to be mature but there may be risks as the Air Force integrates technologies. Air Force officials explained that their approaches to the second and third phases of ABMS are not fully developed, but noted that the phases would be informed by the AOA results. That said, the Air Force expects to start phase 2 in 2024 by integrating advanced sensors and software into its existing battle management command and control platforms while at the same time retiring JSTARS. Air Force officials have reported that the third phase, planned for the mid-2030s, is expected to provide multi-sensor, resilient battle management command and control capability using multiple types of communications methods, with an initial operational capability planned for 2035. The Air Force estimates that ABMS s acquisition cost through fiscal year 2024 will be $3.8 billion. 
Because ABMS is composed of many different defense acquisition programs, the Air Force intends to manage it as a family of systems directed by a Chief Architect and not a traditional acquisition program manager. According to the Air Force, the ABMS Chief Architect is the first of its kind, and the Air Force believes the position will be instrumental in integrating the various programs and technologies into an overall system. Based on our preliminary analysis, the roles and responsibilities of the Chief Architect have not been fully defined. However, according to the Air Force, the Chief Architect is expected to be responsible for (1) leading a high-level analysis and determining the overall design of ABMS, (2) coordinating with the service-level commands and the acquisition programs involved to make sure they are aligned with the ABMS development, and (3) identifying the enabling technologies for integration into ABMS. Chairman Norcross, Ranking Member Hartzler, and members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions you may have. We look forward to continuing to work with the Congress as we continue to monitor and report on the progress of the F-35 program and the ABMS. <3. GAO Contact and Staff Acknowledgments> If you or your staff have any questions about this testimony, please contact Michael J. Sullivan at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement are Justin Jaynes (Assistant Director), Diana Maurer, Jennifer Baker, Desirée E. Cunningham, Alissa Czyz, Stephanie Gustafson, Kasea Hamar, Jeff Hubbard, Jessica Karnis, Matt Metz, Robin Wilson, and Lauren Wright. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
In 2018, the F-35 program began operational testing. Also in 2018, the Air Force continued planning for the acquisition of ABMS, intended to modernize how DOD maintains command and control over and manages the future battlefield. Both the F-35 and ABMS are expected to play key roles in DOD's modernization efforts.
This testimony statement discusses (1) the F-35 program's development and modernization efforts, and progress in improving the aircraft's R&M and (2) DOD's current planning efforts for ABMS. This statement is based on two GAO reports on the F-35 published in April 2019 and on GAO's ongoing work examining ABMS. To conduct this work, GAO analyzed DOD management reports; discussed the efforts with program and contractor officials; and compared both efforts to DOD policy and GAO acquisition best practices.
What GAO Found
The Department of Defense (DOD) wrapped up the F-35 development program in April 2018 and expects to complete operational testing in December 2019. DOD has turned its attention to modernization efforts—referred to as Block 4—to add new capabilities to address evolving threats. The program office estimates Block 4 to cost at least $10.5 billion through 2024. DOD plans to start Block 4 development without a complete business case identifying baseline cost and schedule estimates. Key documents for establishing a business case, such as an independent cost estimate, will not be ready before the program plans to award Block 4 development contracts in May 2019 (see figure).
Without a business case—consistent with acquisition best practices—program officials cannot be confident that the risk of committing to development has been reduced adequately prior to planned contract awards.
The program made slow, sustained progress in improving the F-35's reliability and maintainability (R&M). F-35 aircraft are assessed against eight R&M metrics, which inform how much time the aircraft will be in maintenance rather than operations. Half of these metrics are not meeting targets. While the program office has a plan for improving R&M, its guidance is not in line with GAO's acquisition best practices or internal control standards as it does not include specific, measurable objectives, align improvement projects to meet those objectives, and prioritize funding to match resources to R&M requirements. If the R&M requirements are not met, the warfighter will have to settle for a less reliable and more costly aircraft than originally planned. This contributes to the F-35's $1.12 trillion estimated sustainment costs and challenges with maintaining an expanding fleet that also has supply chain and logistics system problems.
GAO's ongoing work indicates that the Air Force's Advanced Battle Management System (ABMS)—intended to provide battle management command and control and surveillance across air, land, and sea—is in the early stages of planning. The capabilities and the strategy to deliver those capabilities are still to be determined. The Air Force plans to manage ABMS as a family of systems, integrating sensors from existing and future weapons programs, and overseen by a Chief Architect—whose role is still to be determined. The Air Force expects to further define ABMS after analyzing different options for delivering the capability. That analysis is expected to be complete in summer 2019.
What GAO Recommends
In April 2019, GAO recommended that the F-35 program office complete its Block 4 business case before making more contract awards. DOD did not concur, citing that it has adequate knowledge to begin Block 4 development. GAO maintains that completing its business case before awarding its Block 4 development contracts would put DOD and the program in a better position to successfully develop Block 4 capabilities. GAO also recommended that DOD take action to improve its R&M performance. DOD concurred and noted the actions it would take.
gao_GAO-19-471 | gao_GAO-19-471_0 | <1. Background> Historically, the federal government has had difficulties acquiring, developing, and managing IT investments. Further, federal agencies have struggled with appropriately planning and budgeting for modernizing legacy systems; upgrading underlying infrastructure; and investing in high quality, lower cost service delivery technology. The consequences of not updating legacy systems has contributed to, among other things, security risks, unmet mission needs, staffing issues, and increased costs. Security risks. Legacy systems may operate with known security vulnerabilities that are either technically difficult or prohibitively expensive to address. In some cases, vendors no longer provide support for hardware or software, creating security vulnerabilities and additional costs. For example, in November 2017, the Department of Education s (Education) Inspector General identified security weaknesses that included the department s use of unsupported operating systems, databases, and applications. By using unsupported software, the department put its sensitive information at risk, including the personal records and financial information of millions of federal student aid applicants. Unmet mission needs. Legacy systems may not be able to reliably meet mission needs because they are outdated or obsolete. For instance, in 2016, the Department of State s (State) Inspector General reported on the unreliability of the Bureau of Consular Affairs legacy systems. Specifically, during the summers of 2014 and 2015, outages in the legacy systems slowed and, at times, stopped the processing of routine consular services such as visa processing. For example, in June 2015, system outages caused by a hardware failure halted visa processing for 13 days, creating a backlog of 650,000 visas. Staffing issues. In order to operate and maintain legacy systems, staff may need experience with older technology and programming languages, such as the Common Business Oriented Language (COBOL). Agencies have had difficulty finding employees with such knowledge and may have to pay a premium to hire specialized staff or contractors. For example, we reported in May 2016 that the Social Security Administration (SSA) had to rehire retired employees to maintain its COBOL systems. Further, having a shortage of expert personnel available to maintain a critical system creates significant risk to an agency s mission. For instance, we reported in June 2018 that the Internal Revenue Service (IRS) was experiencing shortages of staff with the skills to support key tax processing systems that used legacy programming languages. These staff shortages not only posed risks to the operation of the key tax processing systems, but they also hindered the agency s efforts to modernize its core tax processing system. Increased costs. The cost of operating and maintaining legacy systems increases over time. The issue of cost is linked to the three previously described consequences either because the other issues directly raise costs or, as in the case of not meeting mission needs, the agency is not receiving a favorable return on investment. Further, in an era of constrained budgets, the high costs of maintaining legacy systems could limit agencies ability to modernize and develop new or replacement systems. During the course of our review, agencies reported that they consider several factors prior to deciding whether to modernize a legacy system. 
In particular, agencies evaluate factors, such as the inherent risks, the criticality of the system, the associated costs, and the system s operational performance. Risks. Agencies consider the risks associated with maintaining the legacy system as well as modernizing the legacy system. For instance, agencies may prioritize the modernization of legacy systems that have security vulnerabilities or software that is unsupported by the vendor. However, limited system accessibility may also reduce the need to modernize a legacy system. For example, air-gapped systems, which are systems that are isolated from the internet, may mitigate a legacy system s cybersecurity risk by preventing remote hackers from having system access. Conversely, we have also reported that air-gapped systems are not necessarily secure: they could potentially be accessed by other means than the internet, such as through Universal Serial Bus devices. Even so, removing the threat of remote access is a mitigation technique used by agencies such as the Nuclear Regulatory Commission (NRC). According to NRC, the agency reduced the riskiness of using computers with unsupported operating systems by putting these computers on isolated networks or by disconnecting them from networks entirely. Criticality. Agencies consider how critical the system is to the agency s mission. Several agencies stated that they would consider how essential a legacy system is to their agencies missions before deciding to modernize it. For example, the Department of Health and Human Services (HHS) stated that, when deciding to modernize a legacy system, it considers the degree to which core mission functions of the agency or other agencies are dependent on the system. Similarly, Department of Energy (Energy) officials noted that the department is required to maintain several legacy systems associated with the storage of its nuclear waste. Costs. Agencies consider the costs of maintaining a legacy system and modernizing the system. For example, according to the Department of Veterans Affairs (VA), there are systems for which a life-cycle cost analysis of the legacy system may show that the cost to modernize exceeds the projected costs to maintain the system. Similarly, the Department of Defense (DOD) noted that, before deciding on a modernization solution, it is important to assess the costs of the transition to a new or replacement solution. An agency also may decide to modernize a system when there is potential for cost savings to be realized with a modernization effort. For example, HHS stated that it may pursue the modernization of a legacy system if the department anticipates reductions in operations and maintenance costs due to efficiencies gained through the modernization. Performance. Before making the decision to modernize, agencies consider the legacy system s operational performance. Specifically, if the legacy system is performing poorly, the agency may decide to modernize it. For example, the Department of Transportation (Transportation) stated that, if a legacy system is no longer functioning properly, it should be modernized. In addition, HHS noted that the ability to improve the functionality of the legacy system could be a reason to modernize it. <1.1. GAO Has Reported on the Need to Improve Oversight of Legacy IT> As previously mentioned, in May 2016, we reported that federal legacy IT investments were becoming increasingly obsolete. 
In this regard, agencies had reported operating systems that used outdated languages and old parts, which were difficult to replace. Further, we noted that each of the 12 selected agencies had reported using unsupported operating systems and components, which could create security vulnerabilities and additional costs. At the time, five of the selected agencies reported using 1980s and 1990s Microsoft operating systems that stopped being supported by the vendor more than a decade ago. We concluded that agencies were, in part, maintaining obsolete investments because they were not required to identify, evaluate, and prioritize investments to determine whether the investments should be kept as-is, modernized, replaced, or retired. We pointed out that the Office of Management and Budget (OMB) had created draft guidance that would require agencies to do so, but OMB had not committed to a firm time frame for when the guidance would be issued. As such, we made 16 recommendations to OMB and the selected federal agencies to better manage legacy systems and investments. Most agencies agreed with the recommendations or had no comment. However, as of May 2019, 13 recommendations had not been implemented. In particular, OMB has not finalized and issued its draft guidance on legacy systems. Until this guidance is finalized and issued, the federal government will continue to run the risk of maintaining investments that have outlived their effectiveness and are increasingly difficult to protect from cybersecurity vulnerabilities. <1.2. Congress and the Executive Branch Have Made Efforts to Modernize Federal IT> Congress and the executive branch have initiated several efforts to modernize federal IT, including: Identification of High Value Assets. In a December 2016 memorandum, OMB observed that continued increases in computing power combined with declining computing and storage costs and increased network connectivity had expanded the government s capacity to store and process data. However, OMB noted that this rise in technology and interconnectivity also meant that the federal government s critical networks, systems, and data were more exposed to cyber risks. As a result, OMB issued guidance to assist federal agencies covered by the Chief Financial Officers Act in managing the risks to these assets, which it designated as High Value Assets. Subsequently, in December 2018, OMB issued a memorandum that provided further guidance regarding the establishment and enhancement of the High Value Asset program. It stated that the program is to be operated by DHS in coordination with OMB. Further, the new guidance expanded the program to apply to all agencies (i.e., agencies covered by the Chief Financial Officers Act, as well as those not covered by the act) and expanded the definition of High Value Assets. The guidance required agencies to identify and report these assets (which may include legacy systems), assess them for security risks, and remediate any weaknesses identified, including those associated with obsolete or unsupported technology. Assessment of federal IT modernization. On May 11, 2017, the President signed Executive Order 13800, Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure. This executive order outlined actions to enhance cybersecurity across federal agencies and critical infrastructure to improve the nation s cyber posture and capabilities against cybersecurity threats. 
Among other things, the order tasked the Director of the American Technology Council to coordinate a report to the President from the Secretary of DHS, the Director of OMB, and the Administrator of the General Services Administration (GSA), in consultation with the Secretary of Commerce, regarding modernizing federal IT. As a result, the Report to the President on Federal IT Modernization was issued on December 13, 2017, and outlined the current and envisioned state of federal IT. The report focused on modernization efforts to improve the security posture of federal IT and recognized that agencies have attempted to modernize systems but have been stymied by a variety of factors, including resource prioritization, ability to procure services quickly, and technical issues. The report provided multiple recommendations intended to address these issues through the modernization and consolidation of networks and the use of shared services. In particular, the report recommended that the federal government prioritize the modernization of legacy IT by focusing on enhancing security and privacy controls for those assets that are essential for agencies to serve the American people and whose security posture is most vulnerable (i.e., High Value Assets). Enactment of the Modernizing Government Technology (MGT) Act. To help further agencies efforts to modernize IT, in December 2017, Congress and the President enacted a law to authorize the availability of funding mechanisms to improve, retire, or replace existing IT systems to enhance cybersecurity and to improve efficiency and effectiveness. The law, known as the MGT Act, authorizes agencies to establish working capital funds for use in transitioning from legacy systems, as well as for addressing evolving threats to information security. The law also created the Technology Modernization Fund, within the Department of the Treasury (Treasury), from which agencies can borrow money to retire and replace legacy systems, as well as acquire or develop systems. Subsequently, in February 2018, OMB issued guidance for agencies to implement the MGT Act. The guidance was intended to provide agencies additional information regarding the Technology Modernization Fund, and the administration and funding of the related IT working capital funds. Specifically, the guidance allowed agencies to begin submitting initial project proposals for modernization on February 27, 2018. In addition, in accordance with the MGT Act, the guidance provides details regarding a Technology Modernization Board, which is to consist of (1) the Federal CIO; (2) a senior official with IT technical expertise from GSA; (3) a member of DHS s National Protection and Program Directorate; and (4) four federal employees with technical expertise in IT development, financial management, cybersecurity and privacy, and acquisition, appointed by the Director of OMB. As of February 2019, the Technology Management Fund Board had approved funds for seven IT modernization projects across five agencies: the Department of Agriculture, Energy, the Department of Housing and Urban Development (HUD), the Department of Labor, and GSA. For example, the board approved $20 million for HUD to modernize a mainframe and five COBOL-based applications that are expensive to maintain. According to the board s website, without these funds, HUD would not have been able to pursue this project for several years. Issuance of the President s Management Agenda. 
In March 2018, the Administration issued the President s Management Agenda, which lays out a long-term vision for modernizing the federal government. The agenda identifies three related drivers of transformation IT modernization; data, accountability, and transparency; and the workforce of the future that are intended to push change across the federal government. The President s Management Agenda identifies 14 related Cross- Agency Priority goals, many of which have elements that involve IT. In particular, the Cross-Agency Priority goal on IT modernization states that modern technology must function as the backbone of how government serves the public in the digital age. Further, the goal on IT modernization provides three priorities that are to guide the Administration s efforts to modernize federal IT: (1) enhancing mission effectiveness by improving the quality and efficiency of critical services, including the increased utilization of cloud-based solutions; (2) reducing cybersecurity risks to the federal mission by leveraging current commercial capabilities and implementing cutting edge cybersecurity capabilities; and (3) building a modern IT workforce by recruiting, reskilling, and retaining professionals able to help drive modernization with up-to-date technology. <2. GAO Identified 10 Critical Federal Legacy Systems; Agencies Often Lack Complete Plans for Their Modernization> As determined by our review of 65 critical federal legacy systems (see appendix II), the 10 most critical legacy systems in need of modernization are maintained by 10 different federal agencies whose missions are essential to government operations, such as emergency management, health care, and wartime readiness. These legacy systems provide vital support to the agencies missions. According to the agencies, these legacy systems range from about 8 to 51 years old and, collectively, cost approximately $337 million annually to operate and maintain. Several of the systems use older languages, such as COBOL and assembly language code. However, as we reported in June 2018, reliance on assembly language code and COBOL has risks, such as a rise in procurement and operating costs, and a decrease in the availability of individuals with the proper skill sets. Further, several of these legacy systems are also operating with known security vulnerabilities and unsupported hardware and software. For example, DHS s Federal Emergency Management Agency performed a security assessment on its selected legacy system in September 2018. This review found 249 reported vulnerabilities, of which 168 were considered high or critical risk to the network. With regard to unsupported hardware and software, Interior s system contains obsolete hardware that is not supported by the manufacturers. Moreover, the system s original hardware and software installation did not include any long-term vendor support. Thus, any original components that remain operational may have had long-term exposure to security and performance weaknesses. Table 1 provides a generalized list of each of the 10 most critical legacy systems that we identified, as well as agency-reported system attributes, including the system s age, hardware s age, system criticality, and security risk. (Due to sensitivity concerns, we substituted a numeric identifier for the system names and are not providing detailed descriptions). Appendix III provides additional generalized agency- reported details on each of these 10 legacy systems. <2.1. 
The Majority of Agencies Lack Complete Plans for Modernizing the Most Critical Legacy Systems> Given the age of the hardware and software in legacy systems, the systems' criticality to agency missions, and the security risks posed by operating aging systems, it is imperative that agencies carefully plan for their successful modernization. Documenting modernization plans in sufficient detail increases the likelihood that modernization initiatives will succeed. According to our review of government and industry best practices for the modernization of federal IT, agencies should have documented modernization plans for legacy systems that, at a minimum, include three key elements: (1) milestones to complete the modernization, (2) a description of the work necessary to modernize the legacy system, and (3) details regarding the disposition of the legacy system. Of the 10 identified agencies with critical systems most in need of modernization, seven (DOD, DHS, Interior, Treasury, the Office of Personnel Management (OPM), the Small Business Administration (SBA), and SSA) had documented modernization plans for their respective critical legacy systems, and three did not have documented plans. The three agencies that did not have documented modernization plans for their critical legacy systems were (1) Education, (2) HHS, and (3) Transportation. Of the seven agencies with documented plans, DOD and Interior had modernization plans that addressed each of the three key elements. For example, Interior submitted documentation of both completed and forthcoming milestones leading to the deployment of the modernized system. The department also provided a list of the mandatory requirements for the updated system, as well as the work that needed to be performed at each stage of the project, including the disposition of the legacy system. Likewise, DOD provided documentation of the milestones and the work needed to complete the modernization of its legacy system. In addition, the documentation discussed the department's plans for the disposition of the legacy system. While the other five agencies (Treasury, DHS, OPM, SBA, and SSA) had developed modernization plans for their respective legacy systems, their plans did not fully address one or more of the three key elements. For instance, DHS's Federal Emergency Management Agency's modernization plan for its selected legacy system described the work that the department needed to accomplish, but did not include the associated milestones or the disposition of the legacy system. Similarly, SBA included milestones and a plan for the disposition of the legacy system, but did not include a description of the work necessary to accomplish the modernization. Treasury, OPM, and SSA partially included one or more of the key elements in their modernization plans. For instance, OPM's and SSA's plans included upcoming milestones for one part of the initiative, but not the entire effort. Similarly, OPM's modernization plans only described a portion of the work necessary to complete each modernization initiative. Further, none of these four agencies' (DHS, Treasury, OPM, and SSA) modernization plans included considerations for the disposition of legacy system components following the completion of the modernization initiatives. While agencies may be using development practices that minimize initial planning, such as agile, agencies should have high-level information on cost, scope, and timing.
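To illustrate how a documented plan can be compared against these three key elements, the following sketch shows one possible way to record and rate the evidence for each element. This is a hypothetical illustration only: the data structure, field names, and rating labels are assumptions made for the example and do not represent an actual GAO evaluation tool or any agency's plan.

```python
# Hypothetical sketch: rating a documented modernization plan against the
# three key elements (milestones, description of work, legacy disposition).
# The plan contents below are invented for illustration.
from dataclasses import dataclass

@dataclass
class ElementEvidence:
    present: bool             # the element is addressed somewhere in the plan
    covers_full_effort: bool  # coverage extends to the entire initiative

def rate_element(evidence: ElementEvidence) -> str:
    """Return 'included', 'partially included', or 'not included'."""
    if not evidence.present:
        return "not included"
    return "included" if evidence.covers_full_effort else "partially included"

# Example: a plan with milestones for only one phase, a full work description,
# and no disposition details (values are illustrative, not agency data).
plan_evidence = {
    "milestones": ElementEvidence(present=True, covers_full_effort=False),
    "work_description": ElementEvidence(present=True, covers_full_effort=True),
    "legacy_disposition": ElementEvidence(present=False, covers_full_effort=False),
}

for element, evidence in plan_evidence.items():
    print(f"{element}: {rate_element(evidence)}")
```

In this illustration, a plan that addresses an element for only part of the initiative receives a partial rating, mirroring the partial ratings described in appendix I.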
Table 2 identifies the seven agencies with documented modernization plans for their critical systems, as well as the extent to which the plans were sufficiently detailed to include the three key elements. (Due to sensitivity concerns, we substituted a numeric identifier for the system names.) The agencies provided a variety of explanations for the missing modernization plans. For example, according to the three agencies without documented modernization plans: Education's modernization plans were pending the results of a comprehensive IT visualization and engineering project that would determine which IT systems and services could be feasibly modernized, consolidated, or eliminated; HHS had entered into a contract to begin a modernization initiative but had not yet completed its plans; and Transportation had solicited information from industry to determine whether the agency's ideas for modernization were feasible. Of the five agencies whose plans lacked key elements, officials within SSA's Office of the CIO stated that the agency has yet to complete its modernization planning, even though modernization efforts are currently underway. The officials said that they will update the planning documentation and make further decisions as the modernization effort progresses. Officials within DHS's Federal Emergency Management Agency's Office of the CIO stated that the agency's plans for modernizing the system we reviewed (System 4) are contingent on receiving funding and being able to allocate staffing resources to planning activities. According to the officials, the agency is also integrating its plans for modernizing System 4 with the management of the rest of the agency's systems. Similarly, Treasury officials stated that IRS's efforts to complete planning for the remaining modernization activities have been delayed due to budget constraints. In addition, officials within OPM's Office of the CIO stated that its modernization plan did not extend to fiscal year 2019 because there were changes in leadership during the creation of the plan, and because of uncertainty in funding amounts. While we recognize that system modernizations are dependent on funding, it is important for agencies to prioritize funding for the modernization of these critical legacy systems. In addition, Congress provided increased authority for agencies to fund such modernization efforts through the MGT Act's Technology Modernization Fund and the related IT working capital funds. Until the agencies establish complete legacy system modernization plans that include milestones, describe the work necessary to modernize the system, and detail the disposition of the legacy system, the agencies' modernization initiatives will have an increased likelihood of cost overruns, schedule delays, and overall project failure. Project failure would be particularly detrimental in these 10 cases, not only because of wasted resources, but also because it would prolong the lifespan of increasingly vulnerable and obsolete systems, exposing the agency and system clients to security threats and potentially significant performance issues. Further, agencies may not be effectively planning for the modernization of legacy systems, in part, because they are not required to. As we reported in May 2016, agencies are not required to identify, evaluate, and prioritize existing IT investments to determine whether they should be kept as-is, modernized, replaced, or retired.
We recommended that OMB direct agencies to identify legacy systems needing to be replaced or modernized. As of April 2019, OMB had not implemented this recommendation. OMB staff stated that agencies were directed to manage the risk to High Value Assets associated with legacy systems in OMB's December 2018 guidance. While OMB's guidance does direct agencies to identify, report, assess, and remediate issues associated with High Value Assets, it does not require agencies to do so for all legacy systems. Until OMB requires agencies to do so, the federal government will continue to run the risk of maintaining investments that have outlived their effectiveness. <3. Agencies Reported a Variety of IT Modernization Successes> The 24 Chief Financial Officers Act agencies in our review identified a total of 94 examples of successful modernizations of legacy systems undertaken in the last 5 years. The initiatives were of several types, including those aimed at transforming legacy code into a more modern programming language, migrating legacy services (e.g., email) to the cloud, and redesigning a legacy mainframe to a cloud-based application. Among these examples, the five that we selected reflect a mix of different agencies, types of system modernization initiatives, and types of benefits realized from the initiatives. Table 3 provides details on the five examples of successful IT modernization initiatives, as reported by their respective agencies, as well as the reported benefits related to those initiatives. The five agencies attributed the success of their modernization initiatives to various factors, including: using automated technologies to examine programming code and perform testing (DOD and Treasury); testing the system thoroughly (SSA and Treasury); actively engaging the end users and stakeholders throughout the modernization process (SSA and Treasury); cultivating a partnership between industry and government (DOD); following management practices on change and life cycle management (Education); developing and implementing an enterprise-wide cost collection and data analysis process for commodity IT to track and measure progress against consolidation, optimization, and savings targets (DHS); creating an interface that was consistent across systems (SSA); having strong executive leadership and support (Treasury); and using agile principles to facilitate the team's ownership of the project (Treasury). These factors are largely consistent with government and industry best practices. For example, we reported in 2011 on critical success factors associated with major acquisitions, including engaging stakeholders and having the support of senior executives. Similarly, OMB's guidance on High Value Assets calls for agencies' plans to address change management and life cycle management. Likewise, the Software Engineering Institute's Capability Maturity Model Integration for Development recommends that organizations engage stakeholders, practice effective change and life cycle management, and thoroughly test systems, among other practices. Further, our Information Technology Investment Management framework recommends involving end users, implementing change and life cycle management processes, and obtaining the support of executive leadership. Agencies that follow such practices are better positioned to modernize their legacy systems. Doing so will also allow the agencies to leverage IT to successfully address their missions. <4.
Conclusions> The 10 most critical federal legacy systems in need of modernization are becoming increasingly obsolete. Several agencies are using outdated computer languages, which can be difficult to maintain and can increase costs. Further, several of these legacy systems are also operating with unsupported hardware and software and known security vulnerabilities. Most agencies did not have complete plans to modernize these legacy systems. Due to the criticality of these systems and the possible cybersecurity risks posed by operating aging systems, having a plan that includes how and when the agency plans to modernize is vital. In the absence of such plans, the agencies increase the likelihood of cost overruns, schedule delays, and overall project failure. Such outcomes would be particularly detrimental because of the importance of these systems to agency missions. Successfully modernizing legacy systems is possible, as demonstrated by the five highlighted examples. Agencies attributed the success of their modernization initiatives to a variety of management and technical factors that were consistent with best practices. <5. Recommendations for Executive Action> In the LOUO report that we are issuing concurrently with this report, we are making a total of eight recommendations to eight federal agencies to identify and document modernization plans for their respective legacy systems, including milestones, a description of the work necessary, and details on the disposition of the legacy system. <6. Agency Comments and Our Evaluation> We requested comments on a draft of this report from OMB and the 24 agencies included in our review. The eight agencies to which we made recommendations in the LOUO report agreed with our findings and recommendations. In addition, OMB and the 16 agencies to which we did not make recommendations either agreed with our findings, did not agree or disagree with the findings, or stated that they had no comments. Further, multiple agencies provided technical comments, which we have incorporated, as appropriate. The following eight agencies agreed with our recommendations: In written comments from Education, the agency stated that it concurred with the recommendation and indicated its intent to address it. Education's comments are reprinted in appendix IV. In written comments from HHS on the LOUO version of this report, the agency stated that it concurred with the recommendation and intends to evaluate ways to provide its modernization plan, including milestones and a description of the work necessary to modernize the system. HHS also provided technical comments that we incorporated, as appropriate. HHS deemed some of the information in its original agency comment letter pertaining to particular legacy systems to be sensitive, which must be protected from public disclosure. Therefore, we have omitted the sensitive information from the version of the agency comment letter that is reprinted in appendix V of this report. In written comments, DHS stated that it concurred with our recommendation. DHS's comments are reprinted in appendix VI. In comments received via email from Transportation's Director of Audit Relations and Program Improvement on May 9, 2019, the agency stated that it agreed with our recommendation. In comments from Treasury's Supervisory IT Specialist/Performance and Governance Analyst, received via email on May 17, 2019, the department stated that it agreed with our recommendation.
In addition, Treasury's component agency, IRS, provided written comments stating that it agreed with the recommendation. The agency said it intends to develop a multiyear retirement strategy for its system to address the recommendation. In its written comments, IRS also stated that our draft report did not accurately convey that the legacy system replacement project is intended to only replace core components of its selected legacy system. The agency said that, even when the entire replacement project is completed, it will only address a portion of the work required to retire the legacy system. In response, we modified our discussion of this project in the report. IRS's comments are reprinted in appendix VII. In written comments from OPM on the LOUO version of this report, the agency stated that it concurred with the recommendation and indicated its plans to address the recommendation. OPM also provided technical comments that we incorporated, as appropriate. OPM deemed some of the information in its original agency comment letter pertaining to particular legacy systems to be sensitive, which must be protected from public disclosure. Therefore, we have omitted the sensitive information in the version of the agency comment letter that is reprinted in appendix VIII. In written comments, SBA concurred with our recommendation and stated that it intends to include a description of the work necessary to modernize the legacy system in the initiative's project plan. The agency estimated that it will address the recommendation by July 31, 2019. SBA deemed some of the information in its original agency comment letter pertaining to particular legacy systems to be sensitive, which must be protected from public disclosure. Therefore, we have omitted the sensitive information from the version of the agency comment letter that is reprinted in appendix IX. In written comments from SSA, the agency stated that it agreed with our recommendation. The agency added that it is modernizing its legacy system using agile software methods and a multiyear roadmap of development activities. The agency further stated that, as it completes its modernization work, it expects to retire most of the legacy software associated with System 10. SSA also provided technical comments that we incorporated, as appropriate. SSA's comments are reprinted in appendix X. In addition, we received responses via email from 14 agencies to which we did not make recommendations. Of these agencies, three agreed with our findings and 11 stated that they did not have comments on the report. Two other agencies, HUD and the U.S. Agency for International Development, provided written comments in which they expressed appreciation for the opportunity to review the report, but did not state whether they agreed or disagreed with our findings. These agencies' comments are reprinted in appendixes XI and XII, respectively. Further, in an email from OMB staff on May 22, 2019, the agency did not state whether it agreed or disagreed with our findings, but provided technical comments that we incorporated, as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretaries of the Departments of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, Labor, State, the Interior, the Treasury, Transportation, and Veterans Affairs; the U.S.
Attorney General (Department of Justice); the Administrators of the Environmental Protection Agency, General Services Administration, National Aeronautics and Space Administration, Small Business Administration, and the U.S. Agency for International Development; the Commissioner of the Social Security Administration; the Directors of the National Science Foundation and the Office of Personnel Management; the Chairman of the Nuclear Regulatory Commission; and other interested parties. This report is also available at no charge on the GAO website at http://www.gao.gov. Should you or your staffs have any questions on information discussed in this report, please contact me at (202) 512-4456 or harriscc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XIII. Appendix I: Objectives, Scope, and Methodology Our objectives were to (1) identify the most critical federal legacy systems in need of modernization and evaluate plans for modernizing them, and (2) identify examples of information technology (IT) legacy system modernization initiatives in the last 5 years that agencies considered successful. The scope of our review included the 24 agencies covered by the Chief Financial Officers Act of 1990. This report presents a public version of a limited official use only (LOUO) report that we are also issuing today. The Department of Homeland Security and the Department of the Interior determined that certain information in our original report should be protected from public disclosure. Therefore, we will not release the LOUO report to the general public because of the sensitive information it contains. The LOUO report includes eight recommendations that we made to eight agencies to document modernization plans for particular legacy systems, including milestones, a description of the work necessary, and details on the disposition of the legacy system. In this public version of the report, we have omitted sensitive information regarding particular legacy systems. Specifically, we have deleted systems' names and other information that would identify the particular system, such as specific descriptions of the systems' purposes and vulnerabilities. Although the information provided in this report is more limited, the report addresses the same objectives as the LOUO report and is based on the same audit methodology. We provided a draft of this report to agency officials to obtain their review and comments on the sensitivity of the information contained herein. We confirmed with the agency officials that this report can be made available to the public without jeopardizing the security of federal agencies' legacy systems. To identify the most critical legacy systems in need of modernization, we first reviewed the agencies' 2017 responses to congressional committees' requests for information that identified the agencies' top three legacy systems in need of modernization. We then asked the agencies to either confirm that those systems were still considered their top systems in need of modernization or update their lists to include the three systems most in need of modernization. All 24 agencies either confirmed or updated their lists of legacy systems most in need of modernization. This resulted in a collective list of 65 systems. However, due to sensitivity concerns, we are not disclosing the names of the systems in this report.
Appendix II provides a generalized list of the systems. To develop a set of attributes for determining systems' obsolescence and their need for modernization, we reviewed available technical literature, such as: General Services Administration's Unified Shared Services Management's Modernization and Migration Management (M3) Playbook and M3 Playbook Guidance, American Technology Council's Report to the President on Federal IT Modernization, Office of Management and Budget's Management of Federal High Value Assets, IBM Center for The Business of Government's A Roadmap for IT Modernization in Government, and American Council for Technology-Industry Advisory Council's Legacy System Modernization: Addressing Challenges on the Path to Success. We also consulted with system development experts within GAO and reviewed our prior report on federal legacy systems. Using these sources, we developed a set of 14 total attributes for determining systems' obsolescence and their need for modernization. We then asked the agencies in our review to provide the associated details for the selected systems. We considered these details to rank the systems against the attributes that we compiled. We assigned point values to each system based on the systems' agency-reported attributes. Table 4 details the nine attributes and associated point values and ranges we used to initially rank the legacy systems. We then totaled the assigned points for each legacy system and ranked the results from highest to lowest number of assigned points. While we had planned to select the top 20 systems with the most points for more detailed analysis, three systems were ranked in nineteenth place. As a result, we selected 21 systems for our review. We collected additional information on the 21 selected systems and performed a second round of analysis, scoring, and ranking. Based on the second set of scores, we identified the 10 systems with the highest scores as being the most critical legacy systems in need of modernization. We also supplemented our review with interviews of officials in the agencies' offices of the Chief Information Officer and program offices for the selected legacy systems. Table 5 details the five attributes and associated point values and ranges we used to rank the legacy systems in the subsequent round of analysis. Table 6 lists these 10 selected systems according to their designated identifiers. However, due to sensitivity concerns, we substituted a numeric identifier for the name of each system. To evaluate agencies' plans for modernizing the 10 federal legacy systems most in need of modernization, we requested that agencies provide us with the relevant plans. These modernization plans could have been contained within several types of documentation, since a system modernization could be a new system development, a system acquisition, or a renovation of the legacy system. For example, if an agency was acquiring a new system from a vendor, the plans for modernization could have been contained within an acquisition plan or a statement of work in a contract. Likewise, if an agency was developing a new system on its own, the modernization plans could have been within a project plan or design document.
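The two-round scoring approach described above, in which point values assigned for each attribute are totaled and the systems are ranked from highest to lowest, can be sketched briefly as follows. The attribute names, point values, and system scores in this sketch are invented for illustration; they are not the actual attributes, ranges, or results reflected in tables 4 through 6.

```python
# Hypothetical sketch of the two-round scoring and ranking approach.
# Attribute names, point values, and scores are illustrative only; they are
# not the actual values or ranges used in tables 4 and 5.

def total_points(attribute_points: dict) -> int:
    """Sum the points assigned to one system across its scored attributes."""
    return sum(attribute_points.values())

def rank_systems(scores_by_system: dict) -> list:
    """Return system identifiers ordered from highest to lowest total points."""
    return sorted(scores_by_system,
                  key=lambda system: total_points(scores_by_system[system]),
                  reverse=True)

# Round 1: score every candidate system on the initial set of attributes.
round_one = {
    "System A": {"age": 8, "criticality": 10, "security_risk": 9},
    "System B": {"age": 5, "criticality": 7, "security_risk": 4},
    "System C": {"age": 9, "criticality": 6, "security_risk": 8},
}
round_one_ranking = rank_systems(round_one)

# Keep the highest-ranked systems for further review (in the actual review,
# a tie at nineteenth place expanded the planned top 20 to 21 systems).
finalists = round_one_ranking[:2]

# Round 2: collect additional information on the finalists, rescore them on a
# second set of attributes, and rank again to identify the most critical systems.
round_two = {
    finalists[0]: {"hardware_age": 7, "operating_cost": 5, "plan_status": 3},
    finalists[1]: {"hardware_age": 9, "operating_cost": 6, "plan_status": 4},
}
final_ranking = rank_systems(round_two)
print(final_ranking)
```

The key point of the sketch is that each round is an independent scoring pass: the second-round ranking is based on additional information collected about the finalists rather than on the first-round totals.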
We reviewed government and industry best practice documentation on the identification and modernization of legacy systems, including: General Services Administration's Unified Shared Services Management's Modernization and Migration Management (M3) Playbook and M3 Playbook Guidance, American Technology Council's Report to the President on Federal IT Modernization, Office of Management and Budget's Management of Federal High Value Assets, IBM Center for The Business of Government's A Roadmap for IT Modernization in Government, and American Council for Technology-Industry Advisory Council's Legacy System Modernization: Addressing Challenges on the Path to Success. Based on our reviews of these sources, we determined that agencies' documented plans for system modernization should include, at a minimum, (1) milestones to complete the modernization, (2) a description of the work necessary to modernize the system, and (3) details regarding the disposition of the legacy system. We then analyzed agencies' documented modernization plans for the selected systems to determine whether the plans included these elements. If an agency's plans included milestones for only a portion of the initiative or only described a portion of the work necessary to complete the modernization, we assigned the agency a partial rating. Appendix III provides details on each of the selected systems and the agencies' plans for modernizing them. To identify examples of successful IT legacy system modernization initiatives, we first asked each of the 24 agencies to provide us with examples of their successful modernization initiatives completed between 2014 and 2018. The agencies reported 94 examples of successful modernization initiatives. We also reviewed the agencies' responses to congressional committees' requests for information to determine other possible successful modernization initiatives at these agencies. Using the examples discovered in this process and the agency-provided examples, we then collected and reviewed documentation describing the modernization initiatives, such as case studies and the agencies' written responses to our questions about the initiatives. We used our professional judgment to select examples that reflected a mix of different agencies, types of system modernization initiatives, and types of benefits realized from the initiatives. We ultimately included in our review those initiatives that two or more members of our audit team selected on that basis. We also coordinated with the selected agencies' Offices of Inspector General to determine whether those offices had any past or current audit work that would contradict the agencies' determination that the selected initiatives were successful. We conducted this performance audit from January 2018 to June 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: The 24 Chief Financial Officers Act Agencies' Most Critical Legacy Systems in Need of Modernization Each of the 24 Chief Financial Officers Act agencies identified its most critical legacy systems in need of modernization.
The agencies identified a total of 65 such systems. The agencies also identified various attributes of the legacy systems, including the systems' age, hardware age, system criticality, and security risk. Table 7 provides a generalized list of the most critical legacy systems in need of modernization, as identified by the agencies, as well as selected factors related to each system's age and criticality. (Due to sensitivity concerns, we substituted alphanumeric identifiers for the names of the agencies' systems. Specifically, we assigned a number to identify each of the 10 most critical legacy systems in need of modernization that we discuss in this report, and we assigned a letter or letters to identify the remaining 55 systems.) Appendix III: Profiles of the 10 Most Critical Legacy Systems in Need of Modernization This appendix describes the 10 most critical legacy systems in need of modernization, as identified during our review. Each system's profile describes (1) the system's purpose, (2) the reason that the system needs to be modernized, (3) the agency's plans for modernization, and (4) possible benefits to be realized once the system is modernized. <7. System 1> The Department of Defense (DOD) U.S. Air Force's System 1 provides configuration control and management to support wartime readiness and operational support of aircraft, among other things. See figure 1 for a photograph of airmen maintaining an aircraft. The Department of Education's (Education) System 2 processes and stores student information and supports the processing of federal student aid applications. Education first implemented System 2 in 1973. Agency officials stated that the system runs approximately 1 million lines of Common Business Oriented Language (COBOL) on an IBM mainframe. COBOL is a legacy language that can be costly to maintain. The department noted that 18 contractors are employed to maintain the COBOL programming language for this and another system. Education officials stated that the agency would like to modernize System 2 to eliminate reliance on COBOL, simplify user interactions, improve integration with other applications, respond to changing business requirements more quickly, and decrease development and operational costs. Education officials stated that the agency intends to modernize System 2 as part of its Next Generation Financial Services Environment initiative. This initiative is to modernize Federal Student Aid's technical and operational architecture and improve the customer experience. The agency expects to consolidate all customer-facing websites and implement a new loan servicing platform to benefit federal student loans. Education has not developed a plan for the modernization of System 2. According to agency officials, these plans are pending the results of a comprehensive information technology (IT) visualization and engineering project that will determine which IT systems and services could be feasibly modernized, consolidated, or eliminated. While Education has not calculated the specific cost savings associated with modernizing System 2, the department anticipates potential cost savings, including decreased hardware and software licensing costs and decreased costs associated with changes to business rules. According to the agency, other potential benefits of modernizing this system include integration across the enterprise, improved cybersecurity and data protection, reduced system complexity, and improved system efficiency. <8.
System 3> The Department of Health and Human Services (HHS) System 3 is a clinical and patient administrative information system. HHS's component, the Indian Health Service (IHS), uses the system to gather, store, and display clinical, administrative, and financial information on patients seen in a clinic, hospital, or remotely through the use of telehealth and home visit practices. HHS officials stated that the modernization of System 3 is imperative. Specifically, the agency noted that the system's technical architecture and infrastructure were outdated. This has resulted in challenges in developing new capabilities in response to business and regulatory requirements. Further, System 3 is coded in C++ and MUMPS. MUMPS is a programming language that HHS considers to be a legacy language. The agency noted that it has become increasingly difficult to find programmers proficient in writing code for MUMPS. Lastly, the system's more than 50 modules were added over time to address new business requirements. The software is installed on hundreds of separate computers, which has led to variations in the configurations at each site. According to IHS, this type of add-on development becomes detrimental over time and eventually requires a complete redesign to improve database design efficiency, process efficiency, workflow integration, and graphical user interfaces. While the agency does not yet have modernization plans, in September 2018, HHS awarded a contract to conduct research for modernizing IHS's health information technology (IT) infrastructure, applications, and capabilities. According to the department, the research will be conducted in several stages over the next year, and a substantial part of the research will be an evaluation of the current state of health IT across IHS's health facilities. Once the research is conducted, in consultation with IHS and its stakeholders, the contractor will use the findings and recommendations to propose a prioritized roadmap for modernization. According to HHS, the agency will be completing the modernization initiative over the next 5 years, but anticipated that it may be able to begin to execute an implementation plan as early as 2020. With regard to potential cost savings, HHS noted that the modernization will take significant capital investment to complete, and it is unknown whether the modernization will lead to cost savings. HHS officials stated that this modernization could improve interoperability with its health care partners, the Department of Veterans Affairs and the Department of Defense, and significantly enhance direct patient care. <9. System 4> The Department of Homeland Security's Federal Emergency Management Agency's (FEMA) System 4 consists of routers, switches, firewalls, and other network appliances (all referred to as devices) to support the connectivity of FEMA sites. According to the agency, System 4 needs to be modernized because there are significant cyber and network vulnerability risks associated with its end of life (i.e., no longer supported or manufactured by the vendor) devices. In particular, the system's devices typically require replacement every 3 to 5 years from the date of purchase. Despite this, the majority of the hardware was purchased between 8 and 11 years ago. As of December 2018, about 545 of these devices were at the end of life. In a September 2018 security assessment report, System 4 received 249 security findings, of which 168 were high or critical risk to the system.
Further compounding this issue, the agency is not certain exactly how many devices make up the system. In particular, FEMA officials stated that the vendor completed an inventory of devices in May 2018, but that inventory did not align with other inventory counts. As a result, the agency plans to develop an inventory reconciliation strategy and process to address this issue. FEMA intends to replace System 4's devices in two phases. The first phase will target the agency's smaller facilities, while the second phase is to address the larger facilities, which may require more complex installations. FEMA's Office of the Chief Information Officer is conducting site surveys to better define requirements and cost estimates. While the agency has yet to develop finalized modernization plans with milestones for this initiative, DHS officials and contract information technology staff developed a list of future recommended activities that would help modernize the system as part of their November 2018 quarterly business review. Despite the lack of finalized plans, FEMA intends to replace 240 of the 545 devices that are at the end of support, if funds are available. The agency also intends to upgrade the remaining 305 devices in the future, if funds are available. The agency has not calculated the exact amount of cost savings. Once the system is completely updated and a lifecycle replacement operations and maintenance support plan is in place and funded, FEMA and DHS expect to realize cost savings based on new technology and increased throughput. Further, the agency stated that with new equipment, it would be able to meet mission requirements and take advantage of new technologies. In addition, replacing these unsupported devices would significantly reduce downtime and increase network availability. <10. System 5> The Department of the Interior's (Interior) System 5 is an Industrial Control System (ICS) Supervisory Control and Data Acquisition (SCADA) system that supports the general operation of dams and power plants on a particular river and its tributaries. The system serves its customers by, among other things, starting and stopping the generators, adjusting the output of electricity to assure electric grid stability, and monitoring the operating conditions of dam and power plant equipment. Figure 2 shows an example of an Interior dam. The system is approximately 18 years old and contains obsolete hardware that is not supported by the manufacturers. Further, according to a program official, the system's original hardware and software installation did not include any long-term vendor support. Thus, any original components that remain operational may have had long-term exposure to security and performance weaknesses. In January 2014, the Director of National Intelligence testified that ICS and SCADA systems used in electrical power distribution provided an enticing target to malicious actors and that, although newer architectures provide flexibility, functionality, and resilience, large segments of the systems remain vulnerable to attack, potentially causing significant economic or human impact. Further, according to Interior's system modernization plans, the agency needs to modernize the system in order to increase data collection capabilities and security. Specifically, the system is expected to interface with more plant equipment and collect and report on more data than it has in the past. According to Interior's plans, the modernized system is expected to accommodate future growth requirements.
The plans also support the complete replacement of the system's obsolete hardware and software. The modernization plans also outline goals, milestones, and the work to be accomplished. The agency plans to complete the modernization by January 2020. By replacing the legacy system, Interior plans to realize a number of potential benefits, including annual cost savings of $152,000. In addition, the system will no longer run on obsolete, unsupported hardware. Furthermore, newer software and hardware are expected to allow for the automation of compliance tasks, increase system security, and expand system availability. According to the system's fiscal year 2017 operational analysis, these benefits should create a more reliable system for both the agency and the customers of the networked hydroelectric dams. <11. System 6> The Department of the Treasury's Internal Revenue Service's (IRS) System 6 contains taxpayer data. Many IRS processes depend on output, directly or indirectly, from this data source. System 6 was written in now-outdated assembly language code and Common Business Oriented Language (COBOL). Both we and the department have raised a number of concerns related to this system's reliance on assembly language code and COBOL, the maintainability of the system, and staff attrition. For example, in May 2016, we reported that legacy systems using outdated languages may become increasingly expensive, and agencies may pay a premium to hire staff or contractors with the knowledge to maintain these systems. IRS plans to address these concerns by modernizing core components of System 6. The new system is intended to provide improved functionality. However, IRS is having trouble fully staffing the modernization effort, resulting in significant delays. While the agency has developed modernization plans, they are incomplete. For example, the plans' milestones do not extend past the current project, and the plans describe the work necessary for future stages only at a high level. In May 2019, the agency stated that even when the current modernization effort is fully implemented, only a portion of the work required to retire the legacy system will have been completed. The agency has not provided a target date for decommissioning the legacy system. While IRS does not anticipate cost savings associated with the modernization of this system, it anticipates many internal and external benefits for both the taxpayer and the agency. In particular, according to the IRS's Fiscal Year 2019 Capital Investment Plan, the benefits of modernizing this system include: (1) increased agility of agency response to changing taxpayer priorities and legislation; (2) reduced IT costs and complexity; (3) enhanced analytics and reporting to greatly improve compliance and issue resolution; and (4) reduced burden of manually intensive processes on IRS employees, by enabling automated calculations that currently are not possible. <12. System 7> The Department of Transportation's (Transportation) Federal Aviation Administration's (FAA) System 7 contains information on aircraft and pilots. The system also provides information to other government agencies, including those responsible for homeland security and investigations of aviation accidents. According to Transportation, the system is DOS-based and needs to be updated to continue to efficiently meet its mission. Specifically, some of the core system components are mainframe applications that have been in operation since 1984.
In addition, the system is running unsupported software, including one operating system that was last supported by the vendor in 2010. FAA is planning to implement a new system to streamline processes, allow for the submission of electronic applications and forms, automate registration processes, improve data availability, and implement additional security controls. However, the agency does not currently have a documented modernization plan. Officials stated that the agency is seeking alternatives to modernize the system and meet legislative requirements. FAA has asked interested vendors to respond to a request for information. According to the agency, the responses to this request are intended to inform strategic decisions about the modernization and are planned to ultimately lead to proposed solutions from industry. While FAA has not calculated the specific cost savings associated with modernizing the system, the agency stated that it anticipates potential cost savings. Agency officials stated that they plan to have information on the anticipated cost savings in November 2019. The agency also expects that the modernized system will provide enhanced security. <13. System 8> The Office of Personnel Management's (OPM) System 8 consists of the hardware, software, and service components that support OPM's information technology (IT) applications and services. This system supports the agency's business functions and its provision of investigative products and services for more than 100 federal agencies. Modernizing this system is especially important due to past security incidents and persistent security concerns. Specifically, according to OPM, segments of the agency's infrastructure were allowed to age beyond end of life and now pose a significant performance and security risk to IT operations. Further, in October 2017, OPM's Office of the Inspector General (OIG) reported that the agency's IT environment contained many instances of unsupported software and hardware, where the vendor no longer provided patches, security fixes, or updates for the software. As a result, the OIG noted that there was increased risk that OPM's IT environment contained known vulnerabilities that would never be patched and could have been exploited to allow unauthorized access to data. In June 2015, OPM reported that an intrusion into its systems had affected the personnel records of about 4.2 million current and former federal employees. Then, in July 2015, the agency reported that a separate but related incident had compromised its systems and the files related to background investigations for 21.5 million individuals. At a June 2015 congressional hearing, OPM's Director stated that the modernization of the IT infrastructure was critical to protecting the agency's data from adversaries. The Director also stated that it was not feasible to implement encryption on networks that were too old, but noted that OPM was taking other steps to secure the networks. OPM plans to modernize System 8 by upgrading hardware at the end of life, migrating off of legacy operating systems and support software, and augmenting the agency's established policies and procedures. In fiscal year 2018, OPM completed software and hardware upgrades, including replacement of core switches, network end points, and laptops. In fiscal year 2019, the agency plans to continue its focus on refreshing aged IT infrastructure, so that its hardware components will have the proper vendor support.
OPM developed multiple documents related to the planning of this modernization effort, including a modernization schedule and its fiscal year 2019 budget justification. However, the modernization plans contained in these documents did not include details for the entire modernization effort. The milestones in these documents, for instance, either were no longer current or covered only one part of the project. While the budget justification did outline what the agency planned to accomplish in fiscal years 2018 and 2019, it did not mention the rest of the work needed to complete the infrastructure modernization. Similarly, the OIG has reported concerns regarding the agency's plans to modernize its infrastructure. Most recently, in June 2018, the OIG reported that OPM was generally continuing in the right direction toward modernizing its IT environment, but the OIG had concerns with the agency's plan for modernization and its overall approach to IT modernization. For example, the OIG was concerned that OPM's planning documents did not identify the full scope of the modernization effort or contain cost estimates for the individual initiatives or the effort as a whole. The OIG planned to monitor and continue to report on the agency's progress in modernizing its infrastructure. OPM anticipates realizing both financial and nonfinancial benefits with the modernization of its infrastructure. For example, as part of its overall infrastructure modernization, the agency avoided approximately $16 million in costs through its data center consolidation efforts for fiscal year 2018. The agency also expects that cybersecurity and operational risks associated with end of life hardware will be reduced. To that end, the agency stated that remediating end of life hardware should also allow OPM to address identified security vulnerabilities and avoid operational downtime, as support is more readily available. <14. System 9> The Small Business Administration's (SBA) System 9, according to the agency, provides identification, authentication, and authorization services for several of the agency's applications. According to the agency, the system was developed by SBA and originally implemented in 2002. Agency officials stated that System 9's hardware and software are no longer supported by the associated vendors. Consequently, according to the agency, it is paying for extended support contracts that have increased operating costs for the system. Further, agency officials stated that the system resides on a platform that is scheduled to be decommissioned within the next year. In addition, the system is coded using a programming language (among others) that the agency considers to be a legacy language. The agency's documented modernization plan includes milestones to complete the modernization and plans for the disposition of the legacy system following system modernization; however, the plan does not include a description of the work necessary to complete the modernization. Agency officials stated that SBA intends to replace the system's functionality with login.gov. Login.gov was developed and is maintained by the General Services Administration as a single sign-on trusted identity platform. Login.gov provides identification and authentication for applications and is intended to offer the public secure and private online access to participating government programs.
However, according to the agency, since login.gov does not provide authorization controls, SBA intends to develop additional software to provide them, beginning in March 2019. According to the agency, it does not anticipate any cost benefits from modernizing System 9. However, the agency expects that the security and stability of the system will increase. <15. System 10> The Social Security Administration's (SSA) System 10 supports the provision of particular Social Security benefits to eligible people. Currently, SSA collects detailed information from the recipients in person, by telephone, and via the internet on multiple platforms (e.g., desktops and hand-held devices), and from internal and external interface methods. System 10 is composed of many applications that collect information, make payments, and communicate with SSA's clients. According to SSA's October 2017 information technology modernization plan, the agency needed to modernize its core systems, including System 10, because of complications related to their age and original system design. SSA's modernization plan indicates that, since implementation, these systems had been subjected to constant modifications to incorporate changes in legislation, regulations, and policy. Through the years, new technologies and capabilities had been integrated into the core systems, and delivering new capabilities was becoming exorbitantly expensive. Further, most of the agency's systems, including System 10, are generally unconnected to each other, creating functional silos servicing independent lines of business. According to the agency, navigating these systems is challenging, and copying beneficiary data from system to system can result in data becoming out of sync. According to the agency's modernization plan, SSA intends to replace its core systems, including System 10, with new components and platforms engineered for usability, interoperability, and future adaptability. Work accomplished over several years of incremental modernization has already resulted in moving a substantial portion of System 10 away from old technologies. For instance, according to SSA officials in the Office of the Deputy Commissioner, Systems, SSA moved System 10 to a modern, relational database platform and modernized aspects of the user interface. According to an SSA 5-year modernization roadmap, the agency is currently working to modernize and create web services as part of the effort to consolidate SSA's initial claims processes; however, the roadmap does not offer specific information about these efforts. As for its modernization planning efforts, SSA's plans include overall modernization goals, a high-level overview of the planned system architecture, milestones for fiscal year 2018, and a description of the work that it had planned to accomplish in fiscal year 2018. However, the plans do not include either System 10-specific milestones or a description of the work necessary to modernize the legacy system beyond fiscal year 2018. Further, the document does not include plans for the disposition of the legacy system after modernization. According to officials in the Office of the Deputy Commissioner, Systems, the agency will update the planning documentation and make further decisions as the modernization effort progresses. SSA expects that modernizing System 10 will result in cost savings in addition to many other benefits.
For instance, the agency expects that it will be able to save approximately $38 million from modernizing System 10 and other systems running in the agency's mainframe environment. In addition, increased staff access to benefit recipients' data will enable staff to review medical evidence faster and process claims more accurately, among other things. According to the agency's modernization plan, the improvements to the system should improve productivity and service to the public, as well as reduce the number of improper payments due to technician error. Appendix IV: Comments from the Department of Education Appendix V: Comments from the Department of Health and Human Services Appendix VI: Comments from the Department of Homeland Security Appendix VII: Comments from the Internal Revenue Service Appendix VIII: Comments from the Office of Personnel Management Appendix IX: Comments from the Small Business Administration Appendix X: Comments from the Social Security Administration Appendix XI: Comments from the Department of Housing and Urban Development Appendix XII: Comments from the U.S. Agency for International Development Appendix XIII: GAO Contact and Staff Acknowledgments <16. GAO Contact> <17. Staff Acknowledgments> In addition to the contact named above, the following staff made key contributions to this report: Dave Powner (Director), Kevin Walsh (Assistant Director), Jessica Waselkow (Assistant Director), Chris Businsky, Rebecca Eyler, Angel Ip, and Meredith Raymond. Why GAO Did This Study
The federal government plans to spend over $90 billion in fiscal year 2019 on IT. About 80 percent of this amount is used to operate and maintain existing IT investments, including aging (also called legacy) systems. As they age, legacy systems can be more costly to maintain, more exposed to cybersecurity risks, and less effective in meeting their intended purpose.
GAO was asked to review federal agencies' legacy systems. This report (1) identifies the most critical federal legacy systems in need of modernization and evaluates agency plans for modernizing them, and (2) identifies examples of legacy system modernization initiatives that agencies considered successful.
To do so, GAO analyzed a total of 65 legacy systems in need of modernization that 24 agencies had identified. Of these 65, GAO identified the 10 most in need of modernization based on attributes such as age, criticality, and risk. GAO then analyzed agencies' modernization plans for the 10 selected legacy systems against key IT modernization best practices.
The 24 agencies also provided 94 examples of successful IT modernizations from the last 5 years. In addition, GAO identified other examples of modernization successes at these agencies. GAO then selected a total of five examples to highlight a mix of system modernization types and a range of benefits realized.
This is a public version of a sensitive report that is being issued concurrently. Information that agencies deemed sensitive has been omitted.
What GAO Found
Among the 10 most critical legacy systems that GAO identified as in need of modernization (see table 1), several use outdated languages, have unsupported hardware and software, and are operating with known security vulnerabilities. For example, the selected legacy system at the Department of Education runs on Common Business Oriented Language (COBOL)—a programming language that has a dwindling number of people available with the skills needed to support it. In addition, the Department of the Interior's system contains obsolete hardware that is not supported by the manufacturers. Regarding cybersecurity, the Department of Homeland Security's system had a large number of reported vulnerabilities, of which 168 were considered high or critical risk to the network as of September 2018.
Of the 10 agencies responsible for these legacy systems, seven agencies (the Departments of Defense, Homeland Security, the Interior, the Treasury; as well as the Office of Personnel Management; Small Business Administration; and Social Security Administration) had documented plans for modernizing the systems (see table 2). The Departments of Education, Health and Human Services, and Transportation did not have documented modernization plans. Of the seven agencies with plans, only the Departments of the Interior and Defense's modernization plans included the key elements identified in best practices (milestones, a description of the work necessary to complete the modernization, and a plan for the disposition of the legacy system). Until the other eight agencies establish complete modernization plans, they will have an increased risk of cost overruns, schedule delays, and project failure.
What GAO Recommends
In the sensitive report, GAO is making a total of eight recommendations—one to each of eight agencies—to ensure that they document modernization plans for the selected legacy systems.
The eight agencies agreed with GAO's findings and recommendations, and seven of the agencies described plans to address the recommendations. |
gao_GAO-20-266 | gao_GAO-20-266_0 | <1. Background> The federal government has long recognized the need to protect itself by ensuring contractors have appropriately allocated costs on cost-based contracts. In terms of what is potentially covered by CAS, cost-based contracts include cost-type contracts and certain fixed-price contracts where the contractor s estimated or actual costs play a role in determining the amount the government pays. The total amount obligated annually by the government on these types of contracts is significant. For example, in fiscal year 2018, the federal government obligated approximately $172 billion on cost-type contracts alone, according to our analysis of Federal Procurement Data System information. <1.1. Need for Uniform Cost Accounting Standards> In 1968, the House Banking and Currency Committee held hearings to determine whether to renew the Defense Production Act of 1950. A witness at the hearings, U.S. Navy Admiral Hyman G. Rickover, testified that defense suppliers could make excessive profits and disguise them as overhead costs or hide them in other ways in the absence of a set of uniform cost accounting standards. Witnesses at the time testified that it was difficult to compare costs among prospective contractors cost estimates or even to assess costs incurred on contracts with the same contractor without a set of uniform and consistent standards. Congress subsequently directed us to study the feasibility of establishing such standards. In January 1970, we reported one of many examples of mischarges involving a contractor that had charged the government for costs above the allowed cost ceiling by moving them under a separate contract cost category. We concluded that then-existing financial reporting standards were neither created nor adequate for contract cost purposes. In addition, we concluded that it was feasible to create a set of cost accounting standards and recommended doing so. <1.2. Creation of Board and Cost Accounting Standards> In August 1970, Congress created the Board as an independent board within the legislative branch. The Board was initially chaired by the Comptroller General, who appointed four other members. The Board was authorized to promulgate standards designed to achieve uniformity and consistency in cost accounting practices used by federal contractors on defense contracts in excess of $100,000. The Board issued 19 cost accounting standards that went into effect between 1972 and 1980 for applicable DOD contracts. These standards covered areas such as consistency between how actual and estimated costs are calculated and reported, and ensuring that costs are not double- counted. The standards were intended to ensure that incurred costs were appropriately allocated to government contracts. <1.3. Generally Accepted Accounting Principles> In contrast, GAAP is a set of U.S. accounting standards, conventions, and rules focused on measuring companies financial performance. GAAP is meant to establish and improve financial accounting and reporting to provide useful information to investors and other users of financial reports, including measurement and recognition of costs in financial statements. Federal endorsement of generally accepted accounting practices or principles dates back to the Securities Act of 1933. Then, the Securities Exchange Act of 1934 created the Securities and Exchange Commission and gave it authority to oversee accounting and auditing methods for publicly traded companies. 
Subsequently, various professional accounting groups, with oversight by the Securities and Exchange Commission, began working to establish standards and practices for consistent and accurate financial reporting, which became known as GAAP. In 1973, the Securities and Exchange Commission recognized the Financial Accounting Standards Board (FASB) as the designated accounting standard setter for public companies in the United States, and FASB is responsible for GAAP. <1.4. History of the Cost Accounting Standards Board> In fiscal year 1981, Congress stopped funding the Board. However, after a number of disputes arose as to how to interpret various standards, Congress reestablished the Board in 1988. Congress placed the Board under OFPP, which is part of the Office of Management and Budget (OMB) within the executive branch. Congress also broadened the Board s authority by applying CAS to all federal contracts they were previously applicable only to defense contracts. Table 1 provides more information on the differences between CAS and GAAP. The Board met intermittently to address issues associated with interpretations of the standards after it was reestablished in 1988. In the late 2000s, the Board revised two of the standards related to pension contributions by government contractors for their employees. Effective in 2008, Congress changed the minimum contributions required to fund pension plans. This change caused pension contributions to greatly exceed CAS pension costs reflected in contract prices. The Board updated the CAS effective in 2012 to harmonize CAS pension costs with statutory changes to the pension funding requirements. However, the Board s changes did not address how costs were settled when pension plans were curtailed. In January 2013, we recommended that the Board set a schedule to revise parts of the CAS dealing with settlement of pension plan curtailments. Citing our recommendation, the Board began efforts to resolve this issue in July 2013 and the work is on-going. While Board staff have been working to resolve these pension issues, the Board went several years without holding official meetings of the full board. Figure 1 illustrates the Board s activities over time. The current Board is comprised of five members. The Administrator of OFFP is a member and serves as Board Chair. The other members include representatives from DOD, the General Services Administration, industry, and another private sector representative with cost accounting expertise. According to OFPP officials, the Board is also assisted by two OFPP staff one on a full-time basis and one on a part-time basis and a detailee from DCAA. In addition, OFPP officials said that the Board forms interagency working groups to address specific issues, such as pension harmonization. The Board receives its funding from OMB and does not have a separate funding source. According to OFPP officials, the Board s main expenses were salary reimbursement for the non-government employees who serve on the Board and publication costs for Federal Register notices. Other federal agencies also have responsibilities to help administer the standards. For example, according to OFPP officials, most CAS-covered contracts are defense related. As such, DCAA reviews federal contractors disclosure statements for adequacy and compliance that is, whether the statements are current, accurate, and complete. 
Disclosure statements describe the company s actual or proposed cost accounting practices, including how they distinguish between costs, and how costs are allocated to contracts. DCAA also conducts audits to ensure contractors comply with CAS and with the contractors disclosed and established cost accounting practices and procedures. In addition, DCMA monitors contractor performance and the contractor s business management systems, among other things, to ensure that the contractor is consistently following its cost accounting practices for contracts that are subject to CAS. There is no definitive list of the companies, business segments or units, or contracts that are subject to CAS. Whether a contractor s business segment is required to comply with the standards on a particular contract depends largely on the value of the government contracts it is awarded during the year that are cost-based. Once a contractor s business segment exceeds a certain dollar threshold of these CAS-covered contracts, the business segment is required to comply with either (1) all 19 standards (termed full CAS-coverage ) or (2) four standards (termed modified CAS-coverage ). Full coverage applies to business segments with CAS-covered contracts with a combined value of $50 million or more. Modified coverage may apply to business segments with a single CAS- covered contract of $7.5 million or more, and combined CAS-covered contracts valued at less than $50 million. Table 2 below lists all 19 CAS required under full coverage and the four CAS required under modified coverage. <1.5. Prior GAO Reports and Recent Studies Related to the Board and Cost Accounting Standards> We and congressionally established review panels have previously studied the potential impact of CAS on industry as well as possible changes to the CAS and the Board. For example: In April 1994, we reported that seven of eight companies we reviewed either kept their government contracting work separate from their commercial contracting or assigned additional staff to their government contracting segments due to the increased demands of government contracting, citing, among other things, CAS as a factor in that decision. In January 1997, we reported on DOD s efforts to address acquisition cost drivers based, in part, on a prior DOD-directed study that identified CAS as one of the 10 largest cost drivers on DOD contracts. In that report, a DCMA official noted that, in his opinion, while the annual cost of maintaining a CAS-compliant system is relatively small, the cost to establish a CAS-compliant system may be significant. Congress asked us to lead a panel of experts to assess the future role of the Board. In April 1999, we issued the panel s report focused on the Board and CAS in light of acquisition reforms and the evolution of GAAP. The panel concluded that, among other things, the Board should review CAS and its attendant requirements to determine whether standards could be streamlined to reduce unnecessary burden on affected contractors. In addition, the panel made several recommendations, including moving the Board out of OFPP to ensure autonomy. Congress did not act on this recommendation. The panel also recommended reviewing contract applicability and full-coverage thresholds for CAS. Congress subsequently set the modified coverage ceiling at $7.5 million in October 1999. 
In July 2017, most of the 12 companies we spoke with that had not done business with DOD told us they chose not to do so because it might trigger a large number of contract terms and conditions that would be expensive to implement. One reason provided by the companies for not competing for certain types of DOD contracts was the requirement to establish a government-unique cost accounting system and to disclose and follow cost accounting practices consistently. In June 2018, the Section 809 Panel having been established to advise Congress on streamlining defense acquisition regulations released the second of three volumes of its report. In its report, the panel made two recommendations related to CAS, which largely reiterated what the GAO-led panel recommended in 1999. In this regard, the Section 809 panel recommended that the Board should be relocated to the General Services Administration as an independent board with a budget sufficient to support at least three full-time, permanent staff. The panel also recommended raising CAS applicability threshold levels again to further reduce burden on contractors. Subsequently, in 2019, OMB submitted a legislative proposal on raising the CAS applicability threshold from $2 million to $15 million. OMB officials also indicated that they would continue analyzing the effects of additional threshold changes. Congress had not enacted the proposal into law at the time of this report. <2. Board Efforts Generally Comply with Recent Legislative Requirements> The CAS Board generally has complied or is in the process of complying with the administrative and reporting requirements prescribed by Section 820 of the National Defense Authorization Act for Fiscal Year 2017, including initial efforts to assess the extent to which CAS can be conformed with GAAP. To do so, the Board is taking steps to follow its statutorily prescribed four-step rulemaking process. The Board s initial efforts focus on the extent to which two of the 19 standards might be modified or eliminated; however, Board members indicated that these efforts may take several more years to complete. <2.1. CAS Board Has Generally Complied with Administrative and Reporting Requirements> The Board has generally complied with the administrative requirements prescribed under Section 820 thus far, including meeting regularly, generally publishing notices and agendas in advance of meetings, and reviewing disputes involving cost accounting-related matters. According to officials from the Office of Federal Procurement Policy, the Board is working on the first of its annual reports on its efforts, including those associated with efforts to conform the standards with GAAP where practicable. Table 3 highlights the steps the Board is taking to address some of the administrative and reporting requirements mandated by Section 820. <2.2. Board Has Undertaken Initial Efforts to Assess How CAS Can Be Conformed with GAAP> The Board has also taken initial steps to address Section 820 s requirement that the Board review the standards and conform them to GAAP, where practicable (see table 4). In carrying out this work, the Board is taking steps to follow a statutorily prescribed four-step rulemaking process for promulgating CAS or interpretations. Figure 2 below outlines these requirements. In line with this process, between March and November 2018, the Board discussed the opportunities and methods available for conforming CAS to GAAP. 
The Board held informal discussions with its staff, industry representatives, and government agencies, such as DCAA. One of the messages coming from the feedback was for the Board to focus first on those standards that offered the greatest potential for change. By the end of 2018, the Board had completed development of the staff discussion paper. The Board expected to release this document in the Federal Register for public comment in January 2019; however, a partial shutdown of the federal government due to lapsed funding delayed its release until March 2019. The March 2019 staff discussion paper (1) outlined a set of five guiding principles that the Board would use to assess whether proposed CAS changes are necessary and whether those changes would reduce the burden on contractors while protecting the government s interests, (2) identified a roadmap that prioritized the Board s proposed review of standards, and 3) included a preliminary comparison of two standards to GAAP. Guiding Principles. The guiding principles outlined in the staff discussion paper describe the elements the Board will consider when determining whether changes to the CAS will reduce burden on contractors while continuing to protect the interests of the federal government. As stated in the staff discussion paper, the Board will: 1) reduce CAS requirements where practicable; 2) consider whether the proposed action would reduce burden on 3) consider whether other CAS or federal rules would protect the government s interests in case of any gaps created by relying on GAAP; 4) monitor future changes to GAAP and the Federal Acquisition Regulation (FAR) to identify and evaluate their impact on CAS and revise CAS, as necessary; and 5) monitor future significant disputes related to the conformance to GAAP and evaluate whether the Board should address them through clarifying guidance or rulemaking. Prioritization. The Board grouped the 19 CAS into four categories based on the Board s assessment of which standards are most likely to have overlap with GAAP (see figure 3). The Board plans to focus its initial efforts on the seven standards in the first group, which focus on cost measurement and assigning costs to accounting periods. According to its staff discussion paper, the Board s proposed approach is to assess the standards by developing side-by-side comparisons of CAS requirements to corresponding GAAP requirements and identifying any gaps between the two. The Board will then evaluate the potential risk of any gaps identified, taking into account coverage by other CAS requirements and related regulations; for example, the FAR. The Board will also assess whether there is a history of compliance issues for those standards. According to OFPP officials, such assessments will help the Board determine whether they need to update guidance related to a particular CAS. Lastly, the Board plans to assess changes that have occurred in GAAP relative to CAS and to evaluate the need to conform CAS to the updated GAAP. For example, the Board has identified two recent changes in GAAP that it states may not align with CAS. Comparison. The Board has begun this effort by looking at two standards focused on measuring and assigning costs (CAS 408 and CAS 409), since it believes that GAAP potentially provided additional coverage compared to when the two CAS were established in 1975. 
OFPP officials stated that these two standards provided a good opportunity to modify and potentially eliminate duplicative coverage while testing the soundness of the Board s approach to conform CAS to GAAP where practicable. <2.3. Public Response to the Board s Approach Has Been Mixed> The Board received seven separate comment letters on the staff discussion paper from five industry organizations, one commercial business, and one private individual. Our review of the comments found that they were largely supportive of the Board s guiding principles, but some commenters raised concerns regarding the Board s approach to its conformance effort and questioned whether it would ease the burden on contractors. For example, four respondents commented that the Board should not limit its focus to only revising or eliminating particular CAS when it was clear that GAAP provided adequate coverage. Instead, these industry groups stated that each CAS should be eliminated unless proven to be absolutely necessary due to the barriers to contractors that these groups believe the CAS create. The Board members we met with stated that all options for refining CAS requirements were on the table. However, they also stated that GAAP and CAS are focused on two separate goals the former on a business s high-level financial statement, the latter on individual contract costs. Board members, as well as DCAA and DCMA officials, noted that eliminating CAS requirements to rely purely on GAAP standards would limit the government s ability to compare contract proposals, assess actual costs to avoid overcharges by contractors, and protect its interests. For example, DCMA officials stated that the government has $3.1 billion in pending litigation for identified CAS noncompliances. Recovery of increased costs is accomplished in part through contract clauses that entitle the government to recover specific cost increases on affected CAS-covered contracts. Were CAS and the associated contract clauses eliminated, DCAA and DCMA officials noted that the government s ability to recover these costs would be greatly reduced. In addition, the Board is concerned that in modifying or perhaps even eliminating certain CAS requirements, and instead using GAAP, there is the risk that future GAAP changes would no longer cover the areas of CAS concern. This would leave the government vulnerable to the issues that the modified or eliminated CAS were originally created to address. Members of the Board and staff we spoke with indicated that the Board is reviewing and assessing the public comments on the staff discussion paper to determine whether the Board needs to make changes to the paper s guiding principles or methodology going forward. According to the Board members, the Board will issue a Federal Register notice explaining any changes resulting from public input and its own additional deliberations. Additionally, they said the Board will consult with the Financial Accounting Standards Board which is responsible for GAAP to answer technical questions and ensure that the Board has an accurate understanding of GAAP coverage as they continue to perform side-by- side comparisons of CAS and GAAP. Further, the Board stated that it will publish a notice to address public comments on CAS to GAAP conformance projects and that additional staff discussion papers and associated notices will be published in the Federal Register for public comment as they are completed. 
In addition to streamlining or eliminating CAS standards, some of the comments in response to the staff discussion paper pointed to other areas that the Board may want to consider to reduce the burden on government contractors. For example, some comments encouraged the Board to consider increasing CAS full-compliance dollar thresholds. Reassessing the CAS full-compliance threshold aligns with findings from congressionally established panel reports from 1999 and 2018. According to both panels findings, increasing compliance thresholds is a way to decrease burden on many government contractors while still protecting the bulk of the government s contracting dollars. As previously noted, OMB recently submitted a legislative proposal to raise the threshold from $2 million to $15 million. OMB also indicated that it intends to continue studying available data to understand the costs and benefits of CAS threshold changes and whether additional changes to the threshold need to be made. <3. Agency Comments> We provided a draft of this report to DOD, the Office of Management and Budget, and the Cost Accounting Standards Board for their review and comment. DOD had no comments on the report. The Office of Management and Budget and the Board provided technical comments, which we incorporated where appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense; the Director, Defense Procurement and Contracting; the Director, Defense Contract Audit Agency; the Director, Defense Contract Management Agency; the Director, Office of Management and Budget; and the Administrator, Office of Federal Procurement Policy. In addition, this report will be available at no charge on the GAO website at https://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or by e-mail at dinapolit@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Appendix I: Description of the 19 Cost Accounting Standards and Their Purpose Appendix II: Cost Accounting Standards Applicability, Exemptions, and Compliance Cost Accounting Standards Applicability. In general, a business segment is not subject to Cost Accounting Standards (CAS) until it receives a non-exempt contract of $7.5 million or more from the federal government. Generally, a non-exempt contract is a contract that does not meet any of the exemptions listed below. Typically, once a business segment receives a non-exempt contract of $7.5 million or more, all of its prospective non-exempt contracts or subcontracts over $2 million are considered CAS-covered. Summary of Exemptions. The following categories of contracts and subcontracts are exempt from all CAS requirements: Sealed bid contracts; Negotiated contracts and subcontracts not in excess of the Truth in Negotiations Act (TINA) threshold, as adjusted for inflation (41 U.S.C. 1908 and 41 U.S.C. 1502(b)(1)(B)). 
For purposes of this exemption, an order issued by one segment to another segment shall be treated as a subcontract; Contracts and subcontracts with small businesses (Federal Acquisition Regulation (FAR) Subpart 19.3 addresses determination of status as a small business.); Contracts and subcontracts with foreign governments or their agents or instrumentalities or, insofar as the requirements of CAS other than CAS 401 and CAS 402 are concerned, any contract or subcontract awarded to a foreign concern; A contract or subcontract where the price is set by law or regulation; A contract or subcontract authorized in FAR 12.207 for the acquisition of a commercial item; A contract or subcontract with a value of less than $7,500,000 if, at the time of award, the business segment of the contractor or subcontractor that will perform the work has not been awarded at least one contract or subcontract with a value of $7,500,000 or more that is covered by the standards. Subcontracts under the North Atlantic Treaty Organization s Patrol Missile Hydrofoil Ship programs to be performed outside of the United States by a foreign concern; A firm-fixed price contract or subcontract awarded on the basis of adequate price competition without submission of certified cost or pricing data. In addition, in cases where the prime contract is exempt from CAS under any of the exemptions at 48 C.F.R. 9903.201-1 any subcontract under that prime is always exempt from CAS. Also, Title 41 of the U.S. Code was amended effective in 2018 to allow executive agency heads can waive CAS requirements for a contract or subcontract with a value of less than $100 million if the business segment is primarily engaged in commercial work and would not otherwise be subject to CAS, or for exceptional circumstances where waiving CAS is necessary to meet agency needs. Compliance. There are two levels of CAS coverage full and modified. Full coverage applies to business segments with CAS-covered contracts valued at $50 million or more; those business segments must comply with all 19 standards. Modified coverage may apply to business segments with CAS-covered contracts valued less than $50 million. Business segments that have contracts awarded with modified coverage must comply with four of the standards. Business segments with full CAS-covered contracts are also required to submit disclosure statements describing the company s actual or proposed cost accounting practices and procedures, including how they distinguish direct costs from indirect costs and the basis used for allocating indirect costs. The Defense Contract Audit Agency (DCAA) reviews disclosure statements for adequacy and compliance that is, whether the statement is current, accurate, and complete prior to contract award and during contract performance. DCAA may also complete CAS compliance audits at the request of the cognizant federal agency official after contract award. In some circumstances, the Defense Contract Management Agency (DCMA) will review disclosure statements that are not audited by DCAA. According to officials, both DCAA and DCMA provide audit findings to the cognizant federal agency official, who then disposes the audit findings by making the final determination of adequacy and compliance. The purpose of disclosure statement audits is to determine whether the contractor s disclosed or established practices are in compliance with CAS rules, regulations, and standards, as well as appropriate acquisition regulations. 
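To make the dollar-threshold logic described in this appendix concrete, the following is a minimal sketch in Python of how the applicability and coverage tests could be expressed. The function names and input structure are illustrative assumptions; the sketch omits the exemptions, waivers, and other conditions listed above and is not an official determination tool.

# Thresholds come from the report text; everything else is a hypothetical simplification.
TRIGGER_CONTRACT_VALUE = 7_500_000      # single non-exempt award that first subjects a segment to CAS
SUBSEQUENT_COVERAGE_FLOOR = 2_000_000   # later non-exempt awards over this amount are CAS-covered
FULL_COVERAGE_FLOOR = 50_000_000        # combined CAS-covered awards at or above this require all 19 standards

def award_is_cas_covered(award_value, segment_already_triggered):
    """Return True if a new non-exempt award would itself be CAS-covered."""
    if segment_already_triggered:
        return award_value > SUBSEQUENT_COVERAGE_FLOOR
    return award_value >= TRIGGER_CONTRACT_VALUE

def coverage_level(segment_cas_covered_total, has_trigger_contract):
    """Return the CAS coverage level for a business segment.

    segment_cas_covered_total: combined value of the segment's CAS-covered contracts.
    has_trigger_contract: True if the segment holds at least one non-exempt
        contract of $7.5 million or more.
    """
    if not has_trigger_contract:
        return "not CAS-covered"
    if segment_cas_covered_total >= FULL_COVERAGE_FLOOR:
        return "full coverage (all 19 standards)"
    return "modified coverage (4 standards)"

# Example: a segment with a $10 million trigger contract and $30 million in total
# CAS-covered awards would fall under modified coverage.
print(coverage_level(30_000_000, has_trigger_contract=True))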
A CAS-related noncompliance may be found if a contractor with a CAS-covered contract proposes a practice that will violate CAS or a government acquisition regulations cost principle, or if the contractor s actual practices are either inconsistent with their own disclosure statement or noncompliant with the cost standards or principles. For example, in 1970, we, along with DCAA auditors, found instances where contractors charged costs as both direct and indirect costs to the same contract, resulting in the contractors recovering the same charge twice. If an auditor discovers a noncompliance issue, the auditor will submit an advisory report to the cognizant federal agency official who makes the final determination. The consequences of a CAS noncompliance can range from a contract adjustment to litigation. According to the DCMA s Contract Dispute Resolution Center, there were 15 judicial decisions issued in CAS-related board and court cases in the last five years. Appendix III: GAO Contact and Staff Acknowledgments <4. GAO Contact> <5. Staff Acknowledgments> In addition to the contact named above, Bruce H. Thomas, Assistant Director; Peter Anderson; Jennifer Baker; Miranda Riemer; Jenny Shinn; Ryan Stott; and Roxanna T. Sun made key contributions to this report. | Why GAO Did This Study
Each year, the federal government obligates billions of dollars on contracts for which the final costs depend, in part, on the amount of overhead and other costs charged to the contract.
Congress created the Board in 1970. The standards it created ensure contractors appropriately charge costs to government contracts. In contrast, GAAP is a set of financial reporting principles that commercial firms may use in preparing financial statements and which include the basis for recognizing and measuring costs in such statements . Industry representatives and others have raised concerns that complying with CAS may be burdensome and questioned whether the government could rely on GAAP.
In 2016, Congress included a provision in law that the Board, among other things, conform CAS with GAAP, where practicable. Congress also included a provision for GAO to assess Board efforts. This report assesses the extent to which the Board is taking steps to meet legislative requirements and describes the Board's efforts to conform CAS to GAAP.
GAO reviewed applicable laws, regulations and guidance, Federal Register notices and other documentation on the Board's activities. GAO also examined the Board's methodology for comparing CAS to GAAP and its preliminary analysis of two of the cost accounting standards. Finally, GAO interviewed Board members and federal procurement officials.
What GAO Found
The Cost Accounting Standards Board (the Board) is generally meeting recent legislative requirements and has taken initial steps to assess the extent to which the government's Cost Accounting Standards (CAS) can be conformed with a set of commercial financial reporting principles known as Generally Accepted Accounting Principles (GAAP).
Comprising five members representing the government and industry, the Board issued 19 standards between 1972 and 1980. After that point, the Board met intermittently until 2016. At that time, Congress included a provision in the National Defense Authorization Act for Fiscal Year 2017 to require the Board to meet quarterly, to review CAS-related disputes, to conform CAS with GAAP where practicable, and to report annually to Congress on its efforts, among other things.
Since the legislation went into effect, the Board has met regularly, has been briefed on CAS-related disputes, and is preparing its initial report to Congress. The Board has also taken initial steps to assess the extent to which CAS can be conformed with GAAP. The Board summarized its approach in a March 2019 staff discussion paper, which it released for public comment. In it, the Board:
outlined a set of five guiding principles to assess whether proposed CAS changes are necessary and whether those changes would reduce the burden on contractors while protecting the government's interests,
identified a roadmap that prioritized the Board's proposed review of the standards, and
included a preliminary comparison of two of the seven standards identified as having the most overlap with GAAP (see figure).
Some comments submitted in response to the discussion paper by industry groups stated that each of the 19 CAS should be eliminated unless proven to be absolutely necessary. Board members told GAO they were considering all options for refining CAS but noted that GAAP and CAS are focused on two separate goals—GAAP on businesses' high-level financial performance, CAS on allocating costs to individual government contracts. The Board and other government officials said that eliminating CAS requirements to rely purely on GAAP would limit the government's ability to protect its interests. |
gao_GAO-20-12 | gao_GAO-20-12_0 | <1. Background> <1.1. CHIP Variation> States have three options for designing their CHIP programs: Medicaid expansion CHIP, separate CHIP, and combination CHIP. Medicaid expansion CHIP. States may operate CHIP as an extension of their Medicaid programs. Under Medicaid expansion CHIP, states expand income eligibility levels for children beyond those of the state s Medicaid program. Medicaid expansion CHIP programs must follow Medicaid rules, including providing all Medicaid covered benefits to enrolled children. Separate CHIP. States may operate their CHIP programs separate from their Medicaid programs. In so doing, the states are not required to follow the same rules as Medicaid; thus, these states have some additional flexibility in designing CHIP, such as determining which benefits to offer and how, if at all, to charge premiums. Combination CHIP. States may have a combination program, where they operate a separate CHIP program, as well as a Medicaid expansion CHIP program, each for a different population of children. For example, some states that operate combination CHIP programs apply different age or income eligibility requirements for their Medicaid expansion CHIP and separate CHIP programs. Similar to Medicaid, CHIP program expenditures are shared between the states and the federal government, but federal matching rates for CHIP are higher than for Medicaid and federal funding for CHIP is capped, with states receiving annual CHIP allotments. The type of CHIP program a state designs may affect the amount of federal funding available to that state in the event the state exhausts available CHIP funding for the year. A state with a Medicaid expansion CHIP program that exhausts available CHIP funding may apply Medicaid funds at the Medicaid matching rate to remaining expenses for enrolled children for that year. However, a state with a separate CHIP program that exhausts available funding would not have access to such funding. In general, states administer CHIP under broad federal requirements that permit flexibility in how they design their programs, including in the services they cover, their upper income eligibility limits, and the fees they charge to participate. In terms of income eligibility, as of January 2019, 19 states, including the District of Columbia, had CHIP upper income eligibility limits of 300 percent of the FPL or higher compared with 32 states whose CHIP upper income eligibility limits were below 300 percent of the FPL. (See fig. 1.) In addition, states can charge beneficiaries fees for CHIP coverage. These fees can vary depending on whether they are enrollment fees, premiums, or other types of cost sharing. Among the states that charge CHIP premiums, the premiums can vary based on family income and the number of children in CHIP. (See table 1.) Although states may charge premiums or have other cost sharing, according to CMS, CHIP provides more affordable coverage than is generally available in the private health insurance market. <1.2. CHIP Crowd-Out> CHIP crowd-out may occur when employers modify or decide not to offer health insurance to their employees or to their dependents, because of CHIP availability. For example, employers who are aware of CHIP may decide not to offer health insurance to employees or their dependents due to concerns about the costs of providing insurance, especially for smaller sized firms, or as a result of changes in federal or state policies, such as requirements resulting from PPACA. 
Crowd-out may also occur when employees drop or decide not to enroll in insurance offered by their employers and enroll their children in CHIP, because of CHIP availability. As we have identified in prior work, assessments of the potential for crowd-out must take into account an understanding of the extent to which private health insurance is available and affordable to low-income families who qualify for CHIP. National survey results show that private health insurance is the most prevalent source of insurance for children; however, there is substantial variation across states in coverage rates. Additionally, the extent to which employers offered individuals insurance varies by family income. For additional information on factors that may affect crowd-out, see appendix I. For information on sources of health insurance for children under age 19, including CHIP and employer sponsored insurance, see appendix II. The type of CHIP program a state designs affects its responsibilities for monitoring and mitigating the potential for CHIP crowd-out. The 42 states with separate CHIP programs including those in combination CHIP states are required to submit CHIP plans that describe reasonable procedures to prevent crowd-out and to report annually to CMS on certain crowd-out related indicators, such as the number of CHIP applicants with access to private health insurance; however, CMS provides states flexibility to decide which crowd-out prevention procedures to use. For example, states can require CHIP applicants to undergo a period of uninsurance prior to enrollment, known as a waiting period, to deter families that have access to private health insurance from dropping that insurance to enroll in CHIP. In contrast, states are not required to take steps to prevent crowd-out for their Medicaid expansion CHIP programs and may only do so if consistent with the Medicaid statute, or if under an approved section 1115 demonstration, which allows states to implement policies that waive certain Medicaid requirements. For states with separate and combination CHIP programs, CMS provides general guidance for minimizing crowd-out, which the agency has modified over time. (See table 2 for a description of the crowd-out related responsibilities.) For example, in 2013, CMS issued regulations to align with a PPACA provision for health plans and health insurance issuers that limited waiting periods to a maximum of 90 days, and established mandatory waiting period exemptions. The regulations also eliminated the application of a CHIP policy requiring that states with separate CHIP programs have different crowd-out prevention procedures in place for children at different income levels. In making this change, CMS noted that available research called into question the prevalence of crowd-out. CMS indicated that its policy still required states to monitor crowd-out and, if a high rate of crowd-out were to occur, states should consider implementing prevention procedures, such as public outreach about other health care options available in the state. In response to crowd-out related recommendations we made in 2009, CMS modified its guidance to collect additional information from states in their 2009 through 2013 annual reports on how they assess the availability and affordability of private health insurance for CHIP applicants. For example, from 2009 through 2013, states were required to report to CMS if the state s CHIP application asked if applicants had access to private health insurance. 
Additionally, states that operated a waiting period without affordability exceptions were asked if the state collected data on the cost of health insurance for an individual or family. However, CMS officials stated that the agency eliminated the questions regarding affordability of private health insurance in 2013, as part of efforts to update the electronic system states use to submit their CHIP annual reports to reflect PPACA enrollment simplification and coordination requirements. CMS officials said some of the questions were duplicative of other state reporting requirements and other questions were deemed irrelevant in light of the establishment of affordability exceptions to waiting periods. <2. Limited Information Exists on the Extent of CHIP Crowd-Out> States reported indicators of potential crowd-out to CMS in their annual reports, although some do not report on these indicators and those that do may calculate them differently. The states also varied in the extent to which they have processes for directly estimating crowd-out; however, CMS officials and officials in selected states told us they understand the occurrence of crowd-out to be low. Further, we identified few published research studies that directly estimated crowd-out; each used different methodologies, resulting in varied estimates. <2.1. Some States Report Information on Two Indicators of Potential CHIP Crowd-Out; One Selected State Directly Measures Crowd-Out> States with separate CHIP programs including those in combination states are required to annually report indicators of potential crowd-out; states must also describe in their CHIP plans other indicators of potential crowd-out they collect. CMS s 2017 CHIP annual report asks these states to report on crowd-out related questions, including two indicators of crowd-out: (1) the percentage of individuals who enrolled in CHIP that have access to private health insurance, and (2) the percentage of CHIP applicants who cannot be enrolled, because they have private health insurance an indicator of potential crowd-out averted. However, not all states with separate CHIP programs track and report information related to these two indicators of potential crowd-out, and those that do may calculate these indicators differently. For example, of the 42 states with separate CHIP programs, the 2017 annual reports showed the following: Four of the 42 states reported that they tracked the number of individuals who have access to private health insurance; the remaining 38 states either did not report tracking this information or did not respond to this question. Of the four states tracking this information, the percentages reported ranged between 0.5 percent and 7 percent of CHIP applicants who have access to private health insurance. Twenty-one of the 42 states reported that they tracked the percentage of applicants who could not be enrolled in CHIP because they were enrolled in private health insurance; the remaining 21 states did not report this percentage to CMS. This is a measure of crowd-out averted due to state oversight of its enrollment process. The percentages reported by the 21 states tracking this information ranged from 0 percent in several states to 18 percent in one state. 
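As a rough illustration of how a state could compute the two indicators described above from its application records, the following minimal Python sketch assumes a simple list of applicant records with flags for private insurance access and enrollment outcomes. The record layout and field names are hypothetical, and, as noted above, states may define and calculate these indicators differently.

# Hypothetical applicant records; field names are illustrative only.
applicants = [
    {"enrolled_in_chip": True,  "has_access_to_private": False, "denied_due_to_private": False},
    {"enrolled_in_chip": True,  "has_access_to_private": True,  "denied_due_to_private": False},
    {"enrolled_in_chip": False, "has_access_to_private": True,  "denied_due_to_private": True},
]

# Indicator 1: percentage of CHIP enrollees with access to private health insurance.
enrollees = [a for a in applicants if a["enrolled_in_chip"]]
pct_enrollees_with_access = 100 * sum(a["has_access_to_private"] for a in enrollees) / len(enrollees)

# Indicator 2: percentage of applicants who could not be enrolled because they
# already had private health insurance (an indicator of crowd-out averted).
pct_denied_private = 100 * sum(a["denied_due_to_private"] for a in applicants) / len(applicants)

print(round(pct_enrollees_with_access, 1), round(pct_denied_private, 1))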
Among the states that reported they do not track individuals with access to private insurance and did not provide a percentage of applicants not enrolled in CHIP because of enrollment in private health insurance, five states indicated that either their electronic eligibility systems did not allow them to capture this information or the data to report this information were not available. CMS officials acknowledged that not all states report on these indicators; however, they noted that states operating separate CHIPs have other processes in place to prevent children with other health insurance from enrolling in CHIP. Further, some states that operate separate CHIP programs describe approaches for directly estimating crowd-out in their CHIP plan amendments. The results of these estimates are not reported to CMS unless they reach a threshold defined by each state. In 2013, CMS required separate CHIP states to submit state plan amendments to CMS to update their eligibility-related policies, including their crowd-out prevention procedures. In response, 17 of the 42 states submitted these amendments and described approaches they would use to directly measure crowd-out. For example: Colorado reported conducting a biennial survey to estimate the percentage of enrollees who dropped group health insurance without good cause to gain eligibility for CHIP, according to its CHIP plan. Connecticut reported comparing the number of children denied CHIP enrollment because they were enrolled in private health insurance to those same applicants who reapplied for CHIP 6 months later, but did not have private health insurance. The crowd-out threshold defined by Colorado and Connecticut is 10 percent; therefore, if these states crowd-out estimates were to exceed 10 percent, each state would collaborate with CMS to identify other procedures to reduce crowd-out. According to CMS officials, no state using this approach to estimate crowd-out has exceeded the percentages established or expressed concerns with crowd-out. States we interviewed varied in the extent to which they estimate crowd- out; however, most states did not view crowd-out to be of concern. Among our six selected states with separate CHIP programs, one state New York directly measures crowd-out. New York asks applicants that dropped their private insurance in the last three months the reasons why they dropped this coverage, which includes responses such as the family s preference for the child to have CHIP benefits over their previously held private health insurance. New York state officials told us they consider instances of crowd-out to include when individuals drop private insurance because CHIP costs and benefits are more favorable. For the last 9 months of 2014, the officials estimated crowd-out in New York to be about 1.9 percent. If New York estimates crowd-out to be higher than 8 percent, state officials told us they will report this to CMS and work with CMS on implementing additional crowd-out prevention procedures. Officials from the other five selected states said they do not actively measure crowd-out, some of them citing limited resources and difficulties developing estimates, and noted that crowd-out was not a high priority for them, because they did not think crowd-out was prevalent in their states. For example, officials from two states said they had not heard any concerns regarding crowd-out from their state legislature, state insurance agencies, or others. 
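A direct estimate along the lines of the New York approach described above, that is, the share of new enrollees who dropped private coverage in the prior 3 months for reasons the state counts as crowd-out, could be computed as in the following minimal Python sketch. The record fields, reason codes, and threshold handling are illustrative assumptions, not an actual state system.

# Hypothetical enrollment records with self-reported reasons for dropping prior
# private coverage; reason codes are illustrative only.
new_enrollees = [
    {"dropped_private_within_3_months": False, "reason": None},
    {"dropped_private_within_3_months": True,  "reason": "lost job"},
    {"dropped_private_within_3_months": True,  "reason": "prefers CHIP benefits/costs"},
]

# Reasons the state treats as crowd-out (dropping coverage because CHIP is more
# favorable, rather than because of a job loss or similar event).
CROWD_OUT_REASONS = {"prefers CHIP benefits/costs"}

crowd_out_count = sum(
    1 for e in new_enrollees
    if e["dropped_private_within_3_months"] and e["reason"] in CROWD_OUT_REASONS
)
crowd_out_rate = 100 * crowd_out_count / len(new_enrollees)

# A state using an 8 percent threshold would flag the result only if exceeded.
if crowd_out_rate > 8:
    print(f"Crowd-out estimate {crowd_out_rate:.1f}% exceeds threshold; report to CMS")
else:
    print(f"Crowd-out estimate {crowd_out_rate:.1f}% is below threshold")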
CMS officials also told us that no state had reported concerns about crowd-out. <2.2. Research on CHIP Crowd- Out Is Limited, Used Different Methods, and Resulted in Varied Estimates; Researchers and Others Identified Challenges in Making Such Estimates> Our review identified few research studies that directly estimated CHIP crowd-out. Specifically, we identified three research studies published from 2013 to 2018; each used different methods and arrived at varying estimates of crowd out. One study estimated crowd-out across 15 states that expanded their CHIP income eligibility requirements between 2008 and 2012 by examining health insurance enrollment changes in a sample of children after they became newly eligible for CHIP. This study estimated that public insurance among children under age 19 increased about 2.9 percentage points during this period, and private insurance decreased by 1.8 percentage points. The study reported that 63 percent of the 2.9 percentage point increase in public insurance was due to crowd-out. The researchers also produced state-level estimates for the effects of CHIP income eligibility expansions on insurance coverage in newly eligible children. These estimates varied by state, suggesting that crowd-out also varies by state. In particular, three states had an increase in public insurance ranging from about 4 to 12 percentage points, and three states had a decrease in private insurance that ranged from about 7 to 14 percentage points. The researchers noted they did not account for factors that may have caused privately insured individuals to increase their use of public insurance, such as changes in the affordability of private health insurance. Another study estimated the effect of CHIP income eligibility expansions on crowd-out in Illinois. This study examined the differences in public and private health insurance between children in Illinois, where CHIP income eligibility was expanded, and children from a combination of other states that did not expand CHIP and were chosen to resemble the demographic characteristics and health insurance profile of Illinois. This study found a 6.5 percentage point increase in CHIP enrollment in 2010 among families between 200 percent and 300 percent of the FPL, and estimated that 35 percent of this increase in CHIP enrollment was due to crowd-out. At other income levels higher than 300 percent of the FPL, the study found either no net effect on private health insurance, or an increase. The third study estimated public and private insurance under different CHIP income eligibility thresholds and different premium schedules. While the study estimated that a CHIP expansion from 200 to 400 percent of the FPL with no premium contribution and a 4 month waiting period increased CHIP enrollment by about 4.5 percentage points and decreased private coverage by about 2.2 percentage points, these estimates do not provide evidence of crowd-out, because the differences in these percentage point estimates were not statistically significant. Although not reporting direct estimates of CHIP crowd-out, we identified other studies that provide related information. For example: In one study, researchers surveyed the parents of current and former CHIP enrollees in 10 states to examine access to private coverage for children enrolled in CHIP. This study found that about 13 percent of new CHIP enrollees had private health insurance in the year before enrolling in CHIP. 
Among the 13 percent, about 18 percent reported that they dropped their private health insurance, because CHIP was more affordable, and about 5 percent dropped their private health insurance, due to a preference for CHIP. The authors noted that access to private coverage among CHIP enrollees is low and when access is available, affordability is a serious concern for parents. The authors concluded that this suggests limited potential for crowd-out. A study published in 2015 that surveyed the parents of about 4,100 new CHIP enrollees to understand why children enrolled in CHIP, among other things, found that 35 percent of these parents reported applying for CHIP, because it was more affordable than the other health insurance options they could obtain for their children. Representatives from national organizations, researchers, and CMS officials we interviewed noted some of the challenges measuring the extent of CHIP crowd-out, including the limitations of available data sources; however, they did not consider crowd-out to be prevalent. For example: Some data sources do not separately collect or categorize CHIP information. For example, the ACS does not specifically ask respondents if their children have health insurance through CHIP; thus, researchers have to manipulate the data to separate CHIP coverage from other forms of public health insurance, such as Medicaid. The methodologies available to separate CHIP from Medicaid respondents have many limitations, according to researchers and U.S. Census Bureau officials we contacted. Accurate crowd-out estimates require researchers to account for the reasons why someone dropped his or her health insurance and enrolled in CHIP, and this information is not captured by national surveys. Researchers may also vary in what they consider to be crowd-out; for example, some may not consider dropping private health insurance and enrolling a child in CHIP because of a job loss or change in employment to constitute crowd-out. Others do not consider it to be CHIP crowd-out when parents drop their private health insurance and enroll in CHIP, because CHIP is more affordable. CMS officials also noted complexities in measuring crowd-out such as variation in definitions of crowd-out and methodologies for measuring it and they said that the agency has not conducted or commissioned its own evaluation. However, CMS officials reiterated that no state has reported concerns with crowd-out and based on their review of studies conducted by researchers understand that its prevalence is likely low. <3. CMS Tracks States Procedures to Address Potential CHIP Crowd-Out; States Ask Applicants about Other Sources of Coverage and Use Cost-Sharing Provisions> CMS monitors states CHIP crowd-out prevention procedures and offers technical assistance, while states ask CHIP applicants about other sources of health care coverage, and use waiting periods and cost- sharing procedures, such as enrollment fees and premiums. Several state officials we interviewed told us that their crowd-out prevention procedures are effective; however, they could not speak to the effectiveness of any particular procedure and few studies have examined the issue. <3.1. 
CMS Tracks States CHIP Crowd-Out Procedures Primarily to Identify Inconsistencies in States Reporting and Provide Technical Assistance upon Request> CMS officials told us that they track the information states submit about their CHIP crowd-out prevention procedures as part of their annual report review process to identify any inconsistencies between the information contained in their state plans and the information submitted in states annual reports, among other reasons. When CMS officials identify any noticeable differences in the information reported by states from year-to- year in the annual reports such as the percentage of CHIP applicants with access to private insurance they told us they follow-up with the state to obtain additional information about these differences, and, if needed, advise states on ways they can prevent crowd-out. CMS officials also told us they provide technical assistance, when requested, to assist states in developing crowd-out prevention procedures. For example, CMS officials said they provided states with technical assistance after issuing regulations in 2013 on the use of waiting periods that also required states to update their state plan amendments. CMS officials said they have no plans to develop additional strategies for collecting states crowd-out information, because states have not reported crowd-out to be a concern, and there is no need to re-examine states oversight if prevalence as measured in research is likely low. <3.2. All States with Separate CHIP Programs Reported Implementing at Least One CHIP Crowd-Out Prevention Procedure, Such as Cost Sharing> All 42 states with separate CHIP programs reported to CMS that they had implemented at least one of the following six types of procedures to prevent crowd-out: (1) asking about other health insurance and denying CHIP coverage if other sources of health insurance are identified; (2) implementing cost sharing for CHIP coverage; (3) conducting database checks for other health insurance; (4) implementing a waiting period for CHIP coverage; (5) measuring crowd-out and taking steps if certain thresholds are exceeded; and (6) offering premium assistance for private health insurance. The majority of these states (36 of the 42 states with separate CHIP programs) implemented at least three crowd-out procedures. All 42 states with separate CHIP programs asked applicants about other insurance coverage on their CHIP applications to deny applicants CHIP coverage if private insurance coverage was found, and CMS officials told us that 35 of those states required CHIP enrollees to pay premiums or make other financial contributions to the cost of the coverage. (See table 3.) Among our six selected states with separate CHIP programs, there were differences in how some crowd-out procedures were implemented. For example, three states conducted database checks to see if applicants had other sources of health insurance; however, one state checked prior to enrollment, another checked at enrollment and during application renewal, and one state ran weekly checks. Among our six selected states with separate CHIP programs, none planned to change procedures to prevent potential crowd-out. Among the 42 states with separate CHIP programs, some crowd-out prevention procedures vary or have changed over time. 
For example, while many states use a private company to collect state and national health insurance coverage information to conduct database checks, another state developed a database that contains information on insurance coverage available through over 40,000 employers in the state. Additionally, prior to 2014, 36 states imposed waiting periods, during which applicants could not have health insurance for a specified time before CHIP enrollment, to prevent crowd-out. In 2017, 14 states used waiting periods. Prior to PPACA and the implementation of CMS regulations that limited waiting periods to 90 days, waiting periods could range from 1 to 12 months. After CMS updated its regulation, 21 states eliminated their waiting periods and five states shortened them. Among our four selected states with separate CHIP programs that shortened or eliminated their waiting periods, none of the state officials expressed concerns that this change contributed to CHIP crowd-out. Administering a waiting period may involve the state tracking or determining whether the applicant meets any of the state and federal waiting period exemptions, the number of months for the waiting period before the applicant can be enrolled in CHIP, and informing the federally facilitated exchange if an exemption to the waiting period applies to the applicant. As a result, some officials noted that reducing waiting periods eased their state s administrative burdens, as well as eliminated gaps in children s health insurance. Among the four selected states, officials from New York said they eliminated their waiting periods because, after undergoing the various administrative steps to verify each application and apply the waiting period, the majority of the CHIP applicants met at least one waiting period exemption. However, three of the selected states with separate CHIP programs maintained waiting periods, and state officials from Texas told us that few individuals met the waiting period exemptions. Some state officials told us they attributed waiting periods which require children to go uninsured for a period of time to gaps in health care, and their states eliminated the waiting period in an attempt to provide continuity in children s access to health care. Although not required by law, officials from two of our selected states with Medicaid expansion CHIP programs told us their states previously had approved 1115 demonstration waivers permitting their states to use a CHIP waiting period, but eliminated them in 2013 and 2014 to close gaps in children s health insurance coverage. Currently, these states use similar procedures as separate CHIP states to prevent crowd-out, according to state officials. Of our three selected states with Medicaid expansion CHIP programs, one state monitors CHIP enrollment trends; a second state requires its managed care organizations to check CHIP enrollees for other sources of insurance as part of their claim processing activities; and one state conducts database checks for other health insurance at the time of enrollment and re-enrollment. <3.3. The Effect of States Procedures to Prevent CHIP Crowd-Out is Unclear, as Relatively Few Studies Have Examined the Issue> The effect of some of the states procedures on preventing CHIP crowd- out is unclear and, according to selected state officials and stakeholders, some crowd-out prevention procedures may have unintended consequences. 
For example, state officials and stakeholders told us waiting periods result in coverage gaps, which, as one stakeholder noted, could be catastrophic for a family with a sick child who would not have coverage during the waiting period. Several CHIP officials we interviewed believed their procedures are effective in preventing crowd- out; however, they either had not studied the effectiveness of their procedures or could not speak to the effectiveness of any particular procedure. Relatively few of the studies we reviewed examined the effectiveness of state procedures for preventing crowd-out. Specifically, two studies looked at this issue. Both studies concluded that cost-sharing procedures, such as premiums, can reduce the potential for crowd-out among higher- income CHIP-eligible families. A 2014 study used CHIP-related data from 2003 and found that CHIP premiums discourage individuals with private health insurance from dropping their insurance to enroll in CHIP. The study compared health insurance outcomes across 19 states for children with incomes slightly above states CHIP income eligibility thresholds with children in families with incomes slightly below the thresholds. The results indicated that there is an association between CHIP premiums and private insurance coverage; that is, a $1 increase in the CHIP premium above the income cut-off is associated with a 2.2 percentage point higher probability of the child being privately insured for families within 15 percent of the upper income level, and a 1.7 percentage point higher probability for families within 25 percent of the upper income level. These findings suggest that private health insurance may be a preferable alternative for CHIP eligible families at higher income levels who face higher CHIP premiums. A 2013 study used survey data from 50 states and the District of Columbia from 2002 to 2009 to estimate the effect CHIP premium contributions have on enrollment in CHIP, private insurance, and rates of uninsurance among children in families with income eligibility levels of 200 to 400 percent of the FPL. The study found that if CHIP programs expand eligibility to those at higher income levels and charge those families a higher premium, the families may be more likely to choose private health insurance, nullifying the effects of CHIP expansion among higher income families. <4. Agency Comments> We provided a draft of this report to HHS for review and comment. The department provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of CMS, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or at yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made key contributions to this report are listed in appendix III. 
Appendix I: Crowd-Out and Trends in Children s Health Insurance and Employer Sponsored Health Insurance Crowd-out may occur when employers modify or decide not to offer health insurance to their employees or to their dependents because of Children s Health Insurance Program (CHIP) availability. For example, employers who are aware of CHIP may decide not to offer health insurance to employees due to concerns about the costs of providing insurance, especially for smaller sized firms, or as a result of changes in federal or state policies, such as requirements resulting from the Patient Protection and Affordable Care Act (PPACA). For example, PPACA required employers with a certain number of employees to offer their full-time employees a health insurance option meeting certain criteria, including affordability, or face tax penalties. Some researchers and policymakers expressed concern that this requirement may encourage employers to change how they offer insurance to employees, such as no longer offering family and dependent coverage, instead only offering health insurance to the employees, thereby causing employees with children to seek public insurance or insurance through health insurance exchanges. Other researchers and organizations point to PPACA increasing the availability of private health insurance offered by employers and through health insurance exchanges, particularly in areas and among populations where employer sponsored health insurance may not be as readily available. Crowd-out may also occur when employees drop or decide not to enroll in insurance offered by their employers and enroll their children in CHIP because of CHIP availability; however, as we have reported in the past, assessments of crowd-out should consider the affordability and availability of the employer sponsored insurance. For example, families with access to employer sponsored insurance may find CHIP more affordable or find CHIP benefits more comprehensive than employer sponsored insurance. Alternatively, they may find that CHIP provides better access to services specific to their child s health care needs. For example, an evaluation of CHIP published in 2014 found that CHIP enrollees had better access to dental benefits than children with private insurance, although they were less likely to have a regular source of medical care and nighttime or weekend access to a provider. As we have identified in prior work, assessments of the potential for crowd-out must take into account an understanding of the extent to which private health insurance is available and affordable to low-income families who qualify for CHIP. American Community Survey (ACS) data showed that for 2013 through 2017, the most prevalent source of insurance for children in the United States under the age of 19 was private health insurance available through a parent s employer or union. (See fig. 2.) Although private health insurance is the most prevalent source of insurance for children, there is substantial variation across states in coverage rates. (See fig. 3.) For example, in eight states, fewer than 40 percent of children were insured through an employer in 2017. In contrast, in Utah, more than 60 percent of families with children were insured by an employer in 2017. Medical Expenditure Panel Survey (MEPS) data show that the extent to which employers offered individuals insurance in 2013 through 2015 varied by family income. 
For example, MEPS Household Component data, which include information on whether individuals were offered insurance by their employers, show that over 90 percent of families with incomes greater than 400 percent of the federal poverty level (FPL) were offered insurance by their employers from 2013 through 2015. The percentage of families offered insurance by their employers ranged from about 35 percent for families with incomes less than or equal to 138 percent of the FPL to about 85 percent for families with incomes above 300 and less than 400 percent of the FPL. (See fig. 4.) An Agency for Healthcare Research and Quality (AHRQ) analysis of MEPS Insurance Component data, which include information on whether employers offered insurance to their employees and the cost of that insurance, shows that in 2017, 24.2 percent of small employers (less than 50 employees) with a predominately lower-wage workforce offered their employees health insurance compared with 57.6 percent for small employers with a higher-wage workforce. In contrast, in 2017, offer rates at larger employers (that is, employers with more than 50 employees) were 94 percent for those with predominately lower-wage employees and 98.7 percent for large employers with predominately higher-wage employees. With regard to affordability, the MEPS Insurance Component data show that average employee premium contributions for family coverage from 2013 through 2017 increased. Over this period, employees who work for employers with a predominantly lower-wage workforce (that is, employers that paid 50 percent or more of their workforce $12 or less per hour) contributed a larger amount and percentage of premiums to their employer-sponsored insurance than did employees who work for non-low-wage employers. (See fig. 5.) MEPS Insurance Component data also show that employees who work at establishments with a predominately lower-wage workforce enroll in insurance offered by their employers at a lower rate than employees of other establishments, though it is not known if this is due to affordability reasons. Finally, MEPS Insurance Component data show that the percentage of employees with deductibles and the amount of the deductibles have increased from 2004 to 2017. Between 2013 and 2017, average family deductibles increased about 36 percent, from $2,491 in 2013 to $3,396 in 2017. In addition, research published in 2018 on high-deductible health insurance plans showed both increasing enrollment in these plans and that larger employers (1,000 or more employees) contributed more toward health insurance premiums for these plans than smaller employers (less than 25 employees). For example, according to this study: From 2006 to 2016, there was a 35 percentage point increase (11.4 percent to 46.5 percent) in enrollees in high-deductible health plans, with enrollees from smaller employers more likely to be enrolled in these plans compared with enrollees from larger employers (56.4 percent of enrollees from small firms compared with 42 percent of enrollees from large firms). A lower percentage of enrollees from the smaller firms had a plan with an employer-funded account, which defrays health care costs, compared with enrollees from larger firms. For example, in 2016, only about one-third of enrollees in high-deductible health insurance plans from the smallest employers had an employer-funded account to help pay for medical expenses compared with 89.3 percent of enrollees from the largest employers.
High-deductible health insurance plan enrollees of the smallest employers were also more likely to not have the choice of an alternative plan type compared with enrollees from the largest employers.
Appendix II: Source of Health Insurance for Children under Age 19 by State in 2017
Although private health insurance is the most prevalent source of insurance for children, there is substantial variation across states in coverage rates. Figure 6 provides information on the percentage of children under age 19 insured through employer sponsored insurance, Medicaid, and the Children's Health Insurance Program, as well as those who were uninsured in 2017.
Appendix III: GAO Contact and Staff Acknowledgments <5. GAO Contact> Carolyn L. Yocom, (202) 512-7114 or yocomc@gao.gov. <6. Staff Acknowledgments> In addition to the contact named above, individuals making key contributions to this report include Shannon Legeer (Assistant Director), Toni Harrison (Analyst-in-Charge), Mollie Lemon, and Courtney Liesener. Also contributing were Alison Binkowski, George Bogart, Jill Center, Leia Dickerson, Giselle Hicks, Drew Long, Kristeen McLain, Yesook Merrill, Jasleen Modi, Vikki Porter, Lisa Rogers, and Merrile Sing.
Why GAO Did This Study
CHIP is a public insurance program established in 1997 that finances health care for over 9 million low-income children whose household incomes do not qualify them for Medicaid. States have flexibility in structuring their CHIP programs under broad federal requirements, and their income eligibility limits vary. Policymakers have had concerns that some states' inclusion of children from families with higher income levels could result in some families substituting CHIP for private insurance (i.e., crowd-out). Crowd-out may occur when, because of CHIP availability, (1) employers make decisions about offering health insurance; or (2) employees make decisions about enrolling in employer-sponsored health insurance.
GAO was asked to examine CHIP crowd-out. This report describes (1) the information on potential indicators of crowd-out reported by states and estimates of crowd-out; and (2) the procedures CMS and states use to address potential crowd-out.
GAO reviewed federal laws and guidance and state CHIP documentation, including their 2017 annual reports (the latest available at the time of GAO's review); conducted a literature review of studies published between 2013 and 2018; and interviewed CMS officials, stakeholders from national health policy organizations, and researchers. GAO also interviewed a non-generalizable selection of officials from nine states chosen to obtain variation in CHIP programs, such as income eligibility levels and geography.
HHS provided technical comments on a draft of this report, which GAO incorporated as appropriate.
What GAO Found
Limited information exists about Children's Health Insurance Program (CHIP) crowd-out—that is, substituting CHIP for private health insurance. The Centers for Medicare & Medicaid Services (CMS), within the Department of Health and Human Services (HHS), asked the 42 states that have separate CHIP programs to report on two crowd-out indicators for the 2017 annual reports: (1) the percentage of individuals who are enrolled in CHIP that have access to private health insurance and (2) the percentage of CHIP applicants who cannot be enrolled because they have private health insurance. The 2017 reports showed that:
4 states reported 0.5 percent to 7 percent of CHIP applicants had access to private health insurance; and
21 states reported denying CHIP enrollment to 0 percent to 18 percent of applicants because they had private insurance.
Not all of these 42 states reported on these indicators and GAO found that those that do may calculate them differently. CMS officials acknowledged that not all states report on these indicators; however, they noted that states operating separate CHIPs have other processes in place to prevent children with other health insurance from enrolling in CHIP. Further, some states may have other processes for directly measuring CHIP crowd-out. GAO also identified three studies published between 2013 and 2018 that estimated CHIP crowd-out. However, these studies used different methods to calculate crowd-out, and as a result produced varied estimates. For example, one study attributed a portion of increased enrollment in CHIP and other public insurance to crowd-out, while another study found no evidence of crowd-out.
According to CMS's 2017 annual reports and other information, the 42 states with separate CHIP programs reported implementing at least one of six types of crowd-out prevention procedures.
Source: GAO analysis of information from the Centers for Medicare & Medicaid Services, state Children's Health Insurance Programs (CHIP), and a Kaiser Family Foundation and Georgetown Center for Children and Families survey on Medicaid and CHIP programs.
<1. Background> <1.1. Hurricanes Irma and Maria> In September 2017, two Category 5 hurricanes struck the USVI, causing catastrophic damage across the entire territory and neighboring Caribbean islands. On September 6, 2017, Hurricane Irma struck St. Thomas and St. John and on September 19, 2017, Hurricane Maria struck St. Croix (see fig. 1). The storms severely damaged the territory's critical infrastructure, devastating more than 90 percent of aboveground power lines and shutting down electricity and telecommunications for months. Further, 52 percent of the territory's housing units were damaged, ports and airports were closed for weeks, and hundreds of thousands of tons of debris were generated, often blocking roads and making transportation hazardous. In addition, according to a September 2018 report from the USVI Hurricane Recovery and Resilience Task Force, the territory's economic activity, especially tourism, was severely reduced in the months following the storms, leading to job losses and a total estimated economic impact of $1.54 billion. In response to the request of the Governor of the USVI, the President declared a major disaster the day after each hurricane struck the territory. Major disaster declarations can trigger a variety of federal response and recovery programs for government and nongovernmental entities and households and individuals, including assistance through the Public Assistance program. Under the National Response Framework and National Disaster Recovery Framework, DHS is the federal department with primary responsibility for coordinating disaster response and recovery, and within DHS, FEMA has lead responsibility. The Administrator of FEMA serves as the principal adviser to the President and the Secretary of Homeland Security regarding emergency management. <1.2. FEMA's Public Assistance Program> FEMA's Public Assistance program provides funding to state, territorial, local, and tribal governments as well as certain types of private nonprofit organizations to assist with responding to and recovering from major disasters or emergencies. As shown in figure 2, Public Assistance program funds are categorized broadly as emergency work or permanent work. Within these broad categories are separate subcategories. In addition to the emergency work and permanent work categories, the program includes category Z, which represents indirect costs, administrative expenses, and other expenses a recipient or subrecipient incurs in administering and managing the Public Assistance program that are not directly chargeable to a specific project. FEMA's Public Assistance program also provides funding for cost-effective hazard mitigation measures to reduce or eliminate the long-term risk to people and property from future natural and man-made disasters and their effects. Specifically, FEMA provides funding for hazard mitigation measures in conjunction with the repair of disaster-damaged facilities to enhance their resilience during future disasters. For example, a community that had a fire station damaged by a disaster could use Public Assistance funding to repair the facility and incorporate additional measures such as installing hurricane shutters over the windows to mitigate the potential for future damage.
Once the President has declared a disaster, FEMA, the state or territorial government (the recipient), and local or territorial entities (the subrecipient) work together to develop damage assessments and formulate project worksheets for eligible projects. Project worksheets detail the scope of work and estimated cost for repairing or replacing disaster-damaged infrastructure as well as any hazard mitigation measures that may help to increase the resilience of this infrastructure during future disasters. After a project has completed FEMA s review process and is approved, FEMA obligates funding for the project by placing money into an account where the recipient has the authority to draw down or withdraw funding to pay the subrecipient for eligible work upon completion (see fig. 3). In addition, a state or territorial governor may designate a governor s authorized representative (GAR) to oversee all aspects of disaster assistance, including Public Assistance funding. Specifically, the GAR is responsible for ensuring compliance with program requirements by providing oversight into how goods and services are procured for projects, such as construction materials or modular school units. The GAR also confirms that subrecipients submit complete documentation demonstrating that all work completed is in accordance with a project s approved scope of work and Public Assistance program requirements. The GAR then approves the paperwork and the recipient can draw down funding from the account holding the obligations to reimburse subrecipients for completed work. When a project has been completed, FEMA conducts a close-out process to certify that all work has been completed and reconciles the actual cost incurred. If the actual cost of the completed work is greater than the amount of money FEMA obligated for the project, FEMA will reimburse the subrecipient for these additional costs. <1.3. FEMA s Public Assistance Alternative Procedures Program> The Sandy Recovery Improvement Act of 2013 authorized the use of alternative procedures in administering the Public Assistance program, thereby providing new flexibilities to FEMA, states, territories, and local governments for debris removal, infrastructure repair, and rebuilding projects using funds from this program. Unlike in the standard Public Assistance program where FEMA will fund the actual cost of a project, the Public Assistance alternative procedures allow awards for permanent work projects to be made on the basis of fixed-cost estimates to provide financial incentives for the timely and cost-effective completion of work. Under these procedures, if the actual cost of the project exceeds the fixed-cost estimate agreed upon by FEMA and the recipient, the recipient or subrecipient is responsible for the additional costs at the time of the close-out process. However, if the actual cost of completing eligible work for a project is below the estimate, the recipient may use the remaining funds for additional cost-effective hazard mitigation measures to increase the resilience of public infrastructure. In addition, these funds may also be used for activities that improve the recipient s or subrecipient s future Public Assistance operations or planning. <2. FEMA Had Obligated More Than $1.4 Billion and the USVI Had Expended About $587 Million in Public Assistance Funding as of October 1, 2018> As of October 1, 2018, FEMA had obligated more than $1.4 billion through the standard Public Assistance program for 475 projects across the USVI. 
As shown in figure 4, FEMA obligated funding for both emergency work and permanent work projects. As of October 1, 2018, of the more than $1.4 billion FEMA obligated, the USVI had expended approximately $586.9 million about 41 percent of total Public Assistance program obligations to the USVI to reimburse subrecipients for completed work. Of this $586.9 million, the USVI had expended about $532.8 million (91 percent) for emergency work projects in categories A and B and $49.1 million (8 percent) for permanent work projects in categories C through G. The majority of FEMA s obligations and the funding the USVI expended as of October 1, 2018, are for emergency work because these projects began soon after the disasters struck and focused on debris removal and providing assistance to address immediate threats to life and property. In contrast, permanent work projects take time to identify, develop, and ultimately complete as they represent the longer-term repair and restoration of public infrastructure. Emergency work. Of the more than $1.4 billion FEMA had obligated as of October 1, 2018, about $873.8 million (60 percent) was obligated for 322 emergency work projects in Public Assistance categories A and B. Category A: Debris Removal. FEMA obligated about $94.0 million for 71 projects focused on debris removal activities across the territory. For example, FEMA obligated $45.0 million to the USVI Department of Public Works for territorywide debris removal efforts and $39.1 million to the USVI Water and Power Authority for these activities in St. Croix (see fig. 5). Of the $94.0 million FEMA obligated for debris removal, the USVI had expended about $54.6 million (58 percent) as of October 1, 2018. Category B: Emergency Protective Measures. FEMA obligated about $780 million for 251 projects focused on emergency measures. For example, FEMA obligated about $187 million for the Sheltering and Temporary Essential Power program, which is intended to provide essential repairs or restore power to private residences to allow affected individuals to return or remain in their homes, thereby reducing the demand for other shelter options. In addition, FEMA obligated approximately $101 million for the purchase and installation of modular units to be used as temporary classrooms and other facilities while permanent school buildings are repaired or replaced (see fig. 6). Of the $780 million FEMA obligated for emergency protective measures, the USVI had expended about $478 million (61 percent) as of October 1, 2018. Permanent work. Of the more than $1.4 billion in Public Assistance funding FEMA had obligated as of October 1, 2018, about $516.3 million (36 percent) was obligated for 153 permanent work projects across categories C through G. These permanent work projects include about $349.4 million for cost-effective hazard mitigation measures aimed at reducing the future risk of disaster-damaged facilities in conjunction with their repair. Further, of the $516.3 million FEMA obligated for permanent work in the USVI, approximately $500.4 million or 97 percent of all permanent work obligations was obligated to the USVI Water and Power Authority for the permanent repair of electrical distribution systems and other utilities across the territory. Category C: Roads and Bridges. FEMA obligated about $5.2 million for 35 projects focused on repairing roads and bridges in the territory, 18 of which included hazard mitigation measures totaling about $1.5 million. 
For example, FEMA obligated about $410,000 for one project to repair a road on St. Thomas damaged by floodwaters. This project included approximately $227,000 for hazard mitigation measures, such as replacing the damaged road surface with reinforced concrete and building a retaining wall. As of October 1, 2018, the USVI had not expended funding in this category. Category D: Water Control Facilities. As of October 1, 2018, FEMA did not have any projects in this category. According to FEMA officials, the USVI does not have water control infrastructure that would fall under category D, such as dams, levees, or berms. Category E: Buildings and Equipment. FEMA obligated $6.0 million for 77 projects focused on repairing and rebuilding damaged buildings and equipment, 16 of which included hazard mitigation measures totaling about $1.8 million. For example, FEMA obligated about $1.5 million to repair damage to the airport terminal building in St. Thomas a project where hazard mitigation measures comprised 87 percent of the project s total cost (see fig. 7). These measures include replacing the terminal s roof with materials designed to withstand higher wind speeds to increase the building s resilience during future storms. Of the $6.0 million FEMA obligated for category E, the USVI had expended about $148,000 (2.5 percent) as of October 1, 2018. Category F: Utilities. Of the $516.3 million FEMA obligated for permanent work projects, $502.2 million (97 percent) was obligated for 15 projects focused on repairing utilities, 7 of which included hazard mitigation measures totaling about $346.0 million. For example, FEMA obligated $286.1 million and $50.2 million for permanent electrical distribution system repairs in St. Croix and St. John, respectively. This includes replacing damaged wooden utility poles with more resilient composite fiberglass poles that can withstand 200 mile per hour winds as well as power transmission lines and transformers (see fig. 8). Of the $502.2 million FEMA obligated for category F, the USVI had expended about $49.0 million (10 percent) as of October 1, 2018. Category G: Parks, Recreational, and Other Facilities. FEMA obligated about $2.9 million for 26 projects focused on repairing parks, playgrounds, and other recreational facilities, 1 of which included hazard mitigation measures. Specifically, FEMA obligated about $453,000 to repair the Lindbergh Park and Water Playground in St. Thomas a project that included about $18,000 for hazard mitigation measures. As of October 1, 2018, the USVI had not expended funding in this category. Future projects. In addition to the more than $1.4 billion in Public Assistance funding FEMA had obligated as of October 1, 2018, FEMA expected to review an additional 900 future projects for eligibility representing an estimated $779.4 million in potential funding. Of this estimated total amount, FEMA anticipates $128.5 million (16 percent) in costs for future emergency work projects and $650.9 million (84 percent) in costs for future permanent work projects. <3. FEMA and the USVI Are Transitioning From the Standard Public Assistance Program to the Public Assistance Alternative Procedures Program> In July 2018, FEMA approved a June 2018 request from the Governor of the USVI to transition to using the Public Assistance alternative procedures program for permanent work in the territory. The alternative procedures provide new flexibilities to FEMA and the USVI that are not available through the standard Public Assistance program. 
In September 2018, FEMA issued the Public Assistance Alternative Procedures Permanent Work Guide for the USVI to provide guidance on the implementation of the program in the territory. FEMA and USVI officials stated that a section of the Bipartisan Budget Act of 2018 and the flexibilities provided by the program itself influenced the USVI s decision to transition to using the alternative procedures. First, Section 20601 of the Bipartisan Budget Act of 2018 authorized FEMA, when using the Public Assistance alternative procedures, to provide assistance to fund the replacement or restoration of disaster- damaged infrastructure that provide critical services to industry standards without regard to pre-disaster condition. FEMA and USVI officials told us that the territory therefore has a valuable opportunity to use the alternative procedures to repair and rebuild its critical services infrastructure including the USVI s education system, electrical grid, and emergency medical care system, among others so it is in a better condition than it was prior to the 2017 hurricanes. Second, USVI officials stated that under the standard Public Assistance program currently being used in the USVI, the territorial government is responsible for providing the initial funding to reimburse subrecipients for completed work prior to drawing down funds from the account holding the FEMA- obligated amounts of money for each project. They explained that because of the financial liquidity challenges facing the territory, this process was problematic and required USVI officials to prioritize projects based on the availability of the territory s funding. USVI officials stated that the Public Assistance alternative procedures will help to address this challenge by providing the territory with more flexibility regarding when and how to fund projects. For example, in certain cases, the USVI is able to consolidate permanent work projects approved under the alternative procedures and share obligated funding across these projects. In addition, the USVI is able to use any excess funds for cost-effective hazard mitigation measures or for activities that improve the recipient s or subrecipient s future Public Assistance operations or planning. As of November 2018, FEMA and USVI officials stated they were working to identify and develop permanent work projects using the Public Assistance alternative procedures and discussing the process for developing the fixed-cost estimate for each project. Specifically, unlike in the standard Public Assistance program where FEMA will fund the actual cost of a project, the Public Assistance alternative procedures use a fixed-cost estimate which is agreed to prior to obligation and the USVI will be financially responsible for any actual costs that exceed this amount. Given the USVI s difficult fiscal situation, FEMA and USVI officials stated that ensuring these fixed-cost estimates are as accurate as possible will be critical. However, FEMA officials also noted that if FEMA and the territory cannot come to an agreement on a fixed-cost estimate for any given project, the USVI does have the option to move forward through the standard Public Assistance program. According to FEMA s Public Assistance Alternative Procedures Permanent Work Guide for the USVI, all cost estimates for projects using these procedures must be finalized by March 2020. 
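To make the cost reconciliation that distinguishes the alternative procedures from the standard program easier to follow, the sketch below encodes the overrun and underrun rules described above: the recipient bears actual costs above the agreed fixed-cost estimate, while funds left over when actual costs come in below the estimate may be put toward additional cost-effective hazard mitigation or toward improving future Public Assistance operations or planning. This is a minimal illustration only; the function name and the dollar figures in the example are hypothetical and are not drawn from FEMA guidance.

```python
# Illustrative sketch of fixed-cost reconciliation under the Public Assistance
# alternative procedures, as described in this report. Names and figures are hypothetical.

def reconcile_alternative_procedures_project(fixed_cost_estimate, actual_cost):
    """Return how costs are settled at close-out under the alternative procedures."""
    if actual_cost > fixed_cost_estimate:
        # Actual costs above the agreed estimate are the recipient's or subrecipient's
        # responsibility at close-out.
        return {
            "federal_obligation": fixed_cost_estimate,
            "recipient_responsibility": actual_cost - fixed_cost_estimate,
            "excess_funds_available": 0,
        }
    # Funds left over when eligible work costs less than the estimate may be used for
    # additional cost-effective hazard mitigation or for activities that improve future
    # Public Assistance operations or planning.
    return {
        "federal_obligation": fixed_cost_estimate,
        "recipient_responsibility": 0,
        "excess_funds_available": fixed_cost_estimate - actual_cost,
    }

# Hypothetical example: a $10.0 million fixed-cost estimate with $9.2 million in actual
# eligible costs leaves $0.8 million the recipient may redirect to eligible activities.
print(reconcile_alternative_procedures_project(10_000_000, 9_200_000))
```

Framing the rules this way also shows why accurate estimates matter to both parties before obligation: the recipient absorbs any overrun but keeps the flexibility that comes with an underrun.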
We will continue to monitor the USVI's plans for using the alternative procedures as part of our broader work assessing disaster recovery efforts in the USVI and will issue a follow-on report later this year. <4. Agency Comments> We provided a draft of this report to DHS and the USVI government. We requested comments from DHS and the USVI government, but none were provided. DHS did provide technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, the Administrator of FEMA, the USVI government, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you and your staff have any questions, please contact me at (202) 512-8777 or curriec@gao.gov. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: GAO Contact and Staff Acknowledgments <5. GAO Contact:> Chris Currie, 202-512-8777 or curriec@gao.gov. <6. Staff Acknowledgments:> In addition to the contact named above, Joel Aldape (Assistant Director), Bryan Bourgault, Leanna Diggs, Aaron Gluck, Eric Hauswirth, Brian Lipman, Amanda Miller, Heidi Nielson, and Kevin Reeves made key contributions to this report.
Why GAO Did This Study
In September 2017, two major hurricanes—Irma and Maria—struck the USVI, causing billions of dollars in damage to its infrastructure, housing, and economy. FEMA—a component of the Department of Homeland Security—is the lead federal agency responsible for assisting the USVI as it recovers from these natural disasters. Among other responsibilities, FEMA administers the Public Assistance program in partnership with the USVI territorial government, providing the USVI grant funding for response and recovery activities, including debris removal efforts, life-saving emergency protective measures, and the repair, replacement, or restoration of public infrastructure.
GAO was asked to review the federal government's response and recovery efforts related to the 2017 hurricanes. This report describes (1) the status of FEMA's Public Assistance program funding provided to the USVI in response to the 2017 hurricanes as of October 1, 2018, and (2) the USVI's transition to implementing the Public Assistance alternative procedures in the territory. GAO reviewed program documents and data on obligations and expenditures as of October 1, 2018, and interviewed officials from FEMA and the USVI regarding the Public Assistance program specifically and disaster recovery efforts more generally. GAO also conducted site visits to the USVI islands of St. Croix, St. Thomas, and St. John.
GAO is not making any recommendations in this report, but will continue to monitor the progress of the USVI's recovery as part of its ongoing work.
What GAO Found
The Federal Emergency Management Agency (FEMA) obligated more than $1.4 billion in grant funding for Public Assistance projects in the U.S. Virgin Islands (USVI) as of October 1, 2018, in response to the 2017 hurricanes. FEMA obligated about $873.8 million for emergency work—debris removal activities and emergency measures to lessen the immediate threat to life, public health, and safety—and about $516.3 million for permanent work—including the repair or replacement of public infrastructure such as roads, electrical utilities, and schools. For example, FEMA obligated about $101 million for the purchase and installation of modular units to be used as temporary classrooms and other facilities while permanent school buildings are repaired or replaced. FEMA's obligations for permanent work also included funding for hazard mitigation measures to reduce the risk of damage during future storms—for example, by replacing wooden utility poles with composite fiberglass poles (see figure).
FEMA and the USVI are transitioning from using the standard Public Assistance program in the territory to using the Public Assistance alternative procedures program. Unlike in the standard Public Assistance program where FEMA will fund the actual cost of a project, the alternative procedures allow awards to be made on the basis of fixed-cost estimates to provide financial incentives for the timely and cost-effective completion of permanent work projects. FEMA and USVI officials stated that the alternative procedures will give the USVI more flexibility in determining when and how to fund projects and provide an opportunity to repair and rebuild the USVI's critical services infrastructure—such as its education system and electrical grid—so it meets industry standards without regard to pre-disaster condition. As of November 2018, FEMA and USVI officials were discussing the process for developing projects under the Public Assistance alternative procedures. GAO will continue to monitor the USVI's plans for using the alternative procedures as part of its broader review assessing the USVI's disaster recovery efforts and will issue a follow-on report later this year.
<1. Background> <1.1. DOD Roles and Responsibilities Related to Child Abuse> There are a number of organizations within DOD with responsibility for preventing, responding to, and resolving incidents of child abuse, including child-on-child abuse, as described below. Under Secretary of Defense for Personnel and Readiness. The Under Secretary of Defense for Personnel and Readiness collaborates with DOD component heads to establish programs and guidance to implement the FAP, among other things; it also programs, budgets, and allocates funds and other resources for the FAP. The Assistant Secretary of Defense for Manpower and Reserve Affairs, under the authority of the Under Secretary of Defense for Personnel and Readiness, provides policy, direction, and oversight to the FAP. The Assistant Secretary of Defense for Manpower and Reserve Affairs, through the Deputy Assistant Secretary of Defense for Military Community and Family Policy, is also responsible for collaborating with service Secretaries to monitor compliance with FAP standards. The Defense State Liaison Office, located within the Office of the Deputy Assistant Secretary of Defense for Military Community and Family Policy, is responsible for assisting with the passage of state bills that affect key issues within the department, such as the reporting of child abuse. DOD Family Advocacy Program. DOD FAP serves as the policy proponent for, and a key element of, DOD's coordinated community response system to prevent and respond to reports of child abuse, domestic abuse, and problematic sexual behavior in children and youth in military families. The FAP, among other things, provides trauma-informed assessment, rehabilitation, and treatment to persons who are involved in alleged incidents of child abuse, domestic abuse, and problematic sexual behavior in children and youth who are eligible to receive treatment in a military treatment facility. To execute these responsibilities, DOD funds over 2,000 positions in the department to deliver FAP services, including credentialed and licensed clinical providers. The department prescribes uniform standards for all service FAPs through DOD Manual 6400.01, Volume 1, FAP Standards. DOD uses these standards to promote public awareness; aid in prevention, early identification, reporting, and coordinated, comprehensive intervention and assessment; and to support victims of child abuse and domestic abuse. DOD revised these standards in July 2019 to include the same support and services for children exhibiting or affected by problematic sexual behavior. Military Service Family Advocacy Programs. Each military department Secretary is responsible for developing service-wide FAP policy that addresses any unique requirements for their respective installation FAPs. The department Secretaries are also responsible for requiring that all installation personnel receive the appropriate training to implement the FAP standards. In addition, each service has a FAP headquarters entity that develops and issues implementing guidance for the installation FAPs for which they provide oversight. At the installations, commanders are to establish an installation Family Advocacy Committee with a chairperson that serves as the policy implementing, coordinating, and advisory body to address child abuse and domestic abuse at the installation. Military Criminal Investigative Organizations and Military Police.
The Department of Defense Inspector General establishes policy, provides guidance, and monitors and evaluates program performance for all DOD activities relating to criminal investigations and military law enforcement programs, including coordination with DOJ. Military law enforcement organizations include both military police and military criminal investigative organizations. Each military department has established a military criminal investigative organization that may initiate investigations on incidents with a DOD nexus, such as if a crime occurred on a military installation or involved military personnel or dependents. The military departments military criminal investigative organizations are the Army Criminal Investigation Command, Naval Criminal Investigative Service, and Air Force Office of Special Investigations. Each military criminal investigative organization provides an element of DOD s special victim investigation and prosecution capability. DOD defines special victims as adults or children who are sexually assaulted or suffer aggravated assault with grievous bodily harm. A special victim investigation and prosecution designation allows the military criminal investigative organizations to assign specially trained investigators who work collaboratively with other relevant trained personnel, such as Judge Advocates and FAP managers, to provide services to the victim. While military criminal investigative organizations can investigate any crime with a DOD nexus within their investigative purview officials from each organization stated that they primarily investigate serious felony-level offenses and any type of sexual offense. Military police that provide services at military installations primarily serve as first responders to incidents and will notify a military criminal investigative organization for more serious incidents requiring an investigation, according to service officials. DOD Office of the General Counsel and Service Judge Advocates. The DOD Office of General Counsel provides advice to the Secretary of Defense regarding all legal matters and services performed within, or involving, DOD. The DOD Office of General Counsel also provides for the coordination of significant legal issues, including litigation involving DOD and other matters before DOJ. Each military department also has a Judge Advocate General s Corps that establishes legal offices (Offices of the Staff Judge Advocate) which, among other things, serve as prosecutors and defense counsel at courts-martial; provide legal assistance to eligible personnel on personal, civil, and legal matters; advise commanders on military justice and disciplinary matters; and provide legal advice to military investigative agencies. In addition, any person identified as the victim of an offense under the Uniform Code of Military Justice (or in violation of the law of another jurisdiction if any portion of the investigation is conducted primarily by the DOD components) is to be notified of their rights under DOD s Victim and Witness Assistance Program, informed about the military justice process, and provided other services to support the victim or witness and their family. DOD Education Activity. DODEA operates as a DOD field activity under the Office of the Under Secretary of Defense for Personnel and Readiness. It is a federally-operated school system that is responsible for planning, directing, and managing prekindergarten through 12th grade educational programs for DOD. 
All DODEA personnel are designated as mandatory reporters of child abuse and are required to participate in the early identification of child abuse and the protection of children, including the prompt reporting of alleged child abuse or any information that gives reason to suspect child abuse. <1.2. DOD Child Abuse Prevention Efforts> FAP is responsible for several child abuse prevention programs across the services. For example, the New Parent Support Program offers intensive home visiting services on a voluntary basis to expectant parents and parents with young children. Officials target the program toward families who display some indicators of being at risk for child abuse or who have been assessed and determined as at risk for child abuse. All FAP personnel are mandated reporters to state child welfare service agencies for all allegations of child abuse. In addition, the service FAPs, at every military installation where families are located, work with the other entities within the coordinated community response, including civilian social services agencies and law enforcement, to provide comprehensive prevention and response to maltreatment. According to service FAP officials, while each service FAP has a domestic abuse victim advocate program that serves domestic abuse victims as well as non-offending parents in child abuse incidents, specific prevention efforts vary across installations and services. For example, the Air Force FAP is taking steps to track the effectiveness of FAP treatment programs to strengthen prevention efforts. Through the Navy FAP s victim advocate program, non-offending parents are connected with resources from initial referral to case closure or until the non-offending parent no longer desires services that include potential prevention techniques, such as establishing a strong support system. The Marine Corps initiated evaluation of prevention programs and uses evidence-informed curricula to provide parenting education and support, according to Marine Corps officials. The Army has begun to operationalize combined parent-child cognitive behavior therapy to address the needs of children and families at risk for child physical abuse through child interventions, parent strategies to address child trauma, and family interventions. At one Army installation, a FAP official described a puppet show aimed at teaching children about appropriate and inappropriate behaviors as part of prevention efforts related to problematic sexual behavior in children and youth. Other DOD organizations also have roles related to prevention. For example, child development centers located on installations have a number of child abuse prevention measures, including visual access throughout activity rooms used for care, closed circuit television, identification checks and badges for all visitors, and a system to indicate which staff members are cleared to be alone with children, such as a system of colored smocks. In addition, all personnel on military installations who work with children, including those at DODEA schools, child development centers, and child and youth centers, must pass a background check as a condition of employment, among other things. <1.3. Child Abuse Incident Determination Process> Each military installation with a FAP has an Incident Determination Committee (IDC) that reviews reported incidents of child abuse and domestic abuse to determine whether they meet DOD s criteria for abuse. 
Per DOD guidance, every reported incident of abuse or neglect must be presented to the IDC unless there is no possibility that the incident could meet any of the criteria for abuse or neglect. Physical abuse, emotional abuse, and neglect each have two primary associated criteria: (a) an act or failure to act, and (b) physical injury or harm, or the reasonable potential for physical injury or harm; psychological harm, or the reasonable potential for psychological harm; or stress-related somatic symptoms resulting from such act or failure to act. Any act of child sexual abuse that is found to have occurred under part (a) is automatically considered to have had a significant impact on the child, which is the criterion for part (b); therefore, the IDC only considers part (a) for incidents of child sexual abuse, and if the IDC determines the act occurred, then the incident is found to have met criteria. Voting members of the IDC include: the deputy to the installation commander (Chair); the senior noncommissioned officer advisor to the installation commander; representatives from the servicemember s command, the Staff Judge Advocate s office, and military police; and the FAP manager or FAP supervisor of clinical services. According to DOD policy, the IDC may request that additional personnel, such as medical personnel and military criminal investigative organizations, attend the IDC when necessary to provide input on incidents and to answer any questions about the results of a medical examination or an investigation. IDC members review what is known about the incident, and then the voting members vote to determine if an incident meets each of DOD s criteria for abuse. The final incident determination is made by a simple majority vote, and the IDC Chair serves as the tiebreaker in the event of a tie. The IDC s decision is communicated to the servicemember via the servicemember s command. IDC determinations may be reconsidered. The appeal request and response processes vary by service. In August 2016, DOD issued guidance standardizing the IDC process across the services. According to DOD officials, prior to this, each service had a similar but distinct process for determining whether abuse occurred. According to a DOD report, the IDC is to be a clinical, not a disciplinary, process. The IDC is separate and distinct from any law enforcement or military criminal investigative organization process. Each incident that is presented to the IDC is also discussed at a clinical case staff meeting, which is made up of personnel from the FAP, among others. During the clinical case staff meeting which can occur before or after the IDC makes its determination, according to DOD officials attendees generate clinical recommendations for support services and treatment for victims and offenders of child abuse who are eligible for treatment at a military medical treatment facility, and ongoing coordinated case management. DOD FAP officials stated that treatment is not dependent on an IDC s determination, meaning that the FAP may still provide support services to the family even if the IDC finds that a reported incident does not meet DOD s criteria for abuse. <1.4. 
DOJ Roles and Responsibilities in Addressing DOD-Related Incidents of Child Abuse> The Executive Office for United States Attorneys provides general executive assistance and supervision to the Offices of the United States Attorneys, including evaluating their performance, making appropriate reports and inspections, and taking corrective action when needed. The Executive Office for United States Attorneys also serves as a liaison between DOJ and the 93 United States Attorneys located across the 50 states, the District of Columbia, and some U.S. territories. United States Attorneys serve as the nation s principal litigators and work under the direction of the Attorney General to prosecute crimes, including some crimes that occur on some military installations. When cases from military installations are referred to a United States Attorney s office for prosecution, they can be accepted, referred, or declined. The case can be declined for prosecution for several reasons: (1) it may not constitute a federal offense, (2) there is insufficient evidence to obtain a conviction, (3) prosecution would not serve a substantial federal interest, (4) the individual may be prosecuted in another jurisdiction, or (5) there is another adequate noncriminal alternative to prosecution. DOJ s Criminal Division comprises multiple sections, including the Child Exploitation and Obscenity Section and the Human Rights and Special Prosecutions Section, both of which have responsibility for resolving crimes occurring on overseas military installations. The mission of the Child Exploitation and Obscenity Section is to protect child welfare and communities by enforcing federal criminal statutes relating to the exploitation of children and obscenity. The Human Rights and Special Prosecutions Section primarily investigates and prosecutes cases against human rights violators and other international criminals. The Office of Juvenile Justice and Delinquency Prevention within DOJ s Office of Justice Programs provides national leadership, coordination, and resources to prevent and respond to juvenile delinquency and victimization. The Office supports the efforts of states, tribes, and communities to develop and implement effective and equitable juvenile justice systems that enhance public safety, ensure youth are held appropriately accountable to both crime victims and communities, and empower youth to live productive, law-abiding lives. <1.5. Community Partner Roles and Responsibilities> In addition to DOD and DOJ, there are also community partners that assist in responding to and resolving incidents of child abuse, including child-on-child abuse. Depending on the military installation, there may be local memorandums of agreement or understanding between the installation and community partners, such as CACs, child welfare agencies, and civilian law enforcement that help guide the response to and reporting of these incidents. The National Children s Alliance and Children s Advocacy Centers. The National Children s Alliance is the national association and accrediting body for a network of approximately 900 CACs with locations in all 50 states and the District of Columbia. CACs provide a child-focused environment to conduct child forensic interviews and medical exams, which are then reviewed by a multi-disciplinary team that includes medical personnel, law enforcement, mental health personnel, legal personnel, victim advocates, and state child welfare agencies. 
The purpose of the multi-disciplinary team is to determine how to best support the child, such as through therapy, courtroom preparation, and victim advocacy. State and local child welfare agencies and civilian law enforcement. Each state or locality has a public child welfare agency that is responsible for receiving and investigating reports of child abuse, as well as assessing the needs of children and their families. This could include removing a child from an abusive home or providing support services to families in need. These agencies are governed by state laws that define child protection roles and processes. The administrative framework for child welfare services and programs vary by state, but all are responsible for compliance with state and applicable federal requirements. For example, states that accept federal funding under the Child Abuse Prevention and Treatment Act must meet the statutory requirements of the Act. Civilian law enforcement organizations are also key to ensuring the welfare of children. In general, civilian law enforcement organizations act as first responders to incidents and may provide a variety of services from reporting the abuse to the appropriate child welfare agency to conducting an investigation of the incident. <1.6. Military Installation Jurisdictions and the Adjudication of Criminal Offenses> As of 2018, DOD occupied varying legislative jurisdictions throughout the 26.9 million acres of land at 4,775 sites worldwide for which it is responsible. Military installations may consist of one or more sites. In the United States, military installations have one of four types of legislative jurisdiction or, depending on the installation, multiple types of jurisdiction that, among other things, helps determine the proper adjudication venue for any criminal offenses committed on the property of the installation. The four types of jurisdiction are described below. Exclusive federal jurisdiction gives the federal government sole authority to adjudicate criminal misconduct. Exclusive federal jurisdiction exists when the federal government elected to reserve authority at the time the real property was granted to the state, or when the state transferred real property to the federal government and failed to reserve jurisdictional authority as part of the transfer. Concurrent jurisdiction applies when both the state and the federal governments retain all authority to adjudicate criminal misconduct. In the event of a conflict, the federal government prevails under the Supremacy Clause of the Constitution. Partial jurisdiction applies when both the state and the federal government have some legislative authority, but neither one has absolute power. The sharing of authority is not exclusive to adjudication of criminal misconduct and federal supremacy applies in the event of a conflict. Proprietary jurisdiction applies to instances where the federal government has virtually no legislative authority. The only federal laws that apply are those that do not rely upon federal jurisdiction, such as espionage, bank robbery, tax fraud, and counterfeiting; the federal government maintains immunity and supremacy for inherently governmental functions. An installation commander can exclude civilians from the area pursuant to his or her inherent authority. The installation s jurisdiction as well as the status of the alleged offender (civilian or servicemember) determines which venue will adjudicate the incident. 
For example, if a servicemember commits a crime in exclusive federal jurisdiction, the adjudication would likely fall under the Uniform Code of Military Justice. If a civilian commits a crime in exclusive federal jurisdiction, he or she may be prosecuted under federal law through the appropriate United States Attorney s Office. However, if a civilian commits a crime in concurrent or proprietary jurisdiction, he or she may be prosecuted by the state. The age of the accused is also an important consideration because the intent of federal laws concerning juveniles is to help ensure that state and local authorities will deal with juvenile offenders whenever possible. Exclusive federal jurisdiction may be relinquished in part or completely to a state, and this action is referred to as the retrocession of jurisdiction. The conference report accompanying the John S. McCain National Defense Authorization Act for Fiscal Year 2019 included a provision for the Secretaries of the military departments to seek to relinquish jurisdiction, such that the state, commonwealth, territory, or possession would have concurrent jurisdiction over offenses committed on military installations by individuals not subject to the Uniform Code of Military Justice, such as civilian dependents and children. The conference report also directed the Secretaries of the military departments to report to the defense committees on these efforts 15 months after the enactment of the Act. In June 2019, the Acting Deputy Secretary of Defense issued a memorandum directing each military department to seek to establish concurrent jurisdiction with the respective states for offenses committed by juveniles in areas on military installations that are currently exclusive federal jurisdiction. This action seeks to provide ways for the department to address actions by children in areas of exclusive federal jurisdiction that may constitute a crime, such as some instances of problematic sexual behavior in children and youth, since, absent unusual circumstances, children and other civilians are not subject to the Uniform Code of Military Justice. According to Army and department officials, states whose juvenile courts are rehabilitative in nature are much better equipped to deal with suspected crimes committed by children than the federal government, which does not have a juvenile justice system. These officials also noted that federal prosecution is usually declined for such cases. There are various laws and agreements in place regarding crimes committed on U.S. military installations or involving servicemembers or military dependents overseas. These laws include U.S. criminal laws that may be applied extraterritorially, the Military Extraterritorial Jurisdiction Act, the Uniform Code of Military Justice, and host nation laws. Whether a particular law provides extraterritorial jurisdiction over such crimes depends on the specific facts of the incident, such as the nature and location of the alleged crime, the status of the alleged offender (servicemember or civilian), and the nationalities of the alleged offender and the victim. Status of forces agreements between the United States and the host nation may also clarify how these circumstances should be considered in determining venue. <2. 
Several Issues Limit DOD s Visibility over Reported Incidents of Child Abuse and Child-on-Child Abuse> Three primary issues limit DOD s visibility over reported incidents of child abuse and child-on-child abuse: standalone databases, information sharing challenges, and installation discretion. The military services use standalone databases to track the reporting, response to, and resolution of each reported incident of child abuse, which limits the department s visibility over these incidents. While DOD is developing a new database to track problematic sexual behavior in children and youth, it has not yet made key decisions about its development and implementation, which could further affect visibility. In addition, challenges related to information sharing limit visibility over child abuse incidents within and across the military services. Further, Family Advocacy Program (FAP) installation personnel are given considerable discretion in deciding how reported incidents of child abuse are tracked and reported, as are DODEA school personnel with regard to incidents of child-on-child abuse, which also hinders the department s visibility over these incidents. <2.1. Standalone Databases Limit DOD s Visibility over Reported Incidents and Key Decisions Related to a New Database Have Not Yet Been Made> <2.1.1. Standalone Service Databases Limit the Department s Visibility over Both the Extent to Which Children Have Been Affected by Abuse and Incident Outcomes> Each military service maintains multiple standalone databases that separately track the reporting, response to, and resolution of each reported incident of child abuse, which limits DOD s visibility over the extent to which children have been affected by abuse on military installations or as military dependents and its visibility over incident outcomes. Depending on the reported incident, information regarding the alleged abuse may be retained in multiple databases or only one database. Specifically, each service s FAP has a database, referred to as the central registry, where it tracks the total number of reported incidents of child abuse (by a parent or someone in a caregiving role) and detailed information, such as information about the offender, victim, and type of abuse, for incidents that met DOD s criteria for abuse. Incidents of abuse where the alleged offender was not in a caregiving role are not tracked in the FAPs central registries and would only be tracked as incidents of abuse if they were investigated by military law enforcement. Information associated with investigations of these incidents by any military criminal investigative organization is tracked in a separate database maintained by each investigative organization. If the alleged offender was a servicemember, information related to the adjudication or case resolution is tracked in the relevant service s military justice database maintained by the services legal offices. Figure 1 shows the department s databases for tracking the abuse of children and how they differ depending on the circumstances of the incident. Because of DOD s multiple standalone data systems, it is difficult to know the extent to which children have been affected by abuse on military installations or as military dependents. From fiscal years 2014 through 2018, the military service FAPs central registries recorded more than 69,000 reported incidents of child abuse, of which 48 percent met DOD s criteria for abuse.
Over this same time period, the military criminal investigative organizations conducted approximately 9,500 investigations involving a child victim, some but not all of which may have also been recorded in the service FAPs central registries. Figures 2 and 3 show the number of incidents of child abuse reported to the military service FAPs and the number of military investigations involving a child victim from fiscal years 2014 through 2018, respectively. However, the number of incidents tracked by both organizations cannot simply be added together because, as previously discussed, there is some overlap between them. For example, an incident of child sexual abuse inflicted by a servicemember parent or a teacher would likely be in both databases. Moreover, neither the service FAPs nor the military criminal investigative organizations individually track all reported incidents of abuse. Specifically, the FAP only tracks information related to abuse inflicted by a parent, guardian, or someone in a caregiving role. It does not capture incidents of abuse inflicted by, for example, a neighbor who was not babysitting at the time of the incident. While the services military criminal investigative organizations track any abuse of a child that rises to their level of investigation (such as a felony or sexual offense), regardless of the relationship between the alleged offender and the victim, they only investigate certain crimes. For example, an incident of child neglect would likely only be in the FAP s central registry because incidents of neglect do not typically rise to the level of a military criminal investigative organization investigation. Similarly, an August 2019 report by the Defense Health Board found that it is difficult to establish the true incidence of child abuse across the department due to challenges associated with the underreporting of cases and unreliable capture of data. Standalone databases also limit DOD s visibility over incident outcomes. Depending on the reported incident of abuse (for example, child sexual abuse inflicted by a servicemember parent), service officials would need to query three databases to get the most complete picture of how the incident was reported, responded to, and resolved: the FAP, military criminal investigative organization, and military justice databases. Navy legal officials stated that a centralized database for all child abuse incidents that tracks the FAP s determination about whether the incident met DOD s criteria for abuse, the investigation, and the resolution would be beneficial because it is currently very difficult to track an incident from the initial report to its final outcome in order to easily determine what happened in a particular case. These officials further stated that such a database would benefit commanders oversight of cases for which they are responsible. The John S. McCain National Defense Authorization Act for Fiscal Year 2019 included a provision directing DOD to establish and maintain a centralized database on each incident of problematic sexual behavior in children and youth reviewed by an installation FAP. Specifically, per the statute, for each substantiated and unsubstantiated incident of problematic sexual behavior, the database is to track a description of the allegation, whether or not a FAP review of the case has been completed, the status and results of any related law enforcement investigation, and the nature of any action taken.
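The statute thus identifies a minimum set of data elements, but how they are structured is left to DOD. The sketch below is one hypothetical way a consolidated record could combine those required elements with cross-references to the existing FAP, military criminal investigative organization, and military justice systems, so that a single query rather than three could show how an incident was reported, responded to, and resolved. All class, field, and status names are illustrative assumptions, not DOD data standards.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional

class FapReviewStatus(Enum):
    """Whether a FAP review of the case has been completed (an element required by statute)."""
    NOT_COMPLETED = "FAP review not yet completed"
    MET_CRITERIA = "Review completed; incident met DOD criteria"
    DID_NOT_MEET_CRITERIA = "Review completed; incident did not meet DOD criteria"

@dataclass
class ConsolidatedIncidentRecord:
    """Hypothetical consolidated record: statutorily required elements plus
    cross-references to the standalone systems discussed above."""
    incident_id: str
    date_reported: date
    allegation_description: str            # description of the allegation (required by statute)
    fap_review_status: FapReviewStatus     # FAP review completed or not (required by statute)
    law_enforcement_status: Optional[str]  # status and results of any related investigation (required)
    action_taken: Optional[str]            # nature of any action taken (required)
    # Illustrative pointers to the existing standalone databases
    fap_central_registry_id: Optional[str] = None
    mcio_case_number: Optional[str] = None
    military_justice_docket_id: Optional[str] = None
```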
Officials responsible for the development of the database which is supposed to begin in fiscal year 2020 stated that it will maintain information related solely to cases of problematic sexual behavior and will not include other types of child-on-child abuse, such as physical assaults not of a sexual nature. Additionally, these officials stated that they do not have plans to expand the scope of the database to include any adult-on-child inflicted abuse. As a result, even once the centralized database on problematic sexual behavior in children and youth is implemented, DOD will still lack a centralized mechanism to track the reporting, response to, and resolution of other incidents of abuse involving children that were reported to the FAP or investigated by a military law enforcement organization specifically, any abuse or neglect inflicted by an adult or physical abuse inflicted by another child. DOD officials responsible for the development of the database stated that they do not plan to expand the scope of the centralized database because they do not want to conflate the processes for responding to incidents of adult-inflicted child abuse and incidents of problematic sexual behavior. While the response process differs between incidents of adult-inflicted child abuse and incidents of problematic sexual behavior, DOD officials acknowledged that the organizations involved in the response process and the primary data sources are the same. Additionally, DOD FAP officials stated the scope of the centralized database was defined in statute and that they foresee additional privacy and data-safeguarding issues if they were to expand its scope. While the statute indicated what must be included in the database, it did not limit the scope of the database to those required elements. DOD not only lacks visibility over incidents of problematic sexual behavior, but over any reported abuse of a child and could therefore benefit from a centralized tracking mechanism for all such incidents. With regard to privacy and data-safeguarding concerns, according to DOD, data-safeguarding precautions were taken when developing the Defense Sexual Assault Incident Database, which the department successfully implemented. While the Defense Sexual Assault Incident Database does not contain information pertaining to children, it contains sensitive information that the department has taken steps to protect. Specifically, according to DOD, the Defense Sexual Assault Incident Database is reviewed annually to ensure all security controls are maintained and it is secured using physical, technical, and administrative controls, such as role-based permissions, to maintain the privacy of personal information. DOD FAP officials also expressed concerns about maintaining information about both adults and children in the centralized database. However, information about both adults and children is included in the service FAPs central registries and the military criminal investigative organizations databases. DOD officials responsible for developing the database noted that the department already plans to take precautions when developing the database due to the collection and retention of information about children. Standards for Internal Control in the Federal Government states that management should use quality information to achieve the entity s objectives. Specifically, quality information is appropriate, current, complete, accurate, accessible, and provided on a timely basis. 
In addition, management should design control activities to achieve objectives, such as clearly documenting significant events in a manner that allows the documentation to be readily available for examination. Without a centralized database that tracks all incidents of abuse involving children that were reported to the FAP or investigated by a military law enforcement organization, DOD and Congress will not know the extent to which children have been affected by abuse on military installations or as military dependents, or how such incidents have been responded to and resolved making it difficult to identify and address trends that could lead to further prevention efforts. <2.1.2. DOD Has Not Yet Made Key Decisions Related to the Development of Its Database to Track Problematic Sexual Behavior> While DOD is in the early stages of developing a centralized database to track incidents of problematic sexual behavior in children and youth, it has not yet made key decisions about its development and implementation, which could further affect visibility over such incidents. Specifically, DOD has not yet identified all information requirements, developed a plan for how it will use the data it collects, or established a schedule for development and implementation. DOD officials responsible for developing the database stated that they are still in the process of selecting a vendor to develop the system and that once a contract has been awarded and is underway, they can make such decisions. Our prior work has found that inadequate acquisition planning, including poorly defined requirements and unrealistic cost estimates, can increase the risk that the government may receive services that cost more than anticipated, are delivered late, and are of unacceptable quality. Given that DOD officials stated they plan to select a vendor in early fiscal year 2020 and move quickly with development expecting to complete the bulk of it in fiscal year 2020 it is an appropriate time to make these decisions. First, DOD has not yet identified all of the information it will track in the database. DOD officials responsible for the development of the centralized database stated that they have not yet identified all of the information the database will track other than the information required by statute and some information related to the response process because they are still in the early stages of the development process. However, as previously discussed, DOD officials expect to complete the bulk of the development this fiscal year. In November 2006, we found that establishing a valid need and translating that into a service acquisition requirement is essential for obtaining the right outcome. Without this, an organization increases the risk that it will pay too much for the services provided, acquire services that do not meet its needs, or enter too quickly into a sensitive arrangement that exposes the organization to financial, performance, or other risks. Additionally, Standards for Internal Control in the Federal Government states that management should use quality information to achieve the entity s objectives, which includes identifying information requirements that consider the expectations of both internal and external users. 
As DOD progresses in its development of the centralized database, identifying and defining the elements that each responsible organization, such as the FAP and military law enforcement, must track would help to ensure that the data collected are useful, accurate, and complete, and that the data ultimately increase the department s visibility over these incidents. Second, DOD has not yet determined how it will use the data it collects from the database to increase visibility. DOD officials stated that because they have not yet finalized the information requirements for the database, they have not yet developed a plan for how the collected data will be used. GAO-identified leading practices for results-oriented management have shown that data-driven decision making leads to better results. Further, agencies can use performance information to identify problems or weaknesses in programs, to try to identify factors causing the problems, and to modify a service or process to try to address problems. As DOD progresses in the development of its database, developing a plan for data-driven decision making that details how the department will use the data to help inform program development and increase visibility would help DOD to assess its processes and procedures for responding to and resolving incidents of problematic sexual behavior in children and youth, identify any needed changes, and modify them as appropriate. Finally, DOD has not yet established a completion date for the database or developed a schedule to guide its development and implementation. According to DOD officials responsible for the development of the database, while they do not have a planned completion date for the database or any associated milestones, they plan to select a vendor for the development in early fiscal year 2020 and they anticipate the majority of the development will take place the same year. These officials stated that they have not yet set a completion date, in part, because of the sensitivity of the information being collected and because the department does not have a comparable database that collects and maintains information on children. In addition, while these officials stated that they had identified resources for the development of the database through fiscal year 2020, they had not yet identified funding for future years. GAO-identified practices for developing and maintaining a reliable schedule include: (1) capturing all key activities, (2) sequencing all key activities, (3) assigning resources to all key activities, (4) integrating all key activities horizontally and vertically, (5) establishing the duration of all key activities, (6) establishing the critical path for all key activities, (7) identifying float (the amount of time a task can slip before affecting the critical path) between key activities, (8) conducting a schedule risk analysis, and (9) updating the schedule using logic and durations to determine the dates for all key activities. Given that DOD is in the early stages of development, establishing a reliable schedule for the development and implementation of the centralized database, including key activities and the timeframes and resources needed to execute them, would provide the means to gauge progress, identify and address potential problems, and promote accountability. Until the database is implemented, DOD will continue to have limited visibility over incidents of problematic sexual behavior in children and youth. <2.2.
Information Sharing Challenges Limit Visibility over Child Abuse Incidents within and across the Military Services> <2.2.1. Information Sharing Challenges Limit Visibility within Each Military Service> Information sharing challenges limit visibility within each military service specifically, as it relates to required notifications between a service s installation FAP office and military law enforcement about reported incidents of child abuse inflicted by a parent or someone in a caregiving role. DOD policy states that the Secretaries of the military departments are to ensure that installation commanders or service-equivalent senior commanders ensure that the installation FAPs immediately report any allegations of child abuse and any criminal allegations to the appropriate law enforcement authority. Similarly, service guidance states that military law enforcement is responsible for notifying the installation FAP office of reported or suspected incidents of child abuse. However, officials at four installations in our review described notification challenges between these organizations. For example, officials at one installation described a child abuse incident that had been investigated by military law enforcement for 2 to 3 months, but the investigating organization had not notified the installation s FAP office. Legal officials at another installation stated that over the past year, there had been five incidents of child abuse that were reported to the installation FAP office, but that the FAP had not reported to military law enforcement. These officials stated that the lack of notifications can be frustrating for commanders who need complete information about these incidents to determine whether they need to take any action. In addition, DODEA policy states that, among other things, DODEA personnel are to promptly report all suspected or alleged incidents of child abuse to the installation FAP office and the relevant child welfare agency, if available. The policy does not require them to also report the suspected abuse to law enforcement, but the FAP is to report the incident to law enforcement. However, a senior DODEA official stated that one of its regions has instituted a procedure for all child abuse incidents to be reported to the FAP and law enforcement because the region had experienced challenges with the FAP not consistently notifying law enforcement. The extent of these notification challenges is unknown because service FAP and military law enforcement officials stated that they do not document in their central registries or military criminal investigative organization databases whether each notified the other. Service FAP and military law enforcement officials stated that they can add fields to their databases to track new information if provided with the direction and resources to do so. Officials from these organizations also noted that any notification to the other entity may instead be documented in any case notes or in the case file. However, in April 2019, the DOD Office of Inspector General evaluated military law enforcement incident reports and found similar notification challenges related to FAP and military law enforcement notifications for domestic violence incidents. Specifically, the DOD Office of Inspector General evaluated 212 military law enforcement domestic violence reports in which a FAP notification was required and for 23 percent of the incidents (49 incidents) the military law enforcement organization had not notified the FAP as required. 
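Service FAP and military law enforcement officials stated that fields could be added to their databases if they were directed and resourced to do so. As a purely illustrative sketch, assuming hypothetical record and field names rather than any existing DOD system, a cross-notification date field plus a simple completeness check of the kind below would give headquarters a way to see which reports lack a documented notification to the other organization.

```python
from dataclasses import dataclass
from datetime import date
from typing import Iterable, List, Optional

@dataclass
class ChildAbuseReport:
    """Hypothetical incident record with a cross-notification field added."""
    report_id: str
    received_by: str                # "FAP" or "military law enforcement"
    date_received: date
    date_other_organization_notified: Optional[date] = None  # date FAP and law enforcement notified each other

def reports_missing_notification(reports: Iterable[ChildAbuseReport]) -> List[ChildAbuseReport]:
    """Flag reports with no documented notification to the other organization,
    giving headquarters a simple way to monitor that required notifications occur."""
    return [r for r in reports if r.date_other_organization_notified is None]
```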
Standards for Internal Control in the Federal Government states that management should internally communicate information to achieve the entity s objectives. For example, information is communicated down, across, and up reporting lines to all levels of the entity. In addition, the oversight body receives quality information that flows up the reporting lines from management and personnel. Without directing the service FAPs and military law enforcement organizations to document in their respective databases the date that they notified each other, these entities headquarters will remain limited in their oversight abilities to ensure that these notifications occur and to take appropriate actions in response. Even if notifications are documented in case files, there is no mechanism for the headquarters entities to efficiently determine whether a notification was made. Without ensuring that notifications are made to both organizations, which play critical roles in addressing incidents of child abuse, it is possible that an incident may not be fully assessed by the FAP or investigated by military law enforcement. Notification delays could result in at-risk children remaining in an unsafe environment or could delay time-critical portions of an investigation, such as forensic interviews or sexual assault exams. <2.2.2. Information Sharing Challenges Limit Visibility across the Military Services> Information sharing challenges limit visibility across the military services, specifically as it relates to sharing child abuse incident determinations. Installation officials stated that the lead service for any installation is responsible for the installation s FAP. They stated that even though the Incident Determination Committee (IDC) will hear cases about the other services members and dependents, all information is recorded in the lead service s central registry. For example, if an Air Force servicemember is involved in a reported incident of child abuse while on an Army installation, the Army FAP will record information about the incident in its central registry. Of the Air Force FAP s more than 3,000 reported incidents that met criteria for child abuse from fiscal years 2014 through 2018 and had a servicemember offender, 22 percent of those offenders were from one of the other three services. For the Army, the Navy, and the Marine Corps, 2 percent, 9 percent, and 5 percent, respectively, of their records were associated with servicemembers from another service. Table 1 shows the number of child abuse incidents that met DOD s criteria for child abuse and involved a servicemember offender from fiscal years 2014 through 2018, by the service that recorded the incident and servicemember affiliation. Since FAP personnel at the installations do not share access to the other service s central registries or the DOD Central Registry, according to DOD FAP officials, they have established a process to share information about child abuse allegations and determinations across the services. Per DOD guidance, the service FAPs are to submit data from their central registries on a quarterly basis for consolidation into DOD s Central Registry. According to DOD FAP officials, after the service FAPs submit their data, the Defense Manpower and Data Center reviews the data and identifies any child abuse incidents that met DOD s criteria for abuse and were recorded by a service FAP that is not the service to which the servicemember is assigned. 
According to these officials, the Center then forwards those relevant incidents to the services to which the servicemembers are assigned with the expectation that they will incorporate them into their central registries. According to Air Force, Navy, and Marine Corps FAP officials, they regularly incorporate the data received from the Center into their central registries so that the records can be searched by FAP personnel at the installations. However, DOD does not have guidance that describes how the service FAPs should receive information from the Center about child abuse allegations and determinations that involve their personnel, but were recorded by another service s installation FAP, or how they should incorporate such information into their central registries once received. Further, according to DOD FAP officials, DOD does not have a process to monitor that the service FAPs are consistently incorporating the information they receive from the Center into their central registries. Standards for Internal Control in the Federal Government states that management should internally communicate information to achieve the entity s objectives. In addition, management should implement control activities through policies and establish and operate monitoring activities and evaluate the results. Specifically, ongoing monitoring is built into the entity s operations, performed continually, and responsive to change. For example, one of the required fields in the service FAPs central registries is whether the offender was previously known to the service s central registry, meaning that the offender was involved in a previous incident of child abuse or domestic abuse that was presented to the service FAP and was determined to meet DOD s criteria for abuse. However, if the incident of abuse occurred on another service s installation and was therefore recorded in that other service s central registry, and the service to which the servicemember is assigned was either not informed or did not input the information into its central registry, the servicemember s FAP may not be aware of the prior case and therefore may not record the offender as previously known. Issuing guidance that describes the process through which the service FAPs are to receive and incorporate information into their central registries regarding child abuse allegations and determinations involving their servicemembers and dependents, and that includes a mechanism to monitor that the process is consistently occurring, would provide better assurance that the services have complete and up-to-date information about their personnel and their dependents, which ultimately affects their visibility over such incidents. <2.3. Discretion by FAP and School Personnel in How Incidents of Child Abuse and Child-on-Child Abuse Are Tracked and Reported Further Hinders DOD s Visibility> <2.3.1. FAP Discretion in Screening Reported Incidents Hinders Overall Visibility> FAP personnel at all seven installations in our review stated that they screen reported incidents of child abuse to determine whether to present them to the IDC. DOD guidance states that every reported incident of child abuse must be presented to the IDC for a determination unless there is no possibility that the incident could meet any of the criteria for child abuse or neglect. However, installation personnel described reported incidents of child abuse that had been screened out that, per DOD guidance, should have been presented to the IDC.
For example, FAP officials at one installation stated that they screen out reports of spanking by a parent if there is no mark. Since DOD s list of actions considered to be nonaccidental physical force includes spanking, it meets at least one of DOD s criteria for child abuse and should be presented to the IDC for a determination. The IDC would then determine whether there was a significant impact on the child, such as a welt or a more than superficial bruise, or the reasonable potential for a more than inconsequential physical injury or fear reaction, to determine whether the reported incident meets all of DOD s criteria for child physical abuse. Officials from three of the services FAPs stated that if spanking is used as a discipline technique without information indicating injury, the potential for injury, or psychological harm, then it should not be opened as an incident and presented to the IDC. However, this is in conflict with DOD guidance, as confirmed by DOD FAP officials. At another installation, child development center officials described an incident where a staff member was speaking harshly with a child. These officials stated that the supervisor at the center considered the action to be child abuse (berating the child, which per DOD guidance is an act of emotional abuse) and contacted the installation FAP. However, they stated that the FAP personnel who received the report stated, without any assessment of the incident, that it was not emotional abuse and that the center should handle it administratively. According to center officials, the incident was never presented to the IDC, but they considered the incident to be significant enough that the center terminated the staff member s employment. FAP officials at a different installation stated that the medical clinics were not previously reporting suspected abuse to the FAP, but are now doing so. Because of this change, the FAP personnel said they believe the clinics are over-reporting, which has led to the FAP personnel screening out some of the clinic s reported incidents of suspected child abuse. Two of the parents of children affected by abuse that we interviewed discussed incidents that were reported to the FAP, but that the FAP did not initially present to the IDC. According to one parent, one incident of child abuse was presented to an IDC at a different installation after the parent contacted the FAP at that installation for advice more than 2 years after the initial report of abuse. According to the other parent, the other incident of child sexual abuse was only presented to the IDC following congressional involvement. FAP personnel at one installation described the process of determining whether a reported incident should be presented to an IDC as a clinical judgment call and noted that they screen out about one-third of reported incidents of child abuse annually. FAP personnel at another installation stated that, as of summer 2019, they had received about 50 reported incidents of child abuse since the start of the calendar year and that they had screened out the majority of them. While installation FAP personnel also described reported incidents that should be screened out per DOD guidance (such as abuse where the alleged offender was not a parent, guardian, or someone in a caregiving role, which is outside of the FAP s purview), it is unclear how many of the reported incidents that they have screened out should have been presented to the IDC per the guidance.
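DOD s actual decision-tree algorithm is not reproduced here, but a simplified sketch of the screening logic described above may clarify the distinction between intake screening and the IDC s determination: a report such as spanking is forwarded to the IDC because it could meet at least one criterion, and only the committee then weighs significant impact or the reasonable potential for injury or a fear reaction. The function names, inputs, and the subset of listed acts are hypothetical illustrations, not DOD s algorithm.

```python
# Illustrative only: DOD guidance lists the acts considered nonaccidental physical
# force; "spanking" is the one example cited in this report.
NONACCIDENTAL_PHYSICAL_FORCE = {"spanking"}

def must_present_to_idc(alleged_act: str, offender_is_caregiver: bool) -> bool:
    """Per the guidance described above, a reported incident goes to the IDC unless
    there is no possibility it could meet any criterion for child abuse or neglect."""
    could_meet_a_criterion = alleged_act in NONACCIDENTAL_PHYSICAL_FORCE
    return offender_is_caregiver and could_meet_a_criterion

def idc_meets_physical_abuse_criteria(significant_impact: bool,
                                      potential_for_injury_or_fear: bool) -> bool:
    """The impact assessment (a welt, a more than superficial bruise, or the reasonable
    potential for injury or a fear reaction) belongs to the IDC, not to intake screening."""
    return significant_impact or potential_for_injury_or_fear

# A report of spanking with no visible mark is still presented to the IDC,
# which then determines whether the incident met DOD's criteria for physical abuse.
assert must_present_to_idc("spanking", offender_is_caregiver=True)
```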
Incidents that are not presented to the IDC are not recorded in the relevant service FAP s central registry and therefore are not captured in DOD s consolidated Central Registry, which the department uses to prepare its statutorily required annual reports to Congress on child abuse and domestic abuse. As a result, the actual total number of reported incidents of child abuse across the department (which, according to our previously discussed analysis, totaled more than 69,000 from fiscal years 2014 through 2018) may be higher. As previously discussed, the Defense Health Board s August 2019 report noted that it is difficult to establish the true incidence of child abuse across the department due to challenges associated with the underreporting of cases and unreliable capture of data, and that, as a result, it is difficult to measure and monitor the scope of the problem. When we discussed with DOD FAP officials what the installations we visited told us about how they screen reported incidents of child abuse, officials expressed concerns about these installations not adhering to DOD guidance. However, as previously discussed, the service FAPs are responsible for overseeing installation FAPs. According to service FAP officials, oversight of the screening process is primarily handled by personnel at each installation. Air Force FAP officials stated that the FAP personnel making these screening determinations have to meet certain education requirements. Standards for Internal Control in the Federal Government states that management should establish and operate monitoring activities and evaluate the results. Without each military service developing a process to monitor how reported incidents of child abuse are screened at installations, the services cannot be sure that incidents are being presented to the IDC in a consistent manner. Further, installation FAPs may continue to screen out reported incidents of child abuse, in contradiction of DOD guidance, thereby excluding them from being documented in DOD s Central Registry. As a result, DOD does not know and cannot accurately report on the total number of reported incidents of child abuse across the department. In addition to other known underreporting, without such monitoring, DOD is further limiting its visibility over incidents and hindering its ability to ensure appropriate responses to incidents. <2.3.2. School Discretion in Reporting Serious Incidents Hinders DODEA Leadership Visibility> According to our analysis of DODEA data, DOD schools may not be reporting all serious incidents of child-on-child abuse, which hinders DODEA leadership visibility. From school years 2013-2014 through 2017-2018, across its 163 schools, DODEA reported a total of 167 serious incidents involving either an alleged violation of law or an alleged sexual event, an average of one serious incident per school over the 5-year period. The types of reported serious incidents included a student reporting that they were raped by two students in the school parking lot, a student stabbing another student in the finger with a plastic fork and drawing blood, and a wide range of other conduct. There was a slight decrease in the number of serious incidents reported from school years 2013-2014 to 2014-2015, but since school year 2014-2015, the number of serious incidents reported each year increased from a low of 22 to 55 in school year 2017-2018. DODEA officials attribute the increased reporting, in part, to the issuance of additional reporting guidance in August 2016.
Figure 4 shows the number of serious incidents involving either an alleged violation of law or an alleged sexual event reported by DODEA from school years 2013-2014 through 2017-2018. According to DODEA officials, all serious incident reports are reviewed by DODEA headquarters to ensure that the schools took the appropriate actions needed to protect students and to ensure that incidents are correctly categorized. These officials stated that the reports also help to increase visibility at the headquarters level about the types of incidents occurring in DODEA schools and where additional resources may be needed. In addition, DODEA officials stated that they retain serious incident reports for 5 years, which allows them to track serious conduct issues when students transfer schools. While the reporting of serious incidents has increased, our analysis of DODEA student misconduct records found that the reporting of these incidents by schools was incomplete. Specifically, our analysis identified 216 student misconduct records for school years 2016-2017 and 2017-2018 that school administrators, following DODEA guidance, could have reasonably classified as serious incidents. The types of incidents described in the student misconduct records included, among other things, the use of physical force by a student on another student that resulted in an injury; a student touching another student s groin, breasts, or buttocks without consent; and verbal and behavioral sexual harassment. However, for this time period (the 2 school years in which DODEA reported the highest numbers of serious incidents from school years 2013-2014 through 2017-2018), DODEA reported only 89 serious incidents. In addition, DODEA officials stated that prior to August 2018, up to one-third of schools were not recording student misconduct in the student information system because they were not required to do so and, as a result, we were not able to review any misconduct records for those schools. Challenges related to the reporting of serious incidents were also highlighted in our interviews with parents and DODEA school administrators. Specifically, two of the parents of children affected by child-on-child sexual abuse that we interviewed discussed incidents that occurred within DODEA schools. They both stated that they received information about the incidents as part of Freedom of Information Act requests and that the schools had not reported the abuse as serious incidents. For one of these incidents, we identified a corresponding DODEA child abuse report, but not a serious incident report. Per DODEA guidance, the incident should have been categorized as a serious incident (but not as child abuse) because the offender was a student; child abuse reports are only to be filed if the alleged offender was an individual responsible for the child s welfare, such as a parent or a teacher. In addition, at one installation in our review, FAP personnel discussed a recent sexual assault within a DODEA school. When we discussed this incident with a senior DODEA official who is to be notified of all serious incidents reported in the region in which the school is located, the official was unaware of the incident because it was not categorized as a sexual assault in the serious incident report and because another senior official for the region had handled it directly.
Further, administrators at one of the DODEA schools we visited stated that the reporting guidelines are not fully clear and that they often call the superintendent s office for advice on what to report and how to report it. Standards for Internal Control in the Federal Government states that management should internally communicate the necessary quality information to achieve the entity s objectives. Specifically, management communicates quality information down and across reporting lines to enable personnel to perform key roles in achieving objectives. However, DODEA s guidance affords school administrators discretion in what to report because it does not explicitly define what types of serious incidents must be reported. While the guidance identifies and defines a number of incidents that could be reported as serious incidents, and provides detailed examples (such as a student intentionally exposing their genitals or a student posting naked or suggestive photos of another student online), the guidance does not mandate that these incidents be reported. Specifically, the guidance states that the lists of events, activities, and paraphernalia described in the guidance as serious incidents are illustrative only and do not identify every incident that may be inappropriate, nor require that each incident result in a serious incident report. While DODEA officials noted that both reporting and their visibility over serious incidents have been improving, they acknowledged that administrators may not be reporting all serious incidents described in the guidance because, in part, it may be easier for them to resolve some incidents (such as students jokingly slapping each other on the buttocks) at the school level instead of filing a serious incident report. These officials stated that they are optimistic that a new reporting database for serious incidents that they implemented in August 2019 will streamline the process for administrators and increase reporting. In addition, in February 2019, DODEA issued guidance related to the reporting of and response to prohibited sexual, sex-based, and other related abusive misconduct, which DODEA officials told us they believe will reduce discretion in how alleged child-on-child sexual abuse is recognized and reported. While the new reporting system and guidance related to child-on-child sexual abuse are positive steps, without additional guidance that clarifies the types of incidents, including non-sexual incidents, that must be reported as serious incidents, DODEA may continue to lack full visibility into the extent to which serious incidents are occurring. As a result, systemic issues within a particular school or district may never be reported to DODEA leadership, and any additional resources that a school or district needs to prevent future incidents may not be identified. Further, when a student transfers schools, the new school may be unaware of serious conduct issues that were not properly documented, raising safety concerns for the school and installation.
Specifically, the services may lack pertinent stakeholder perspectives on the IDC after DOD policy changed the permanent voting membership of the committee. In addition, families of child abuse victims may receive inconsistent levels of information following a report of child abuse, which can cause confusion and prevent them from receiving available services. Further, service guidance regarding the extent of commander authority to remove children from unsafe homes overseas is unclear. Finally, the availability of certified pediatric sexual assault forensic examiners is limited, especially overseas. <3.1. DOD Has Taken Steps to Expand Child Abuse Policies and Procedures to Address Child-on-Child Abuse> In accordance with provisions in the John S. McCain National Defense Authorization Act for Fiscal Year 2019, DOD and the military services have taken steps to augment existing child abuse policies and procedures to also include child-on-child abuse, specifically the incidence of problematic sexual behavior in children and youth. The statute required, among other things, that the Secretary of Defense establish a policy, applicable across all military installations, to respond to allegations of problematic sexual behavior in children and youth on military installations. The purpose of the policy is to ensure a consistent, standardized response to such allegations across the department. In May 2019, DOD issued a revised FAP instruction that establishes policy, assigns responsibilities, and prescribes procedures for the FAP specific to child abuse, domestic abuse, and problematic sexual behavior in children and youth. In addition, in July 2019, DOD revised the FAP standards to implement policy, assign responsibilities, and provide procedures for addressing problematic sexual behavior in children and youth in military communities. As of October 2019, the military services had not yet issued their updated FAP policies to incorporate the new department- wide policy and standards, but the policies were under development, according to DOD FAP officials. Prior to the issuance of DOD s updated FAP policy, the Army issued a broader policy on major juvenile misconduct in March 2019. The policy addresses the command response to juvenile misconduct and the referral of juvenile cases to civilian authorities. For Army installations in the United States with areas of exclusive federal jurisdiction, the policy directs such commands to seek to establish concurrent jurisdiction of juvenile criminal offenses. In instances where establishing concurrent jurisdiction is not feasible or recommended, the policy directs commanders to pursue memoranda of agreement with local prosecution authorities that address the referral of juvenile cases to the local juvenile court system for state review and state determination of appropriate disposition. Army officials stated that the Army policy covers more than incidents of problematic sexual behavior in children and youth because the challenges involving children on Army installations are broader than problematic sexual behavior and encompass other types of misconduct, such as fights, vandalism, and shoplifting. Officials from the other services stated that their policies, which are under development, will focus on problematic sexual behavior because that was what was required per statute. In addition, DOD has taken steps to implement a training program for personnel at installations that focuses on problematic sexual behavior in children and youth. 
Specifically, DOD and DOJ s Office of Juvenile Justice and Delinquency Prevention entered into an interagency agreement in July 2019 to expand the scope of DOJ s cooperative agreement with the University of Oklahoma. According to DOD officials, this agreement includes providing training and technical assistance in support of DOD s response to problematic sexual behavior in children and youth. The 3-year interagency agreement provides $1.5 million in funding, and according to DOD officials, the funding will be used to develop and implement targeted training on problematic sexual behavior in children and youth for FAP personnel at the installations. According to DOJ officials, other efforts include a DOJ and DOD working group on child-on-child sexual abuse focused on resolving jurisdictional issues, as will be discussed in greater detail later in the report and the development of a centralized database for tracking incidents of problematic sexual behavior in children and youth, as previously discussed. Further, DODEA has implemented a number of initiatives related to serious student misconduct. These include the issuance of a standalone sexual harassment policy and providing administrators with additional guidance on reporting and responding to sexual activity within DODEA schools, and the development and distribution of standardized language regarding discrimination and sexual harassment for each school s student handbook. DODEA also created outreach materials for students on how to recognize and respond to sexual harassment. DODEA has conducted training for administrators on these topics. Other training initiatives include training for all counselors, school psychologists, and nurses on problematic sexual behavior in children and youth. As previously discussed, DODEA also introduced a new reporting database for serious incidents in August 2019 that is intended to simplify the serious incident reporting process for administrators. <3.2. Installation Incident Determination Committees May Lack Pertinent Stakeholder Perspectives> In August 2016, DOD issued guidance to standardize the incident determination process across the military services, which, among other things, reshaped the permanent voting membership of the IDC. However, the new structure may lack stakeholders with the requisite knowledge and expertise to allow the IDC to make fully informed determinations. The standardized process to determine whether an incident meets DOD s criteria for child abuse was informed by a collaboration between the Air Force and New York University researchers, which yielded a decision- tree algorithm. The process was implemented by the Air Force and then subsequently adopted by the Navy and the Marine Corps. According to Army officials, the Army s phased implementation of the IDC process was ongoing as of October 2019. As part of the standardization of the process in the 2016 guidance, medical personnel were removed as permanent voting members of the IDC, although they regularly participated in some of the services prior incident determination processes, according to Army FAP officials. The external researchers involved in the effort noted that they were primarily involved in the decision-tree algorithm and not the composition of voting members, which was an internal DOD decision. 
According to DOD, the definitions in the decision-tree algorithm used to determine if an incident meets criteria to be considered child abuse were robust enough that experienced healthcare providers were not needed to determine if an incident met DOD s criteria for child abuse. In addition, DOD FAP officials stated that participation in the IDC process by medical personnel could take them away from their clinical duties and become burdensome, since the IDC at larger installations may meet weekly and for several hours. DOD officials noted that medical personnel, and others, can still be invited to participate in the IDC process as needed to provide information related to specific incidents. While IDC members at four of the installations in our review also noted that medical personnel can still be invited to share relevant case information in a nonvoting capacity, medical personnel we spoke to at three of these installations noted that they are rarely invited to participate. As a result, medical personnel at one installation we visited stated that they have attempted to write their medical reports in more lay terminology to bridge the gap and to help ensure that critical information is properly relayed during the IDC meeting. Medical personnel with expertise in child abuse stated that they would welcome the opportunity to again participate in IDC meetings about which they have specific knowledge, but that they are contacted to participate once every 2 years at the most. In addition, medical personnel at one of the installations we visited had never heard of the IDC and were unaware of its function. During a number of our interviews and installation visits, medical personnel frequently expressed concerns about the lack of medical expertise in the IDC process. For example, medical personnel at three installations we visited expressed concerns that the absence of medical personnel on the IDC may prevent reported incidents of child abuse from being fully understood. They noted that medical personnel, specifically pediatricians, have particular utility on the IDC because of the complexity of some of the cases and the need to articulate how medical findings can indicate whether an injury resulted from a nonaccidental use of force. Medical personnel with expertise in child abuse stated that there is a strong medical component to many child abuse cases and that FAP clinicians may not have the requisite medical expertise needed to appropriately interpret that information. Medical personnel also stated that lacking this expertise could result in the IDC incorrectly voting that an incident meets criteria for abuse or does not meet criteria. For example, a pediatrician described one IDC meeting in which they were invited to participate as a nonvoting member, related to an incident that had medical evidence the pediatrician referred to as clearly presenting a hallmark finding in child abuse: ear bruising patterns in a very young child. However, the pediatrician stated that the IDC voted that the incident did not meet DOD s criteria for abuse before allowing medical personnel to present information they had about the incident. According to this pediatrician, after the vote, the IDC allowed the pediatrician to provide information about the incident, but it did not alter the committee s initial determination. At one of the IDC meetings we observed, IDC members discussed a case that involved bruising.
The IDC members noted that they wished that a doctor had been present so that they could determine whether the allegation had any merit. However, no medical personnel were present, and the IDC reached a determination without medical input. Members of this IDC also discussed concerns about a downward trend across the service in the number of cases meeting DOD s criteria for abuse, which they attributed to changes to the voting membership of the committee. In addition, one of the parents that we spoke with described an incident that met DOD s criteria for child sexual abuse under the military service s prior incident determination process. However, the parent stated that after the service implemented the new IDC process, the servicemember s command (which was added as a permanent voting member of the IDC) requested that the determination be reconsidered. The parent stated that the incident was again presented to the IDC and the committee reversed the initial determination, concluding that the incident did not meet DOD s criteria for child sexual abuse. The parent expressed concerns that the removal of medical personnel from the IDC process played a significant role in the reversal of the determination. Further, at one installation in our review, after the installation implemented the new IDC process, officials set up a separate pre-IDC process to discuss the same cases with medical personnel and others to ensure that their perspectives are included in the determination process. Installation officials stated that they felt the need to establish this redundant process because participation and discussion are more limited under the IDC process and there was an identified gap. In August 2019, the Defense Health Board recommended that DOD reconsider requiring at least one comprehensive pediatric medical health care provider to be a member of all IDCs. However, DOD FAP officials stated that they have no plans to reassess or expand the voting membership based on this recommendation or the concerns expressed by medical personnel across the military services. They stated that there are other meaningful ways in which medical personnel can participate in the IDC process, but that they should not be voting members because their competing clinical responsibilities may lead to a lack of continuity on the IDC and they might not have any direct knowledge of the incidents being discussed. However, as previously discussed, medical personnel are not being regularly invited to participate and, when they are, the information they present may not be considered as part of the voting process. In addition, medical personnel at one installation we visited noted that even if they were regularly invited to participate, since they are not permanent voting members, other clinic responsibilities may take precedence. A 2018 Department of Health and Human Services guide for child protective caseworkers noted that involving teams with a diversity of skillsets, including pediatricians, early in the child abuse determination process can improve accurate and comprehensive assessments, information sharing, and analysis of gathered information to support an accurate substantiation decision. In addition, GAO-developed practices to enhance and sustain collaboration in interagency groups note that it is critical to involve nonfederal partners, key clients, and stakeholders in decision-making.
Further, in February 2014, we found that if collaborative efforts do not consider the input of all relevant stakeholders, important opportunities for achieving outcomes may be missed. Without expanding the voting membership of the IDC to include medical personnel, installation officials may not have all of the relevant information to make a fully informed decision about whether an incident meets DOD's criteria for child abuse. The IDC may make different determinations without the benefit of input from all relevant personnel, thus affecting confidence in the efficacy of the process. Further, without expanding the voting membership to include medical personnel, installations may continue to develop concurrent or redundant processes in order to ensure that all pertinent information about cases is shared.
<3.3. Inconsistent Levels of Information Are Available to Victims' Families Following a Reported Incident of Child Abuse>
Victims' families receive inconsistent levels of information related to the response process and available services after an incident of child abuse is reported. The process to respond to and address incidents of child abuse can be lengthy (the average investigation takes more than 9 months), and the responding organization and the particular steps it takes depend on variables including the type of abuse, the status of the alleged offender, and the location of the incident. For example, as previously discussed, military criminal investigative organizations primarily investigate only serious felony-level offenses and sexual offenses of any type. According to military criminal investigative organization officials, cases that do not meet this threshold may be investigated by other military law enforcement investigators, such as military police, or by local civilian law enforcement. Additionally, the FAP only reviews incidents of child abuse where the alleged offender was a parent or someone in a caregiving role. As a result, the FAP would not present incidents to the IDC where the alleged offender was another child or an adult who was not in a caregiving role, such as a neighbor who was not babysitting at the time of the incident. Further, as previously discussed, the jurisdiction of the installation where the incident took place determines which entity, such as the state or the federal government, will adjudicate the incident. The process can also differ based on the state and local laws where the incident occurred. For example, officials from some state child welfare agencies stated that they are more likely than the FAP to accept cases of child-on-child abuse, and that they review such cases to determine whether a lack of supervision or another aspect of parental neglect is involved. The legal services that victims are eligible to receive differ depending on the status of the alleged offender and the victim, and the type of abuse alleged. For example, for incidents of child sexual abuse with an alleged servicemember offender, victims and their families are eligible for military-provided legal advice and assistance, even if the abuse occurred off the installation. However, the status of the victim (that is, whether or not the victim is the dependent of a military member) will affect the nature and extent of the legal assistance that can be provided. Of the 20 parents of children affected by abuse that we interviewed, nine stated that they did not understand what to expect during the investigation and resolution process, and nine were not aware of all available services and resources offered.
Some parents noted that if they had better understood the process and available services, they could have received counseling and other services more quickly. Twelve parents highlighted that a guide summarizing the process and available services would have been helpful. For example, seven parents said that they did not receive and were not offered any services by the military. Multiple respondents also highlighted the lack of sufficient legal assistance. Specifically, five parents stated that they would have liked legal assistance but none was available, and seven parents stated that the legal services offered by the military did not meet their needs. For example, one parent stated that they requested a waiver to receive the services of a Special Victims' Counsel, but the request was denied for reasons that are unclear. Standards for Internal Control in the Federal Government states that management should externally communicate the necessary quality information to achieve the entity's objectives. Specifically, management communicates with and obtains quality information from external parties, including the general public and, in this case, victims' families. However, while each organization, such as the FAP, may provide information to families relevant to that organization's responsibilities and services, the military services have not established efforts to comprehensively inform victims' families about how child abuse incidents are to be addressed by each responsible organization, for example by consolidating information to help families understand the process and the services available to them. While DOD officials stated that they have plans to develop such a guide for responding to incidents of problematic sexual behavior in children and youth, they stated that they do not have plans to develop a similar guide for responding to incidents of child abuse because information is already available from a number of different sources. However, the parents we spoke with had challenges locating this information in a timely manner following an incident of child abuse and highlighted the need for additional information in a consolidated format to avoid confusion and to more easily receive necessary services. Without each military service establishing efforts to comprehensively inform victims' families about how reported incidents of child abuse will be addressed, affected families may be confused about the process and where to go for information. In addition, they may not receive the services that they are entitled to and need, such as a Special Victims' Counsel or a legal assistance attorney, because they do not know that these resources are available. As a result, DOD may not be providing comprehensive responses to reported incidents of child abuse.
Rather, service guidance grants installation commanders the authority to remove children from unsafe homes on a temporary basis. Guidance describing this authority is not standardized across the services, and installation officials overseas stated that additional guidance would help clarify the situations in which a child can be removed from an unsafe home. For example, according to Army guidance, an installation commander may authorize emergency placement care when abuse is substantiated and when neither judicial authorization nor parental consent can be obtained, and the removal is necessary to avoid risk of imminent death, serious bodily harm, or serious mental or physical abuse. In addition, commanders may take action in situations when medical protective custody is not appropriate. Per Navy guidance, commanders can only use this authority in situations where there is substantial reason to believe the life or health of the child is in "real and present danger." Air Force guidance states that base security and unit leadership are responsible for overseeing the appropriate removal or placement of children, with consultation and guidance from the FAP. Per Marine Corps guidance, commanders may implement a child removal order designed for short-term placement of a child into a place of safety. Individual installation commanders are responsible for issuing a written policy setting forth the procedures and criteria for the removal of child victims of abuse, or other children in the household, when they are in danger of continued abuse or life-threatening child abuse. Officials at installations overseas stated that the decision to remove a child from an abusive home can vary depending on the commander's comfort level in doing so. For example, officials at two installations described a case in which a commander removed a child from the home in a situation of suspected abuse, and a parent then requested an Inspector General investigation questioning the commander's authority to do so. Installation officials stated that the complaint to the Inspector General was not substantiated, but that the ambiguity of the guidance, coupled with the possibility of a commander having his or her actions reviewed by the Inspector General, could affect a commander's willingness to take action in similar cases. Medical personnel we spoke with highlighted examples where military hospitals overseas have admitted child abuse victims for their safety in situations when installation commanders did not take action to otherwise remove the child from the home. In one example, an infant presenting with physical trauma consistent with abuse was admitted to the hospital for 1 month until the child could be returned to the United States and a state child welfare agency could respond to ensure the child's safety. Installation officials overseas responsible for addressing incidents of child abuse stated that they believe additional clarity regarding commander authorities would help commanders in making a determination about when to exercise their authority to remove an at-risk child from a home. In comparison to the services' guidance, some state child welfare agencies have comprehensive checklists and decision matrices to help officials make decisions regarding child removal. One child welfare agency we visited provided a list of 14 specific safety factors, including descriptions of each factor, and a list of 10 protecting interventions.
Safety factors include anything that may put a child in danger, for example, questionable caretaker explanations for a child's injuries or the family not allowing the child welfare agency access to the child. Protecting interventions include actions such as the family making use of community agencies or services as a safety resource, or the non-offending caretaker moving to a safe environment with the child. There is no comparably detailed guidance for military commanders. Standards for Internal Control in the Federal Government states that management should internally communicate the necessary quality information to achieve objectives. Quality information is reported down and across reporting lines to enable personnel to perform key roles in achieving objectives. However, legal officials and medical personnel at overseas installations stated that existing guidance regarding commander authority to remove children from potentially unsafe homes in overseas environments is unclear. For example, these officials stated that terms like "real and present danger" are not well defined, and that there may be no child welfare agency available overseas to provide guidance or services. These officials also stated that this threshold may be too high and could result in children suffering moderate neglect or abuse because it does not rise to the level of "real and present danger." Without guidance that clarifies and standardizes across the services the circumstances under which commanders may exercise their authority to remove children from potentially unsafe homes overseas, timely response to incidents may be inhibited and children may be left in unsafe situations. Commanders may also face adverse actions if their authority to remove a child from the home is not well defined and their decision comes under legal scrutiny.
<3.5. Availability of Certified Pediatric Sexual Assault Forensic Examiners Is Limited, Especially Overseas>
The availability of certified pediatric sexual assault forensic examiners across the military services is limited, especially overseas. Based on our analysis, from fiscal years 2014 through 2018, there were 1,448 incidents across the four military services that met DOD's criteria for child sexual abuse and may therefore have necessitated a sexual assault forensic exam. According to our analysis of FAP data over these 5 years, the average age of the victims involved was 10. However, according to Defense Health Agency officials, there are only four child abuse pediatricians who are certified to perform pediatric sexual assault forensic exams: two in the Navy, one in the Army, and one in the Air Force. In addition, according to these officials, the Army has seven sexual assault forensic examiners, initially certified to perform exams on adults, who have completed a 40-hour pediatric course, for a total of 11 certified pediatric examiners across the department. In comparison, according to these officials, there are a total of 466 sexual assault forensic examiners throughout the department who are certified to perform exams on adults: 161 are located overseas and 305 are located within the United States. As a result of this disparity between examiners certified to perform exams on adults and those certified for children, children affected by sexual abuse on military installations or as military dependents may lack access to qualified pediatric sexual assault forensic examiners.
This lack of access on overseas installations, which medical personnel identified as a significant concern, can prevent children from being examined in a timely manner or may subject them to further trauma if they are first examined by an untrained provider and have to be examined again. When victims of sexual assault receive a forensic exam, the exam may be provided either by a trained sexual assault forensic examiner (that is, a medical provider who has received specialized training in properly collecting and preserving forensic evidence) or by a medical provider who has not received such specialized training. Studies have shown that exams performed by trained sexual assault forensic examiners may result in shortened exam times, better quality health care delivered to victims, and higher quality forensic evidence collection, as well as better collaboration with the legal system and higher prosecution rates. Navy officials stated that pediatric sexual assault forensic examiner is not a billeted position at any installation, and Air Force officials stated that, due to inconsistent demand, there are no certified pediatric sexual assault forensic examiners billeted to any installation in Japan, which hosts the largest number of active duty U.S. servicemembers outside of the United States. Medical personnel we spoke with described two options to overcome the lack of certified pediatric examiners: call a certified pediatric examiner in the United States to guide a pediatrician on the overseas installation through the exam via telephone, or medically evacuate the victim to the United States. Although DOJ best practices for sexual assault exams note that telemedicine can result in significant positive changes in the methods of examination and evidence collection, medical personnel stated that it is inferior to an in-person exam because the person conducting the exam is not the actual certified examiner, which can open the exam findings up to legal challenges. Medical personnel also stated that a child may need to undergo multiple exams if the initial exam is not performed correctly, which, as noted previously, can add to a victim's trauma. Additionally, medical personnel stated that there can be technical challenges with getting the right equipment in place and training people who may quickly transition to another installation. Even if telemedicine processes were established at overseas installations, there are still only four child abuse pediatricians across the department who can consult on the exams, and they may not be available to consult on all cases. Further, medical personnel noted that using telemedicine for pediatric exams overseas may result in these exams being physically conducted by someone with little to no experience conducting any type of genital exam. This is because pediatricians in the military typically do not conduct any genital exams on children, even basic or preventative exams. In the event that a girl becomes pregnant, officials stated that she will be sent to a military adult obstetrician, and the military pediatrician would not conduct any of the relevant exams. These personnel also stated that the military does not conduct routine cervical exams on women until they are 21 years of age, so pediatricians likely have no practical experience conducting even standard exams.
A 2018 Department of Health and Human Services guide for child protective caseworkers noted that if health care providers do not routinely examine the genitalia of young children, they may mistake normal conditions for abuse or vice versa. One parent we spoke with about an incident of sexual abuse overseas stated that the child's pediatrician was not comfortable conducting such an exam but offered to take a "cursory peek" for anything concerning. The parent declined the offer because they knew the pediatrician was neither trained nor certified to perform such an exam. Although medical personnel stated that a medical evacuation to the United States for an exam is a potential option, medical evacuations are challenging because they can take 5 to 6 days. However, the physical evidence from a sexual assault should be collected as soon as possible, and ideally between 1 and 5 days after the assault, according to DOJ best practices. Additionally, installation medical personnel noted that medical evacuations can result in additional stress on the victim from travel, increased complexity of legal and investigation processes, and travel costs that may be greater than the cost of training local examiners. DOD medical personnel stated that medical evacuations can also be challenging because the exams are typically funded locally, so in some instances children can only receive the exam at medical facilities that have a memorandum of understanding in place with the military. For example, these officials described an incident of child sexual abuse in Okinawa, a remote location in Japan with no certified examiners. These personnel noted that while a medical evacuation to Hawaii would seem like a good solution because there is a trained pediatrician there who conducts sexual assault exams, that pediatrician can only examine children who have been referred directly by Hawaii's child welfare agency. These personnel noted that the next best option is San Diego, where there is a DOD child abuse pediatrician, but by the time the travel is arranged, which can take days, the evidence might no longer be available. These personnel suggested that instead of relying on medical evacuations or telemedicine, better options to ensure that child victims get timely access to care could include certifying pediatricians or adult sexual assault forensic examiners as pediatric examiners during mandatory training, or establishing shared regional assets.
DOJ protocols for sexual assault forensic exams state that these exams should be performed by a healthcare professional specially trained in collecting evidence relating to sexual assault cases, such as a sexual assault nurse examiner or other appropriately trained medical professional. In particular, female children who have not yet reached puberty should only be examined by health care providers specifically trained in pediatric sexual abuse. Further, related DOJ best practices state that evidence should be collected as soon as possible, ideally between 1 and 5 days post-assault. However, DOD does not have processes in place to help ensure that children who are sexually abused overseas have timely access to certified pediatric sexual assault forensic examiners. Without processes that help ensure timely access to certified pediatric examiners overseas, child victims of sexual abuse may not receive exams in time for the evidence to be collected for use in prosecution. In addition, the difficulty and time associated with obtaining an exam could potentially increase the stress and trauma of affected victims and their families. Further, because the availability of certified pediatric examiners is not standardized across military installations, child victims of sexual abuse may have access to different levels of care depending on the geographic location of the installation.
<4. DOD Collaborates with Interagency Partners to Address Reported Incidents of Child Abuse and Child-on-Child Abuse, but Challenges Remain>
DOD collaborates at various levels both inside and outside the department to address reported incidents of child abuse and child-on-child abuse. However, improving communication and establishing comprehensive agreements could enhance the information DOD receives about these incidents as well as the resources available to both the department and victims of abuse.
<4.1. DOD Collaborates with States and Localities to Ensure It Is Notified When Servicemembers or Military Dependents Are Involved in Reported Incidents of Child Abuse Outside the Installation>
DOD has successfully collaborated with a number of states to help ensure it receives notification from state authorities when servicemembers or military dependents are involved in reported incidents of child abuse off a military installation. DOD is required to address child abuse in military families. However, with approximately 70 percent of active-duty military families living off military installations in the civilian community, service officials do not always have visibility over these incidents since they may first be reported to the relevant civilian authorities instead of to the military. The Defense State Liaison Office has highlighted the importance of state statutes that require the collection and reporting of military affiliation to the appropriate military authorities as part of state child abuse cases, and has identified this as a key issue. According to a senior Defense State Liaison Office official, the office has successfully collaborated with a number of states on child abuse reporting measures to require or allow local jurisdictions to report incidents of child abuse in military families to relevant military service officials. According to DOD, at least half of the states have no such requirements, but at least one is considering passing a law to provide for the requirement.
According to this senior Defense State Liaison Office official, the effort will remain a key issue area for the office through at least fiscal year 2020 in order to continue to focus efforts on these remaining states. In August 2019, the Defense Health Board noted that child abuse can be difficult to quantify because of underreporting, and some studies suggest a lower rate of incidents being reported to the FAP if the incidents are first identified at a civilian facility. Therefore, it recommended that, in the absence of state legislation, DOD ensure that all U.S. military installations have memorandums of agreement in place with state child welfare agencies for bilateral information sharing on child abuse cases. A senior Defense State Liaison Office official stated that the office has sought legislation because prior efforts to establish memorandums of agreement were only focused on information sharing and did not specify procedures for state and local child welfare agencies to use in determining whether a family involved in an incident had a military connection. Additionally, the official noted that a statutory basis is important because otherwise state laws that limit with whom child welfare agencies can share information about child abuse cases may take precedence. For example, some states have expressed concerns that sharing information about an alleged, but not yet confirmed, incident of child abuse could be detrimental to a servicemember's career. We found that the extent of collaboration between the military and other state and local authorities (such as child welfare agencies) varied among the installations in our review. For example, child welfare agency officials in Virginia noted that state policies requiring that they notify the FAP about cases with a military affiliation have increased the amount of coordination between the state and the military. However, according to FAP officials at one installation we visited in North Carolina, where approximately 80 percent of dependent children live off the installation, it was rare to receive notification from some counties for child abuse cases with a military affiliation because, at the time of our visit, there was no state policy requiring it. DOD's continued focus on improving collaboration with the states that have not yet established such a requirement should help to increase the department's visibility over incidents occurring off the installation. It should also help to ensure that military families obtain the available FAP services for which they are eligible.
<4.2. DOD and DOJ Have Taken Some Actions to Increase Collaboration>
DOD and DOJ have taken some actions to increase collaboration in addressing the abuse of children on military installations. As previously discussed, the conference report accompanying the John S. McCain National Defense Authorization Act for Fiscal Year 2019 included a provision for the service Secretaries to seek to relinquish jurisdiction over offenses committed on military installations by individuals not subject to the Uniform Code of Military Justice, such as civilians and children. In response, according to DOJ officials, DOD and DOJ have, among other things, established a joint working group to coordinate on issues related to child-on-child sexual assault on military installations, including the relinquishment of exclusive federal jurisdiction to the states.
Both DOD and DOJ officials agreed that the federal justice system is not well suited to prosecuting juvenile offenses because it lacks a dedicated juvenile justice system, and that state courts, which aim to be rehabilitative in nature, are better suited to adjudicate these cases. Specifically, DOJ's Justice Manual states that the intent of federal laws concerning juveniles is to help ensure that state and local authorities will deal with juvenile offenders whenever possible. Working group officials stated that they are compiling a list of United States Attorneys' Offices and the military installations in their respective districts from which they have received referrals, as well as the types of jurisdictions at those installations. These efforts are designed to ultimately result in a comprehensive chart detailing the precise jurisdictional status of each military installation in the United States, which can then be used to inform discussions with each state about the relinquishment of exclusive federal jurisdiction. According to DOJ officials, the working group is also developing templates of coordination documents, such as letters and memoranda of understanding, for outreach with the states. Working group officials stated that the group has identified and is attempting to address other issues, such as privacy concerns related to information to be contained in DOD's centralized database for problematic sexual behavior in children and youth, which, as previously discussed, is under development. The difficulties of addressing child-on-child sexual assault are exacerbated when the incident occurs overseas, where no U.S. state authorities exist to assume jurisdiction. The Military Extraterritorial Jurisdiction Act can be used either to prosecute child offenders as adults for certain violent or controlled substance violations or to initiate federal delinquency proceedings. However, as discussed, while both DOD and DOJ officials stated that they prefer to refer children to state courts, this is currently not possible when the incident occurs overseas. Working group officials stated that this challenge is another issue being actively discussed by the group in an effort to identify potential solutions. For example, they stated that one idea under discussion relates to a specific Virginia state law that asserts concurrent jurisdiction over federal crimes committed by a child, to be assumed only if waived by the federal court or the United States Attorney. The discussion centered on the idea that the Virginia state law could potentially be applied extraterritorially. Therefore, if a sexual assault were to occur on an installation with exclusive federal jurisdiction in Virginia, or theoretically overseas where the United States has jurisdiction, the Virginia courts could assert jurisdiction as long as the relevant United States Attorney's office has waived jurisdiction. However, whether Virginia could use its courts to address matters that occurred overseas and where the juvenile offender is not a resident is not yet clear. Legal officials at one installation who are involved in the working group efforts stated that they were considering whether it was possible to have a single municipal court hold sole jurisdiction for any juvenile crimes occurring on overseas installations. However, officials stated that the working group continues to research and discuss these and other issues to improve collaboration between the two departments and identify solutions.
<4.3. DOJ Notices of Declination of Prosecution Do Not Typically Provide Adequate Detail About the Reasons to Inform Military Investigators>
Service officials stated that while DOD is typically notified by DOJ when it declines to prosecute the abuse of a child on a military installation, the notification does not consistently include detailed reasons for why the case was declined. Officials from the Army Criminal Investigation Command, the military criminal investigative organization with the largest number of cases, stated that they are not informed of the reasons for case declinations because they have been told that the information is considered an attorney work product. Officials from the other military criminal investigative organizations stated that for some cases they do receive the reasons why cases are declined. However, DOJ officials stated that in cases where a United States Attorney does notify DOD of a declination and the reason, the reason may be very vague, such as "insufficient evidence," and may not detail the insufficiencies. DOJ officials stated that while a case may be declined for various reasons, there are three primary reasons for declinations: (1) insufficiency of the evidence (not enough admissible evidence to obtain and sustain a conviction beyond a reasonable doubt); (2) the person is subject to prosecution under another jurisdiction, such as in a state court system; or (3) there is an adequate noncriminal alternative to criminal prosecution. Officials within the Executive Office for United States Attorneys stated that they were not aware of any standard letters used to notify DOD of prosecutorial decisions and that the format and content of the notification are office dependent. According to DOJ officials, the investigating organization is to inform victims of a declination of prosecution. However, military law enforcement officials from two services stated that the responsibility for informing victims of a declination of prosecution would depend on the circumstances of the individual case, such as whether formal charges had been preferred and any discussion between the military criminal investigative organization and the United States Attorney. According to some of the parents we spoke with, this process does not always result in victims or their families receiving timely notification of a prosecution declination, including the reasons for the declination. For example, one parent we spoke with highlighted the lack of information: they tried repeatedly for nearly one year to contact the military investigators for a case status update and, while in the process of filing an information request with DOJ, were finally told that their child's case had been declined for prosecution, with no additional information on the reasons for the declination. Another parent stated that the Assistant United States Attorney informed them that a child-on-child abuse case would not be prosecuted due to a lack of strong evidence, specifically, a poor child forensic interview conducted by the military criminal investigative organization and the mishandling of electronic evidence. DOJ has committed to assisting DOD in responding to incidents of child-on-child abuse through the working group, as discussed previously. Additionally, DOJ has begun tracking referrals made to United States Attorneys by DOD for child-on-child sexual offenses.
Specifically, in September 2018, the Director of the Executive Office for United States Attorneys issued a memorandum that instructed all United States Attorneys to begin tracking referrals of child-on-child sexual offenses from the military. According to these data, between October 1, 2018, and August 5, 2019, the military referred 63 of these cases to United States Attorneys for prosecution. Two of these cases were accepted for prosecution and 19 were declined; the remaining cases were either referred to state or local authorities or were still pending. Per the memorandum, this information is to be provided, on a monthly basis, to the Office of the Deputy Attorney General, the lead DOJ office for the working group. DOJ's Principles of Federal Prosecution recommends that whenever an attorney declines to prosecute, the prosecutor should ensure that the decision and reasons are communicated to the investigating agency involved and to any other interested agency. In addition, Standards for Internal Control in the Federal Government states that management should externally communicate the necessary quality information to achieve objectives. Specifically, management selects appropriate methods of communication, such as a written document in hard copy or electronic format or a face-to-face meeting. Management periodically evaluates the entity's methods of communication so that the organization has the appropriate tools to communicate quality information within and outside of the entity on a timely basis. However, United States Attorneys are not consistently communicating to the military criminal investigative organizations the reasons for declining to prosecute DOD cases involving child victims. Without seeking avenues to improve communication between the military criminal investigative organizations and United States Attorneys for relevant cases involving child victims, to help ensure that investigators are notified when prosecution is declined, investigators may not be informed of the reasons why a case was declined, such as investigative deficiencies or weaknesses. As a result, DOD may be limited in its ability to improve investigative processes or identify areas where additional investigative training may be needed to improve future incident resolution. Improving this communication, through the ongoing DOD and DOJ working group or by other means, could also increase the information DOD receives about incident outcomes. Additionally, victims and their families may be better informed of their case disposition and the reasoning behind that disposition.
<4.4. The Military Services Do Not Consistently Make Use of Children's Advocacy Center Resources Available for Child Victims of Abuse>
Per the National Children's Alliance, most military installations in the United States with FAP services are located within 50 miles of a Children's Advocacy Center (CAC). However, military families may not be able to access CAC services because, according to a 2019 study conducted by the Alliance, only 7 percent of CACs with military installations in their service area reported having a memorandum of understanding, which is needed to authorize services associated with a FAP referral. In addition, according to the Alliance's 2019 study, while 66 percent of service FAP offices reported having a relationship with their local CAC, 47 percent of those offices reported that contact with the local CAC was infrequent. As shown in figure 5, there are CACs in each state.
CACs have considerable experience working with abused children. Specifically, according to the National Children's Alliance, in 2018 CACs collectively served over 367,000 children, conducted over 260,000 forensic interviews, and completed over 91,000 medical exams and treatments. Further, CACs provide a child-friendly environment to conduct these interviews and exams, which are then reviewed by a multidisciplinary team that includes medical, law enforcement, mental health, and legal personnel, victim advocates, and state child welfare agencies. The purpose of the multidisciplinary team is to determine how best to support the child, such as through therapy, courtroom preparation, and victim advocacy. With regard to child forensic interviews, CACs work to minimize retraumatization of a child by conducting only one comprehensive interview of the child, which is typically recorded and involves a team viewing the interview from a separate room. The recorded interview can then be shared with other interested parties with a need to know, including doctors, police, lawyers, therapists, investigators, and judges. This prevents the child from having to talk about the traumatic experience repeatedly in environments where they may be uncomfortable, such as in a police station, where they may think they are in trouble. Officials from the Naval Criminal Investigative Service stated that they prefer to use CACs for child forensic interviews when available and where agreements are in place. Both the Army's and the Air Force's military criminal investigative organizations stated that, depending on the circumstances of the case, they may make use of CACs when, for example, agents qualified in child forensic interviews are unavailable. At one U.S. installation we visited, military criminal investigators told us that due to personnel transfers they sometimes do not have investigators available who are qualified to conduct these interviews. Other military criminal investigators with whom we spoke noted that the lack of continuous training for military child forensic interviewers is challenging because regular practice is needed to develop and maintain the skillset. One investigator stated that even though they had not conducted a child forensic interview in 4 years, they were still technically qualified to conduct these interviews. Despite their ability to conduct the interviews, we spoke to military criminal investigators who preferred to rely on child forensic interviewers from the CACs, who have more expertise because of the volume of interviews that they conduct. In September 2012, we found that agencies that articulate their agreements in formal documents can strengthen their commitment to working collaboratively. However, according to installation and CAC officials, at four of the U.S. installations we visited there either was no formal agreement in place with the local CAC or officials noted that maintaining the agreement was challenging because of the limits that military turnover puts on their ability to build such partnerships. DOD has assigned the responsibility to establish formal agreements with counterparts in the community, such as CACs, to the Family Advocacy Committee at each individual installation. However, given that, according to the National Children's Alliance's 2019 study, only 7 percent of CACs with a military installation in their area reported having such an agreement in place, efforts to develop installation-level agreements with CACs have had limited success.
In 2015, the Federal Bureau of Investigation established a nationwide memorandum of understanding with the National Children's Alliance to use CACs to conduct forensic interviews. DOD FAP officials stated that a similar agreement between the military services and the National Children's Alliance would benefit military families. In August 2019, the National Children's Alliance recommended the development of a national memorandum of understanding between the National Children's Alliance, the service FAPs, and the military criminal investigative organizations within each service. Similarly, in August 2019, a report by the Defense Health Board recommended the development of memorandums of agreement with external entities, such as the National Children's Alliance and state child welfare agencies. DOD FAP and National Children's Alliance officials noted that discussions about establishing these types of agreements are not new and believed that agreements would be most effective between the National Children's Alliance and each respective military service and military criminal investigative organization, rather than at the installation level. As of February 2019, officials from three of the services indicated that while discussions had been underway, none of these military services had an established agreement, though the status of their efforts varied. For example, as of September 2019, Air Force officials described the effort as being in its infancy, with no established timeframes to achieve goals. Marine Corps FAP officials stated that they were exploring the feasibility of establishing an agreement with the National Children's Alliance, and Navy FAP officials stated that they were developing a draft agreement for services and support to families impacted by problematic sexual behavior in children and youth. However, given the need for services associated with any type of abuse, such an agreement should not be restricted to problematic sexual behavior. The John S. McCain National Defense Authorization Act for Fiscal Year 2019 stated that the Secretaries of the military departments shall permit, facilitate, and encourage multidisciplinary teams at the installations to collaborate with appropriate civilian agencies for services to support child abuse victims. A national memorandum of understanding could help to break down some of the currently cited barriers to collaboration between CACs and the military, and facilitate such a multidisciplinary approach to addressing incidents of child abuse. For example, DOJ has provided funding for CAC-military partnership pilot projects, which are aimed at improving coordination between CACs and the military to address reported incidents of child abuse. Information from current CAC-military partnership pilot projects indicates that a common barrier to coordination of services is the lack of continuity in staffing and leadership among their military counterparts. A base commander's assignment at a post is time limited, as are the assignments of some military investigative personnel. These frequent changes in staffing and leadership can result in changes in leadership styles, priorities, and methods of operation, and can require a perpetual cycle of building relationships and revising protocols with new counterparts. Without a memorandum of understanding in place between each military service and the National Children's Alliance, coordination between the military services and the CACs will continue to be ad hoc and dependent on the relationships of individuals at each installation.
Further, without such agreements, the military services may not be fully aware of CAC services and thus may not effectively leverage their facilities or personnel to help address incidents of child abuse involving military dependents.
<5. Conclusions>
While DOD has taken steps to address recent incidents of child-on-child sexual abuse reported by the media, by establishing policies and beginning to develop a centralized database for problematic sexual behavior in children and youth, the department faces broader challenges related to visibility, process, and collaboration in addressing the abuse of children. For example, DOD's visibility over incident outcomes and over the extent to which children have been abused by an adult or another child is limited by standalone databases, information-sharing challenges, and personnel discretion at the installation level. As DOD develops a centralized database on problematic sexual behavior, it could address some of these challenges by expanding the scope of the database to include any abuse of a child, regardless of offender and type of abuse, and by making key decisions related to its development. Further, additional guidance and processes are needed to help reduce information-sharing challenges and installation-level discretion in the tracking and reporting of these incidents. Until DOD resolves these challenges, it will continue to have limited visibility over the extent to which children have been affected by abuse on military installations or as military dependents. Additionally, the department faces gaps in its existing processes for responding to and resolving incidents of child abuse that should be addressed as it continues to develop processes related to problematic sexual behavior in children and youth. For example, given concerns expressed by medical personnel across the military services, DOD should expand the voting membership of the IDC to include medical personnel to ensure that stakeholders with pertinent knowledge and expertise are included. It is critical that IDC determinations are made with all of the relevant information available. Moreover, qualified medical personnel play an essential role in responding to children who have been abused, such as by conducting sexual assault exams. However, according to DOD officials, there are only 11 certified pediatric sexual assault examiners across the department. Without processes to ensure that children who are sexually abused overseas have timely access to a qualified examiner, child victims of sexual abuse may not receive exams in time for the evidence to be collected for use in prosecution and may experience additional stress and trauma. Until DOD addresses these process-related challenges, among others, child victims and their families may not receive the assistance, care, and services that they need. Finally, while DOD has successfully collaborated with a number of states to increase information sharing and with DOJ to address child-on-child sexual offenses occurring on military installations, there are opportunities for DOD to improve its collaboration with external partners to the benefit of military families. For example, there are opportunities to improve communication between the military criminal investigative organizations and United States Attorneys to ensure that DOD is aware of declinations of cases involving the abuse of children and why they were declined. Such avenues could, among other things, help identify needed changes to investigative processes or training.
Further, there are opportunities to facilitate awareness and increase the military services' use of CAC resources, such as through the establishment of a national agreement between the National Children's Alliance and each military service. Ultimately, improving interagency collaboration could enhance DOD's visibility over these incidents and increase the resources available to both the department and victims of abuse.
<6. Recommendations for Executive Action>
We are making a total of 23 recommendations, including 11 to the Secretary of Defense, three to the Secretary of the Army, six to the Secretary of the Navy, and three to the Secretary of the Air Force.
The Secretary of Defense, in collaboration with the Secretaries of the military departments, should expand the scope of the department's centralized database on problematic sexual behavior in children and youth, which is under development, to also track information on all incidents involving the abuse of a child (physical, sexual, emotional, and neglect) reported to the Family Advocacy Program or investigated by a military law enforcement organization, regardless of whether the offender was another child, an adult, or someone in a noncaregiving role at the time of the incident. (Recommendation 1)
The Secretary of Defense, in collaboration with the Secretaries of the military departments, should, as part of the ongoing development of the centralized database, identify and define the elements to be tracked by each responsible organization, such as the Family Advocacy Program and military law enforcement. (Recommendation 2)
The Secretary of Defense, in collaboration with the Secretaries of the military departments, should develop a plan for how it will use the data it will collect in the centralized database to help ensure data-driven decision-making is used to inform program efforts. (Recommendation 3)
The Secretary of Defense, in collaboration with the Secretaries of the military departments, should establish a reliable schedule for the development and implementation of the centralized database on problematic sexual behavior in children and youth that includes key activities, the timeframes and resources needed to execute them, and GAO-identified practices for developing and maintaining a reliable schedule. (Recommendation 4)
The Secretary of Defense, in collaboration with the Secretaries of the military departments, should direct the service Family Advocacy Programs and military law enforcement organizations to document in their respective databases the date that they notified the other entity of a reported incident of child abuse. (Recommendation 5)
The Secretary of Defense, in collaboration with the Secretaries of the military departments, should issue guidance that describes the process through which the service Family Advocacy Programs are to receive and incorporate information into their central registries regarding child abuse allegations and determinations involving their servicemembers and dependents that were recorded by another service's installation Family Advocacy Program. Such guidance should include a mechanism to monitor that the process is occurring consistently. (Recommendation 6)
The Secretary of the Army should develop a process to monitor how reported incidents of child abuse are screened at installations to help ensure that all reported child abuse incidents that should be presented to an Incident Determination Committee are consistently presented and therefore tracked. (Recommendation 7)
The Secretary of the Navy should develop a process to monitor how reported incidents of child abuse are screened at installations to help ensure that all reported child abuse incidents that should be presented to an Incident Determination Committee are consistently presented and therefore tracked. (Recommendation 8)
The Secretary of the Navy should ensure that the Commandant of the Marine Corps develops a process to monitor how reported incidents of child abuse are screened at installations to help ensure that all reported child abuse incidents that should be presented to an Incident Determination Committee are consistently presented and therefore tracked. (Recommendation 9)
The Secretary of the Air Force should develop a process to monitor how reported incidents of child abuse are screened at installations to help ensure that all reported child abuse incidents that should be presented to an Incident Determination Committee are consistently presented and therefore tracked. (Recommendation 10)
The Secretary of Defense should ensure that the Under Secretary of Defense for Personnel and Readiness, in coordination with the Director of the Department of Defense Education Activity, clarifies Department of Defense Education Activity guidance to define what types of incidents must be reported as serious incidents to help ensure that all serious incidents of which Department of Defense Education Activity leadership needs to be informed are accurately and consistently reported by school administrators. (Recommendation 11)
The Secretary of Defense, in collaboration with the Secretaries of the military departments, should expand the voting membership of the Incident Determination Committee to include medical personnel with the requisite knowledge and experience. (Recommendation 12)
The Secretary of the Army should establish efforts to comprehensively inform victims' families about how reported incidents of child abuse will be addressed following the report, such as a comprehensive guide that explains the process the Family Advocacy Program and military law enforcement organizations will follow, and available victim services. (Recommendation 13)
The Secretary of the Navy should establish efforts to comprehensively inform victims' families about how reported incidents of child abuse will be addressed following the report, such as a comprehensive guide that explains the process the Family Advocacy Program and military law enforcement organizations will follow, and available victim services. (Recommendation 14)
The Secretary of the Navy should ensure that the Commandant of the Marine Corps establishes efforts to comprehensively inform victims' families about how reported incidents of child abuse will be addressed following the report, such as a comprehensive guide that explains the process the Family Advocacy Program and military law enforcement organizations will follow, and available victim services. (Recommendation 15)
The Secretary of the Air Force should establish efforts to comprehensively inform victims' families about how reported incidents of child abuse will be addressed following the report, such as a comprehensive guide that explains the process the Family Advocacy Program and military law enforcement organizations will follow, and available victim services. (Recommendation 16)
The Secretary of Defense, in collaboration with the Secretaries of the military departments, should clarify, in guidance, the circumstances under which commanders may exercise their authority to remove a child from a potentially unsafe home on an overseas installation. (Recommendation 17)
The Secretary of Defense should ensure that the Under Secretary of Defense for Personnel and Readiness, in coordination with the Director of the Defense Health Agency, establishes processes that help ensure children who are sexually abused overseas have timely access to a certified pediatric sexual assault forensic examiner to conduct the examination. Initiatives could include certifying pediatricians or adult sexual assault forensic examiners as pediatric examiners during mandatory training or establishing shared regional assets. (Recommendation 18)
The Secretary of Defense, in collaboration with the Deputy Attorney General, should seek avenues to improve communication between the military criminal investigative organizations and United States Attorneys for relevant cases involving child victims to help ensure that investigators are notified when prosecution is declined, including the reasons for the declination when appropriate, such as details about any investigative deficiencies. (Recommendation 19)
The Secretary of the Army should seek to develop a memorandum of understanding with the National Children's Alliance that makes children's advocacy center services available to all Army installations and thereby increase awareness of those services across the department. (Recommendation 20)
The Secretary of the Navy should continue to develop a memorandum of understanding with the National Children's Alliance that makes children's advocacy center services available to all Navy installations and thereby increase awareness of those services across the department. (Recommendation 21)
The Secretary of the Navy should ensure that the Commandant of the Marine Corps continues to develop a memorandum of understanding with the National Children's Alliance that makes children's advocacy center services available to all Marine Corps installations and thereby increase awareness of those services across the service. (Recommendation 22)
The Secretary of the Air Force should seek to develop a memorandum of understanding with the National Children's Alliance that makes children's advocacy center services available to all Air Force installations and thereby increase awareness of those services across the department. (Recommendation 23)
<7. Agency Comments and Our Evaluation>
We provided a draft of this report to DOD for review and comment. In its written comments, DOD concurred with 16 recommendations, partially concurred with six recommendations, and did not concur with one recommendation. DOD also provided technical comments (referred to as enclosure 1 in its written comments), which we incorporated as appropriate. DOD's written comments are summarized below and reprinted in appendix VI. For the 16 recommendations with which DOD concurred, DOD's written comments discuss ongoing and planned efforts to implement our recommendations and, in some cases, provide target completion dates. DOD did not concur with our first recommendation to expand the scope of its centralized database on problematic sexual behavior in children and youth to track information on all incidents involving the abuse of a child reported to the FAP or investigated by a military law enforcement organization.
In its written comments, DOD stated concerns related to privacy and protecting information collected and shared on the alleged conduct of juveniles. DOD also cited a statutory requirement to not disclose directly or indirectly certain juvenile records during the course of juvenile delinquency proceedings and stated that it is the department s position that it is imperative to protect sensitive juvenile data with any database. We agree that protecting sensitive juvenile data is imperative and acknowledge in the report that privacy and data-safeguarding precautions such as role-based permissions and other physical, technical, and administrative controls will need to be taken, as they were in the development of the Defense Sexual Assault Incident Database. In addition, as discussed in the report, the department already maintains databases that include information about both adults and children, such as the service FAPs central registries and the databases of the various military criminal investigative organizations, which contain data on both adult and juvenile offenders and victims. DOD does not assert that it would be impossible to establish role-based permissions and the sorts of physical, technical, and administrative controls that would protect the privacy rights of individuals whose information appeared in a central database like the one we recommend. Moreover, the existence of other DOD databases that incorporate such measures supports the notion that it is possible to develop such a database in this situation. Doing so would provide the information needed to track the extent to which children have been affected by abuse and problematic sexual behavior, while safeguarding the personal information of minors. DOD s written comments also stated that the report conflates three separate and distinct constructs of behavior: juvenile misconduct, problematic sexual behavior in children and youth, and child abuse and neglect committed by adults. As described in our scope and methodology, the scope of our review included child abuse inflicted by both adults and children, which, according to DOD definitions, includes the three categories of behavior noted above. As stated in our report, information is tracked in multiple standalone databases, due, in part, to who inflicted the abuse; as a result, it is difficult to know the extent to which children have been affected by abuse on military installations or as military dependents. In addition, while the response process differs between incidents of adult- inflicted child abuse and incidents of problematic sexual behavior, DOD officials acknowledged that the organizations involved in the response process and the primary data sources are the same. As we also noted, officials stated that a centralized database for all child abuse incidents, tracking the FAP s determination about whether an incident met DOD s criteria for abuse, the investigation, and resolution, would be beneficial in determining what happened in a particular case. These officials further stated that such a database would benefit commanders oversight of cases for which they are responsible. In addition, without a centralized database that tracks all incidents of abuse involving children, DOD and Congress do not know the extent to which children have been affected by abuse on military installations or as military dependents, or how such incidents have been responded to and resolved. This makes it difficult to identify and address trends that could lead to further prevention efforts. 
As such, we continue to believe that this recommendation is valid and should be implemented. DOD partially concurred with recommendation 5 to direct the service FAPs and military law enforcement organizations to document in their respective databases the date they notified the other entity of a reported incident of child abuse. In its written comments, DOD stated that it will analyze the efficiency, cost, and feasibility of recording the notification date to law enforcement in FAP databases and that it plans to incorporate a notification field as part of new data standards for DOD s criminal justice agencies. Similarly, DOD also partially concurred with recommendation 6 to issue guidance that describes the process through which the service FAPs receive and incorporate information into their central registries regarding child abuse allegations and determinations involving their servicemembers and dependents that were recorded by another service s installation FAP, and that the guidance include a mechanism to monitor that the process is occurring consistently. DOD stated that it will review FAP data reporting policy to explore the potential to reference this process in the scheduled reissuance of DOD policy in 2023. DOD further stated that such information sharing is limited to reported incidents of child abuse that were determined to have met DOD s criteria for abuse rather than all abuse allegations. We continue to believe that issuing guidance that extends to both allegations and determinations would provide better assurance that the services have complete and up-to-date information about their personnel and their dependents, and increase their visibility over incidents of child abuse. DOD partially concurred with recommendation 12 to expand the voting membership of the IDC to include medical personnel with the requisite knowledge and experience. In its written comments, DOD agreed that the inclusion and consideration of medical information in the determination process is important, and stated that the current process includes medical personnel as nonvoting members. DOD also stated that it will engage the researchers who developed the IDC algorithm and process, as well as other stakeholders including the Defense Health Agency and the military services for collaborative input and guidance for a forthcoming revision of the relevant DOD Manual. However, as discussed in the report, medical personnel we spoke with at installations stated that they are not always included in the process, and if they are, their medical expertise is not always included as part of the final determination, contrary to best practices for substantiating child abuse allegations. Further, if medical personnel are not voting members, other clinical duties may take precedence. Therefore, we continue to believe that this recommendation is valid. For recommendations 13, 14, and 16, the Army, the Navy, and the Air Force concurred that they should establish efforts to comprehensively inform victims families about how reported incidents of child abuse will be addressed following the report, such as a comprehensive guide that explains the process and available victim services. However, the Marine Corps partially concurred with the related recommendation 15, stating that it is out of scope for the FAP to explain the processes that law enforcement organizations will follow. 
However, our recommendations state only that the FAP and military law enforcement processes should be effectively communicated to the families, not that the FAP would have to determine or communicate the law enforcement processes to affected families. Further, DOD s written comments stated that Marine Corps Order 1754.11 addresses the recommendation because it directs victim advocates to be assigned to the non-offending parent of a victim of child abuse who requests services. However, parents we spoke with indicated that they were not aware of all available services and resources offered by the military, and that a comprehensive guide outlining the process would have helped them understand what was going to happen. For these reasons, we continue to believe that the recommendation is valid. For recommendations 20 through 23, DOD s written comments stated that the services concurred with the overall recommendation to seek to establish memorandums of understanding with the National Children s Alliance that make children s advocacy center services available to all military installations and thereby increase awareness of those services across the department. While the Marine Corps and the Air Force concurred (recommendations 22 and 23), DOD noted that individual service differences in organizational structure and process are reflected in the nuances of their responses. For example, the Army partially concurred with recommendation 20. DOD stated that the Army is working with the National Children s Alliance to develop a broad memorandum of understanding to support partnerships between military installations and local children s advocacy centers. The agreement is intended to assist in providing support services, education, and prevention services to military families and investigations of child abuse and problematic sexual behavior with a goal to finalize the agreement in fiscal year 2021. The Army also plans to pursue local agreements with children s advocacy centers who may not participate in the broader service-wide agreement. We believe that such local agreements, in addition to a broader memorandum of understanding with the National Children s Alliance, would be beneficial and that these actions would meet the intent of our recommendation. Likewise, the Navy partially concurred with recommendation 21. DOD s written comments stated that the Navy seeks to develop memorandums of understanding both broadly with the National Children s Alliance, as well as with local children s advocacy centers who may not be accredited through the National Children s Alliance. Similar to the Army, we believe that such local agreements would be beneficial in addition to a broader agreement with the National Children s Alliance and, that together, they would meet the intent of our recommendation. DOD s comments also stated that the Navy s planned agreement with the National Children s Alliance will outline services and support to families affected by problematic sexual behavior. However, as previously discussed in this report, we believe that such an agreement should not be restricted to problematic sexual behavior given the need for services associated with any type of abuse. As such, we continue to believe that the recommendation is valid. We are sending copies of this report to the appropriate congressional committees, the Attorney General of the United States, the Secretary of Defense, the Secretary of the Army, the Secretary of the Navy, the Secretary of the Air Force, and the Commandant of the Marine Corps. 
In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or members of your staff have any questions regarding this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix VII. Appendix I: Scope and Methodology Department of Defense (DOD) policy defines child abuse as the physical, sexual, or emotional abuse, or neglect of a child by a parent, guardian, foster parent, or caregiver. Our review included any abuse of a child (emotional, physical, or sexual abuse, or neglect) by an adult regardless of their caregiving status and child-on-child abuse any physical or sexual abuse of a child (under the age of 18) by another child. To assess the extent to which DOD has visibility over reported incidents of child abuse, including child-on-child abuse, occurring on military installations or involving military dependents, we analyzed data from the three primary organizations that DOD officials identified as having responsibility for tracking these incidents: (1) the military services Family Advocacy Programs (FAP), (2) the military criminal investigative organizations, and (3) the DOD Education Activity (DODEA). First, we analyzed FAP data from the Army, the Navy, the Marine Corps, and the Air Force on all reported incidents of child abuse for fiscal years 2014 through 2018. We selected this timeframe to evaluate trends over 5 years, and fiscal year 2018 was the most recent year for which complete data were available at the time of our review. Specifically, we analyzed the data to determine the number of reported incidents of child abuse by service and the percent of those that met DOD s criteria for child abuse. Because the services are required to track more detailed information about incidents of child abuse that they determined met DOD s criteria for child abuse, we conducted a more detailed analysis of these incidents to describe their characteristics, such as the status of the offender, the relationship between the offender and the victim, the age of the victim, and the type of abuse (emotional, physical, sexual, or neglect). To assess the reliability of the service FAPs child abuse data, we reviewed related documentation; assessed the data for errors, omissions, and inconsistencies; and interviewed officials. We determined that the data were sufficiently reliable to describe trends in reported incidents of child abuse across the services and characteristics of such incidents from fiscal years 2014 through 2018. Second, we analyzed data from the military criminal investigative organizations the Army Criminal Investigation Command, the Naval Criminal Investigative Service, and the Air Force Office of Special Investigations for the same time period for all investigations with a child victim. We also analyzed child victim investigation data from the U.S. Marine Corps Criminal Investigation Division, a federal law enforcement agency that also investigates some offenses involving child victims. Specifically, we analyzed the data to identify trends in the number of investigations over the past 5 fiscal years. We also analyzed the investigation data to identify key characteristics of the investigations, such as the status of the offender, relationship between the victim and offender, and primary investigative agency. 
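The tabulations described above amount to grouping incident records by service and fiscal year, counting them, and computing the share that met DOD's criteria for abuse. The following sketch shows one way such a summary could be produced; it is illustrative only, and the column names (service, fiscal_year, met_criteria, abuse_type) and the sample records are assumptions for illustration, not the actual fields or contents of the FAP central registries or investigative case files.

```python
# Minimal sketch, assuming hypothetical column names; not GAO's actual analysis code.
import pandas as pd

def summarize_incidents(df: pd.DataFrame) -> pd.DataFrame:
    """Count reported incidents by service and fiscal year and compute the
    percentage of those incidents that met DOD's criteria for child abuse."""
    summary = df.groupby(["service", "fiscal_year"]).agg(
        reported_incidents=("met_criteria", "size"),
        met_criteria_pct=("met_criteria", lambda s: round(100 * s.mean(), 1)),
    )
    return summary.reset_index()

if __name__ == "__main__":
    # Tiny fabricated records used only to show the shape of the computation.
    sample = pd.DataFrame(
        {
            "service": ["Army", "Army", "Navy", "Navy"],
            "fiscal_year": [2014, 2014, 2014, 2014],
            "met_criteria": [True, False, True, True],
            "abuse_type": ["neglect", "physical", "neglect", "sexual"],
        }
    )
    print(summarize_incidents(sample))
    # Share of abuse types among incidents that met DOD's criteria.
    met = sample[sample["met_criteria"]]
    print((met["abuse_type"].value_counts(normalize=True) * 100).round(1))
```

Note that this sketch assigns one abuse type per incident; in the actual data an incident that met DOD's criteria can involve more than one type of abuse, which is why the type percentages reported in appendix III can sum to more than 100 percent.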
To assess the reliability of the military criminal investigative organizations child victim investigation data, as well as that of the U.S. Marine Corps Criminal Investigation Division, we assessed the data for errors, omissions, and inconsistencies, and interviewed officials. We determined that the data were sufficiently reliable to describe trends in child victim investigations across the services and the characteristics of such investigations from fiscal years 2014 through 2018. Third, we analyzed three sources of DODEA data: (1) child abuse reports from school years 2014-2015 through 2017-2018, (2) serious incident reports from school years 2013-2014 through 2017-2018, and (3) student misconduct records from school years 2016-2017 through 2017-2018. We selected these timeframes to evaluate serious incident report trends over 5 years and to align with the FAP and investigation data; school year 2017-2018 was the most recent year for which complete data were available at the time of our review. All DODEA records were redacted by DODEA personnel to ensure the privacy of students and DODEA personnel. DODEA child abuse reports track information about incidents of suspected or alleged child abuse or neglect. We analyzed DODEA s child abuse reports over 4 school years to identify trends in the number and type of child abuse reports as well as to describe characteristics of the reports. Specifically, we analyzed characteristics such as the relationship between the victim and the offender, the location of the reported abuse, and notifications by DODEA to external organizations, such as the FAP. To assess the reliability of DODEA s child abuse reports, we reviewed related documentation; assessed the data for errors, omissions, and inconsistencies; and interviewed officials. We determined that the data were sufficiently reliable to describe trends in and characteristics of child abuse reports from school years 2014-2015 through 2017-2018. DODEA serious incident reports track information about alleged or suspected serious incidents resulting in consequences greater than those normally addressed through routine administrative actions. We analyzed DODEA s serious incident reports relating to child-on-child abuse involving a violation of law or a sexual event over the past 5 school years to identify trends in the number and type of serious incident reports as well as to describe characteristics of the reports. Specifically, we analyzed the type of serious incident (assault/battery, child pornography, nonconsensual sexual contact, etc.), whether police were notified, whether the police investigated, and the type of school filing the report. To assess the reliability of DODEA s serious incident reports, we reviewed related documentation; assessed the data for errors, omissions, and inconsistencies; and interviewed officials. We determined that the data were sufficiently reliable to describe trends in and characteristics of serious incident reports from school years 2013-2014 through 2017-2018, and to compare serious incident reports to DODEA student misconduct records from school years 2016-2017 through 2017-2018. DODEA s student misconduct records are separate from child abuse reports and serious incident reports but may also be filed in relation to a serious incident and track information regarding disciplinary actions and the triggering incident, such as an abusive or indecent act. 
We requested and received all redacted DODEA student misconduct records over the past 5 school years that involved at least one of 26 incident types that we determined, through conversations with DODEA officials familiar with the records, could relate to a child-on-child serious incident. We received over 26,000 records, some of which related to the same incident, for example, according to DODEA officials, when more than one student was involved. For school years 2016-2017 and 2017-2018, we conducted a content analysis of the student misconduct records, using DODEA s Serious Incident Reporting Procedures, to determine the number of student misconduct records that school administrators, using that guidance, could have reasonably categorized as a violation of law or sexual event and filed a serious incident report. We selected these 2 school years for the analysis because DODEA s updated serious incident reporting guidance was issued in August 2016 and was in place for both school years. Because of the large number of DODEA student misconduct records, we conducted our content analysis in two stages. We first conducted an electronic search to identify potentially-relevant records and then conducted a manual review of all potentially-relevant records. One of our data analysts electronically searched the student misconduct record descriptions for key terms that could potentially signify that the incident was of a nature serious enough to warrant the filing of a serious incident report, per DODEA guidance. We selected the search terms using the DODEA guidance (e.g., assault, battery, and rape); additional terms that may signify a medical or police response (e.g., nurse, ambulance, blood, and police) because incidents resulting in an injury may be considered to be serious incidents per the guidance; and terms for common social media outlets (e.g., Facebook and Snapchat) because taking or sharing nude photos of another student without their knowledge is an example of a noncontact sexual act that should result in the filing of a serious incident report. This search resulted in 2,619 student misconduct records after removing duplicate records that we then manually reviewed. It is possible that we did not identify some student misconduct records that could have been categorized as serious incidents because we did not include some search terms that would have identified more. Two analysts independently reviewed each of the 2,619 student misconduct records, using the DODEA guidance, and recorded their determination that a record (a) could have been classified as a serious incident report per DODEA s guidance, (b) was unclear whether it could be classified as a serious incident report, or (c) should not have been classified as a serious incident report per DODEA s guidance. For records where the two analysts did not initially agree on a determination, they met and discussed the records and reached a final determination. We then compared the number of student misconduct records which we determined school administrators, using the guidance, could have reasonably categorized as a violation of law or sexual event and filed a serious incident report with the number of serious incidents recorded by DODEA for the same time period to determine the extent of DODEA s visibility into serious incidents. We discussed the student misconduct records, the content analysis, and the comparison to serious incident reports with DODEA officials. 
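The two-stage screen described above, an electronic keyword search followed by a manual dual-analyst review, can be illustrated with a short sketch. The record structure and field names below are assumptions, and the term list is abbreviated from the examples cited above; it is not the full set of search terms used in the review.

```python
# Illustrative sketch of the stage-one keyword screen, assuming a simplified
# record format; stage two (independent review by two analysts) would start
# from the flagged records this function returns.
import re

SEARCH_TERMS = ["assault", "battery", "rape", "nurse", "ambulance",
                "blood", "police", "facebook", "snapchat"]
PATTERN = re.compile("|".join(re.escape(term) for term in SEARCH_TERMS), re.IGNORECASE)

def screen_records(records):
    """Return unique records whose description mentions any search term."""
    seen_ids = set()
    flagged = []
    for record in records:
        if record["record_id"] in seen_ids:
            continue  # skip duplicate entries for the same record
        seen_ids.add(record["record_id"])
        if PATTERN.search(record.get("description", "")):
            flagged.append(record)
    return flagged

sample_records = [
    {"record_id": 1, "description": "Student pushed another student; the nurse was called."},
    {"record_id": 2, "description": "Student was tardy to class three times."},
    {"record_id": 1, "description": "Student pushed another student; the nurse was called."},
]
print([r["record_id"] for r in screen_records(sample_records)])  # prints [1]
```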
Further, we interviewed relevant DOD and service officials at the headquarters level and at a nongeneralizable sample of seven military installations to identify how DOD tracks reported incidents of child abuse from the time of a report to an ultimate adjudication, including how information is communicated within and across the services. We selected at least one installation per service as well as two joint installations, and selected locations based on the number of reported child abuse incidents and the number of investigated child-on-child abuse incidents over the past 5 fiscal years, as well as other factors. Specifically, we selected installations that over the past 5 fiscal years had a high number of reported incidents of child abuse or a high number of child-on-child abuse investigations or both in order to maximize the possibility we would interview officials, responders, and care providers who had responded to reported incidents of child abuse. Other selection factors included a mix of types of legislative jurisdiction (such as exclusive and concurrent jurisdiction), at least some installations with DODEA schools, a high number of DODEA serious incident reports, and a mix of geographic locations in the United States and overseas. Because we did not select locations using a statistically representative sampling method, the comments provided during our interviews with installation officials are nongeneralizable and therefore cannot be projected across DOD or a service, or any other installations. While the information obtained was not generalizable, it provided perspectives from installation officials that have assisted with the response to reported incidents of child abuse. We compared information from our data analyses and interviews to DOD guidance; GAO-identified practices for developing and maintaining a reliable schedule; GAO-identified leading practices for results-oriented management; and Standards for Internal Control in the Federal Government related to quality information, control activities, and monitoring activities. To assess the extent to which DOD has developed and implemented policies and procedures to respond to and resolve incidents of child abuse, including child-on-child abuse, occurring on military installations or involving military dependents, we reviewed relevant DOD and service policies, guidance, reports, and memoranda on child abuse, juvenile misconduct, and problematic sexual behavior in children and youth. We also conducted work at a nongeneralizable sample of seven military installations in the United States and overseas. At the installations, we interviewed FAP personnel, medical and mental health personnel, military law enforcement officials, legal personnel, Special Assistant United States Attorneys, military criminal investigators, chaplains, child development center personnel, school liaison officers, military family life counselors, DODEA personnel, and commanders about how they prevent, track, respond to, and resolve these incidents. To obtain the perspectives of parents and guardians of children who have been affected by abuse on military installations or while they were military dependents (either by an adult or another child), we interviewed 20 parents and guardians by phone that volunteered to speak with us about their perspectives on available resources and assistance, case communication, and the investigative and adjudicative processes. 
To develop the interview protocol for parents and guardians, we reviewed DOD and service policies, interviewed DOD officials, and reviewed our prior work related to sexual assault in the military. We also consulted with a GAO mental health professional on the appropriateness of the instrument as well as guidance on resources to offer participants if relevant. A survey specialist helped to design the interview protocol, another survey specialist reviewed it for methodological issues, and an attorney reviewed it for legal terminology and any other issues. Prior to interviewing parents and guardians, we pretested the interview protocol with three analysts who had children and had experience as a military servicemember or military dependent. We used the pretests to determine whether: (1) the questions were clear, (2) the terms used were precise, (3) respondents were able to provide information that we were seeking, and (4) the questions were unbiased. We made changes to the content and format of the interview protocol based on the results of our pretesting. Further, each team member was trained on the interview protocol to assure its consistent implementation across interviewers and participants. Due to the sensitivity of the information being discussed, we took several steps to help ensure a confidential and safe environment during the phone interviews. All information provided was handled confidentially callers names and contact information were not recorded in our notes and we did not audio record the interviews. We conducted interviews from June to September 2019. We took interview notes on paper and later entered them into a Microsoft Word form. Data entry was verified by the same analyst. The data were electronically extracted from the Word forms into a comma-delimited file that was then imported into Excel for analysis. We summarized the answers to questions about the characteristics of the incidents discussed, such as whether the offender was a child or an adult, the location of the incident, the military dependent status of the victim, and the servicemember status of the offender. Quantitative data analyses were conducted by one analyst and verified by a second analyst. We also conducted a content analysis of the narrative information to identify common themes related to items such as parents awareness of available victim services, the clarity of the response process, and areas for improvement. Two analysts reviewed the data collected from the interviews and agreed on the themes into which callers comments would be categorized. Standardized coding instructions were developed and tested. One analyst reviewed all the callers narrative comments and indicated in the spreadsheet if a theme was present or absent. A different analyst reviewed the first analyst s coding to see if they reached the same determination. For records where the two analysts did not initially agree on a determination, they met and discussed the records and reached a final determination. The codes were then counted to assess how many callers mentioned a given theme. Because we did not select participants using a statistically representative sampling method, the perspectives obtained are nongeneralizable and therefore cannot be projected across DOD, a military service, or installation. While the information obtained was not generalizable, it provided perspectives from parents and guardians who were willing to discuss their experiences with the reporting, response, and resolution processes. 
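The coding steps described above can be thought of as comparing two analysts' theme determinations for each interview, flagging disagreements for a reconciliation discussion, and then counting how many callers mentioned each theme in the final, agreed codes. The sketch below illustrates that logic; the theme names and data layout are hypothetical and do not reflect GAO's actual coding scheme or instrument.

```python
# Minimal sketch, assuming a simplified coding layout; not GAO's actual coding tool.
from collections import Counter

def compare_coding(coder_a: dict, coder_b: dict):
    """For one interview, return the themes the two analysts agree on and the
    themes that need a reconciliation discussion."""
    agreements = {t: coder_a[t] for t in coder_a if coder_a[t] == coder_b.get(t)}
    disagreements = [t for t in coder_a if coder_a[t] != coder_b.get(t)]
    return agreements, disagreements

def tally_themes(final_codes: list) -> Counter:
    """Count how many interviews mention each theme in the reconciled codes."""
    counts = Counter()
    for interview in final_codes:
        counts.update(theme for theme, present in interview.items() if present)
    return counts

coder_a = {"aware_of_services": True, "process_unclear": True}
coder_b = {"aware_of_services": True, "process_unclear": False}
agreed, to_reconcile = compare_coding(coder_a, coder_b)
print(to_reconcile)  # prints ['process_unclear'], to be resolved by discussion

final = [
    {"aware_of_services": True, "process_unclear": True},
    {"aware_of_services": False, "process_unclear": True},
]
print(tally_themes(final))  # Counter({'process_unclear': 2, 'aware_of_services': 1})
```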
Additionally, we observed each service s Incident Determination Committee (IDC) process through which installations determine whether an incident meets DOD s criteria for child abuse at a total of four installations. We also attended a symposium hosted by the National Center on Sexual Exploitation on problematic sexual behavior in children and youth. We compared the information from the selected installations, observations, and interviews to GAO-developed practices to enhance and sustain collaboration in interagency groups, Department of Justice (DOJ) best practices for sexual assault forensic examination kits, and Standards for Internal Control in the Federal Government related to quality information. To assess the extent to which DOD collaborates with other governmental and nongovernmental organizations to address incidents of child abuse, including child-on-child abuse, occurring on military installations or involving military dependents, we reviewed written agreements in place with civilian organizations at the nongeneralizable sample of U.S. installations in our review, such as agreements with local civilian law enforcement and state and local child welfare agencies about how incidents of child abuse on the installation are to be addressed. We also interviewed relevant officials from civilian organizations near the five U.S. installations in our review, such as state child welfare agencies, law enforcement organizations, prosecuting attorneys offices, and children s advocacy centers (CAC) to determine the extent of their collaboration with the military and any related challenges. In addition, we interviewed a senior official from the Defense State Liaison Office regarding their outreach to states to increase information sharing with state child welfare agencies. Further, we interviewed DOJ officials regarding the prosecution of juvenile crimes committed on overseas installations and on some U.S. installations and its coordination with DOD to address these incidents. Finally, we contacted officials from the National Children s Alliance, which accredits CACs, about its efforts with DOD to improve collaboration between the military and CACs. We compared the agreements and information obtained through interviews with DOJ Principles of Federal Prosecution, GAO-developed key considerations for interagency collaborative mechanisms, and Standards for Internal Control in the Federal Government related to quality information. Tables 2 and 3 present the DOD and non-DOD organizations we visited or contacted during our review to address our three objectives. We conducted this performance audit from January 2019 to February 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix II: Questionnaire for Interviews with Parents and Guardians of Children Affected by Abuse on Military Installations or While They Were Military Dependents To obtain the perspectives of parents and guardians of children affected by abuse on military installations or while they were military dependents, we interviewed parents and guardians who volunteered to speak with us about their perspectives on available resources and assistance, case communication, and the investigative and adjudicative processes. We announced our interest in anonymously interviewing parents and guardians of children affected by abuse on military installations or while they were military dependents and provided a toll-free telephone number and email address for volunteers to contact us. Department of Defense Military Community and Family Policy officials, who are responsible for Military OneSource a 24/7 connection for military families to information, answers and support agreed to post our announcement on the Military OneSource website. We also posted our announcement on our agency social media platforms and disseminated it through officials at some of the installations we visited. It was also featured in an article by a military- focused news outlet. Further details about our methodology for these interviews can be found in appendix I. The interview questionnaire follows. 1. In what state are you currently located, or if you re overseas, in what 2. Are you currently, or were you previously, associated with a particular military service, including as a military dependent? Which service? ______________ 3. Are you calling about abuse that your child experienced, or a child for whom you are a guardian experienced, or are you calling about someone else s child s experience? My child/a child for whom I am a guardian Someone else s experience Go to Question 4 End discussion 4. In what year did the abuse occur? (If multiple years, write the range.) Year provided 5. Was the abuse reported to any military or civilian government office? Continue to a a. In what year was the abuse first reported? 6. Did the abuse occur on the property of a military installation, including in military housing? What installation was it? _______________ a. Was the child who was affected by abuse a military dependent at the time of the incident? If no here and Q6 above is no, don t know, or prefer not to answer, End discussion Don t Know know, or prefer not to answer, End discussion If don t know here and Q6 above is no, don t If prefer not to answer here and Q6 above is no, don t know, or prefer not to answer, 7. Did the abuse occur in a child care facility, a home, a DOD school, or somewhere else? Interviewer: Check all they mention a. (Skip if the abuse occurred on an installation and the installation was provided in Q6) In what state or country did the abuse occur? 8. Was the individual who perpetrated the abuse a servicemember at the time of the incident? Continue to a Continue to a Continue to a a. Was the individual who perpetrated the abuse another child under the age of 18 at the time of the incident? Prefer not to answer 9. Was the individual who perpetrated the abuse a parent, guardian, foster parent, or someone in a caregiving role at the time of the incident, including an older sibling babysitting, or a teacher, etc.? 10. What organization was the abuse first reported to? 
For example, was it first reported to the Family Advocacy Program, military law enforcement, military criminal investigators, civilian law enforcement, the chain of command, Child Protective Services, or some other organization? Family Advocacy Program (FAP) Continue to a Military law enforcement (Security Forces, Military Police, Provost s Office, Master-at-Arms, etc.) Continue to a Military criminal investigators (CID, OSI, NCIS, Marine Corps CID) Continue to a Continue to a Chain of Command (to include Commander, Unit, Wing, etc.) Continue to a Child Protective Services (CPS) Skip to b Chaplain military or civilian government organization? Did you or the chaplain report the abuse to any other What office? What office? _______ then continue to a Was the abuse ever reported to a civilian Child Protective Services agency? Continue to b Continue to b Continue to b Continue to b Was the abuse ever reported to the Family Advocacy Program? Continue to c Continue to c Continue to c c. Had you ever heard of the Family Advocacy Program before this interview? Prefer not to answer 11. Are you aware that the Family Advocacy Program is responsible for assessing and providing support services to military families affected by child abuse? 12. (If child was abused by a parent/guardian/foster parent/someone in a caregiving role If Q9 = Yes) Were you notified by the Family Advocacy Program about whether the incident was considered to be child abuse, according to DOD criteria and policy? Continue to a Skip to Response to Abuse section Skip to Response to Abuse section Skip to Response to Abuse section a. Was the Family Advocacy Program s process for assessing the report of abuse and determining whether it met criteria to be considered child abuse clear to you? b. Is there anything that the Family Advocacy Program could do to clarify the process or make the process clearer? III. Response to Abuse 13. Did the child or family receive any services from the military related to the abuse, for example, psychological or legal counseling or medical care? Continue to a a. What services did the child or family receive from the military? b. What, if any, services provided by the military were particularly helpful? c. What, if any, services were provided by the military but did not meet your family s needs? i. Why didn t those services meet your family s needs? _________________________________________ d. What, if anything, could be improved about the services you received from the military, such as the services themselves, or the ease of access or timeliness of the services provided? 14. Were there services that your child or family were offered by the military, but that you did not receive, either because you did not need them or because of some other factor? Continue to a a. What type of services were offered but not received? b. Why did you not receive these services for example, was it by choice or was there some factor that prevented you from receiving them? _________________________________________ 15. Did your child or family receive any services from civilian organizations or providers related to the abuse, for example, psychological or legal counseling or medical care? Continue to a a. What services did your child or family receive from civilian organizations or providers? 16. Were there any services either through the military or a civilian agency that you think would have been helpful, but were not available? Continue to a a. What services? _________________________________________ IV. 
Investigation/Resolution of Abuse 17. Was the incident of abuse investigated by any law enforcement organization, including military or civilian law enforcement? For example, was the incident of abuse investigated by the military police, a military investigative organization, civilian state or local law enforcement, the Federal Bureau of Investigation, or some other law enforcement organization? Continue to a Skip to Miscellaneous Questions section Skip to Miscellaneous Questions section Skip to Miscellaneous Questions section a. What law enforcement organization or organizations conducted an investigation? If more than one law enforcement organization conducted an investigation, please tell me all the organizations. Military police (Security Forces, Military Police, Provost s Office, Marshal-at-Arms, etc) Military investigative organization (CID, OSI, NCIS, Marine Corps CID) 18. (If military conducted an investigation, see response to Q17a) What type of information did you receive from the investigating military organization during the course of the investigation, if any, such as status updates by phone, e-mail, or letter? _________________________________________ a. Did you have a point of contact that you could reach out to at the investigating military organization with any questions or for status updates? 19. After the investigation ended, were you informed about the outcome or informed of any next steps regarding any potential criminal or administrative action against the individual that perpetrated the abuse? Continue to a Continue to a a. Did you have a point of contact that you could reach out to with any questions about the outcome of the investigation or next steps? Prefer not to answer 20. What, if anything, would you recommend that DOD or the military services do to be more responsive to families of children who have been affected by abuse on military installations or as military dependents? 21. What, if anything, would you recommend DOD or the military services do to help prevent child abuse or child-on-child abuse? 22. Is there anything related to child abuse on military installations or of military dependents that we did not discuss but you think we should be aware of? 23. One last question: Was there anyone else present with you during any part of our conversation? Continue to a Appendix III: Characteristics of Incidents of Child Abuse Reported to the Military Services Family Advocacy Programs, Fiscal Years 2014-2018 Each military service s Family Advocacy Program (FAP) has a database referred to as the central registry where it tracks (1) reports of abuse that did not meet the Department of Defense s (DOD) criteria for child abuse, about which no identifiable individual information is tracked; and (2) information on reports of abuse that met DOD s criteria for abuse, which is linked to identifiable servicemembers, their family members, and the alleged offenders. Per DOD guidance, the services are to track 46 data elements on all reported incidents of child abuse that met DOD s criteria for abuse. The service FAPs only track information in their central registries related to child abuse where the offender was a parent, guardian, foster parent, or someone in a caregiving role. The following describes key characteristics of incidents of child abuse that met DOD s criteria for abuse as reported to the Army, the Navy, the Marine Corps, and the Air Force FAPs from fiscal years 2014 through 2018. Army FAP. 
Over the past 5 fiscal years, the Army FAP recorded 32,386 reported incidents of child abuse, of which 50 percent met DOD s criteria for child abuse. Of the incidents that met DOD s criteria for abuse, 66 percent involved neglect, 20 percent involved physical abuse, 17 percent involved emotional abuse, and 5 percent involved sexual abuse. The majority of incidents (97 percent) were intrafamilial meaning that the victim and the offender were from the same family, such as a parent or sibling and 2 percent of the incidents were extrafamilial or external to the family, such as a babysitter or a childcare provider. Half of the victims and 52 percent of the offenders were male. In addition, a quarter of offenders had prior FAP cases related to child abuse or domestic abuse that met DOD s criteria for abuse. Figure 6 depicts characteristics of incidents reported to the Army FAP that met DOD s criteria for child abuse over the past 5 fiscal years. Navy FAP. From fiscal years 2014 through 2018, the Navy FAP recorded 10,744 reported incidents of child abuse, of which 51 percent met DOD s criteria for child abuse. Of the incidents that met DOD s criteria for abuse, 59 percent involved neglect, 33 percent involved physical abuse, 14 percent involved emotional abuse, and 6 percent involved sexual abuse. The majority of incidents (96 percent) were intrafamilial and 4 percent of the incidents were extrafamilial. Slightly over half of the victims and offenders were male (52 percent). Additionally, since fiscal year 2017, when the Navy began tracking whether offenders had prior FAP cases related to child abuse or domestic abuse that met DOD s criteria for abuse, 1 percent of offenders had prior cases. Figure 7 depicts characteristics of incidents reported to the Navy FAP that met DOD s criteria for child abuse over the past 5 fiscal years. Marine Corps FAP. Over the past 5 fiscal years, the Marine Corps FAP recorded 8,356 reported incidents of child abuse, of which 54 percent met DOD s criteria for child abuse. Of the incidents that met DOD s criteria for abuse, 62 percent involved neglect, 20 percent involved emotional abuse, 15 percent involved physical abuse, and 2 percent involved sexual abuse. The majority of incidents (96 percent) were intrafamilial and 4 percent of the incidents were extrafamilial. Slightly over half of the victims and offenders were male (52 percent) and 7 percent of offenders had prior FAP cases related to child abuse or domestic abuse that met DOD s criteria for abuse. Figure 8 depicts characteristics of incidents reported to the Marine Corps FAP that met DOD s criteria for child abuse over the past 5 fiscal years. Air Force FAP. From fiscal years 2014 through 2018, the Air Force FAP recorded 17,836 reported incidents of child abuse, of which 41 percent met DOD s criteria for child abuse. Of the incidents that met DOD s criteria for abuse, 55 percent involved neglect, 25 percent involved physical abuse, 22 percent involved emotional abuse, and 4 percent involved sexual abuse. The majority of incidents (95 percent) were intrafamilial and 4 percent of the incidents were extrafamilial. Slightly over half of the victims and offenders were male (51 percent and 53 percent, respectively). In addition, 0 percent of offenders had prior FAP cases related to child abuse or domestic abuse that met DOD s criteria for abuse. Figure 9 depicts characteristics of incidents reported to the Air Force FAP that met DOD s criteria for child abuse over the past 5 fiscal years. 
Appendix IV: Characteristics of Military Criminal Investigative Organization Investigations Involving Child Victims, Fiscal Years 2014-2018 Each military criminal investigative organization the Army Criminal Investigation Command, the Naval Criminal Investigative Service, and the Air Force Office of Special Investigations maintains an investigative case management system where it tracks information about the investigation, such as the offense(s), victim(s), and alleged offender(s), among other things. According to military criminal investigative organization officials, they primarily investigate felony level crimes as well as any type of sexual offense. The following are key characteristics of investigations involving child victims investigated by each of the three military criminal investigative organizations from fiscal years 2014 through 2018. Army Criminal Investigation Command. Over the past 5 fiscal years, the Army Criminal Investigation Command conducted or monitored 5,565 investigations involving a child victim. Some of those investigations involved multiple victims, offenders, and offenses. Specifically, those 5,565 investigations included 6,535 victims, 5,965 alleged offenders, and 8,483 offenses. The Army Criminal Investigation Command was the primary investigative organization for almost three-quarters of the investigations (74 percent). For the rest of the investigations, the primary investigative organization was another federal, state, or local civilian law enforcement organization, such as the Federal Bureau of Investigation, which conducted 4 percent of the investigations. Additionally, 42 percent of the investigations involved an intrafamilial relationship meaning that the victim and the alleged offender were from the same family, such as a parent or sibling between at least one of the alleged offenders and victims. Figure 10 depicts characteristics of the Army Criminal Investigation Command s investigations involving a child victim over the past 5 fiscal years. Naval Criminal Investigative Service. From fiscal years 2014 through 2018, the Naval Criminal Investigative Service conducted or monitored 1,513 investigations involving a child victim. Some of those investigations involved multiple victims, offenders, and offenses. Specifically, those 1,513 investigations included 1,731 victims, 1,618 alleged offenders, and 1,812 offenses. The Naval Criminal Investigative Service was the primary investigative organization for about half of the investigations (54 percent). The remainder of the investigations were either joint with another law enforcement organization or the Naval Criminal Investigative Service was only monitoring the investigation. Additionally, 40 percent of the investigations involved an intrafamilial relationship between at least one of the alleged offenders and victims. Figure 11 depicts characteristics of the Naval Criminal Investigative Service s investigations involving a child victim over the past 5 fiscal years. Air Force Office of Special Investigations. Over the past 5 fiscal years, the Air Force Office of Special Investigations conducted or monitored 1,304 investigations involving a child victim. Some of those investigations involved multiple victims, offenders, and offenses. Specifically those 1,304 investigations included 1,549 victims, 1,384 alleged offenders, and 1,649 offenses 12 percent of investigations involved more than one victim. 
In addition, 42 percent of investigations involved an intrafamilial relationship between at least one of the alleged offenders and victims. Figure 12 depicts characteristics of the Air Force Office of Special Investigations investigations involving a child victim over the past 5 fiscal years. Appendix V: Characteristics of Department of Defense Education Activity Child Abuse Reports and Serious Incident Reports, School Years 2013-2014 through 2017-2018 The Department of Defense Education Activity (DODEA) tracks suspected or alleged abuse of students through (1) child abuse reports, and (2) serious incident reports. Child abuse reports. DODEA guidance defines child abuse as the physical injury, sexual maltreatment, emotional maltreatment, deprivation of necessities, or combinations for a child by an individual responsible for the child s welfare under circumstances indicating that the child s welfare is harmed or threatened. The term encompasses both acts and omissions on the part of the responsible person. Child abuse reports are to be submitted on any incidents of suspected or alleged child abuse to DODEA headquarters within 24 hours of the occurrence or notification of the incident. From school years 2014-2015 through 2017-2018, DODEA reported 254 suspected or alleged incidents of child abuse. Of DODEA s 163 schools, 115 reported an incident of child abuse over these 4 school years. Reported child abuse included a range of incidents, such as parents leaving their children unattended, parents physically abusing their children, teachers using physical force on students, and teachers inappropriately touching students. The most common types of abuse were physical abuse (51 percent of reported incidents), multiple types of abuse (11 percent), and sexual abuse (9 percent). The majority of the reported incidents involved the abuse of a child by a parent or guardian (55 percent) or abuse by DODEA personnel (31 percent). Figure 13 depicts characteristics of incidents of child abuse reported by DODEA over 4 school years. Serious incident reports. DODEA guidance defines a serious incident as an event or allegation that impacts school readiness, or the health, safety, and security of DODEA-affiliated personnel, facilities, and property resulting in consequences greater than those normally addressed through routine administrative or preventive maintenance actions. Serious incident reports are normally submitted by the school principal, assistant principal, or designated administrative officer within 2 business days after the event is brought to the attention of the first-line supervisor. DODEA has different categories of serious incidents, such as drug and alcohol events, violation of law events, sexual events, and security incidents. Serious child-on- child abuse incidents are reported as either violation of law events, such as assault and battery or sexual events. From school years 2013-2014 through 2017-2018, DODEA reported 167 serious incidents involving either an alleged violation of law or an alleged sexual event. Only 74 of DODEA s 163 schools reported such an incident over the past 5 school years. Reported serious incidents included a range of incidents, such as students posting nude photos and videos of other students on social media, inappropriate touching on the school bus, physical assaults, and rape. 
Of the serious incident reports we received, the most common types were nonconsensual sexual contact (35 percent of reported incidents), assault and battery (25 percent), rape (16 percent), and child pornography (15 percent). The majority of the reported serious incidents involved a single victim (68 percent), but 13 percent of the incidents involved more than one victim and 20 percent did not specify a victim. Figure 14 depicts characteristics of serious incidents involving an alleged violation of law or an alleged sexual event reported by DODEA over the past 5 school years. According to DODEA officials, DODEA implemented a new database for reporting serious incidents in August 2019. These officials noted that one of the goals of the system is to make reporting more straightforward for school administrators and to standardize serious incident reports across schools. DODEA officials anticipate adding child abuse reports to the new database in late calendar year 2019 or early 2020.

Appendix VI: Comments from the Department of Defense

Appendix VII: GAO Contact and Staff Acknowledgments

<8. GAO Contact> <9. Staff Acknowledgments>
In addition to the contact named above, Kimberly Mayo (Assistant Director), Molly Callaghan (Analyst in Charge), Vincent M. Buquicchio, Christopher Gezon, Grant Mallie, Joseph Neumeier, Kya Palomaki, Paul Seely, Mike Silver, and Lillian M. Yob made significant contributions to this report.

Related GAO Products
Military Justice: DOD and the Coast Guard Need to Improve Their Capabilities to Assess Racial and Gender Disparities. GAO-19-344. Washington, D.C.: May 30, 2019.
Children Affected by Trauma: Selected States Report Various Approaches and Challenges to Supporting Children. GAO-19-388. Washington, D.C.: April 24, 2019.
Sexual Violence: Actions Needed to Improve DOD's Efforts to Address the Continuum of Unwanted Sexual Behaviors. GAO-18-33. Washington, D.C.: December 18, 2017.
Child Well-Being: Key Considerations for Policymakers, Including the Need for a Federal Cross-Agency Priority Goal. GAO-18-41SP. Washington, D.C.: November 9, 2017.
Military Personnel: DOD Has Processes for Operating and Managing Its Sexual Assault Incident Database. GAO-17-99. Washington, D.C.: January 10, 2017.
Sexual Violence Data: Actions Needed to Improve Clarity and Address Differences Across Federal Data Collection Efforts. GAO-16-546. Washington, D.C.: July 19, 2016.
Sexual Assault: Actions Needed to Improve DOD's Prevention Strategy and to Help Ensure It Is Effectively Implemented. GAO-16-61. Washington, D.C.: November 4, 2015.
Youth Athletes: Sports Programs Guidance, Practices, and Policies to Help Prevent and Respond to Sexual Abuse. GAO-15-418. Washington, D.C.: May 29, 2015.
Military Personnel: Actions Needed to Address Sexual Assaults of Male Servicemembers. GAO-15-284. Washington, D.C.: March 19, 2015.
Child Welfare: Federal Agencies Can Better Support State Efforts to Prevent and Respond to Sexual Abuse by School Personnel. GAO-14-42. Washington, D.C.: January 27, 2014.
Child Maltreatment: Strengthening National Data on Child Fatalities Could Aid in Prevention. GAO-11-599. Washington, D.C.: July 7, 2011.
Military Justice: Oversight and Better Collaboration Needed for Sexual Assault Investigations and Adjudications. GAO-11-579. Washington, D.C.: June 22, 2011.

Why GAO Did This Study
With more than 1.2 million school-age military dependents worldwide, per DOD, the department's organizations work to prevent, respond to, and resolve incidents of child abuse. Incidents of child abuse, including child-on-child abuse, can cause a range of emotional and physical trauma for military families, ultimately affecting servicemember performance.
GAO was asked to review how DOD addresses incidents of child abuse and child-on-child abuse occurring on a military installation or involving military dependents. This report examines, among other things, the extent to which DOD has (1) visibility over such reported incidents, and (2) developed and implemented policies and procedures to respond to and resolve these incidents. GAO reviewed relevant policies and guidance; interviewed officials at a nongeneralizable sample of seven military installations; analyzed program data; interviewed parents of children affected by abuse; and interviewed DOD, service, and civilian officials, including at children's advocacy centers.
What GAO Found
The Department of Defense (DOD) has limited visibility over reported incidents of child abuse—physical, sexual, or emotional abuse, or neglect by a caregiver—and child-on-child abuse due to standalone databases, information sharing challenges, and installation discretion. From fiscal years 2014 through 2018, the military services recorded more than 69,000 reported incidents of child abuse (see figure). However, personnel at all seven installations in GAO's review stated that they use discretion to determine which incidents to present to the Incident Determination Committee (IDC)—the installation-based committee responsible for reviewing reports and determining whether they meet DOD's criteria for abuse (an act of abuse and an actual or potential impact, e.g., spanking that left a welt). Per DOD guidance, every reported incident must be presented to the IDC unless there is no possibility that it could meet any of the criteria for abuse. However, personnel described incidents they had screened out that, per DOD guidance, should have been presented to the IDC. Without the services developing a process to monitor how incidents are screened at installations, DOD does not know the total number of reported child abuse incidents across the department.
While DOD has expanded its child abuse policies and procedures to include child-on-child sexual abuse, gaps exist. For example, DOD standardized the IDC process in 2016, but the new structure does not include medical personnel with expertise, contrary to best practices for substantiating child abuse allegations. Without expanding the IDC membership to include medical personnel, members may not have all of the relevant information needed to make fully informed decisions, potentially affecting confidence in the efficacy of the committee's decisions. GAO also found that the availability of certified pediatric sexual assault forensic examiners across DOD is limited—according to DOD officials, there are only 11 in comparison to 1,448 incidents of child sexual abuse that met DOD's criteria for abuse from fiscal years 2014 through 2018. Without processes that help ensure timely access to certified pediatric examiners, child victims of sexual abuse overseas may not receive exams in time for evidence to be collected for use in prosecution, increasing the stress and trauma of affected victims.
What GAO Recommends
GAO is making 23 recommendations, including that the military services develop a process to monitor how reported incidents are screened at installations, that DOD expand the membership of the IDC to include medical personnel, and that DOD establish processes that help ensure timely access to certified pediatric examiners overseas. DOD concurred with 16, partially concurred with six, and did not concur with one of GAO's recommendations, which GAO continues to believe are valid, as discussed in the report.
<1. Background> <1.1. U.S. Sanctions> Sanctions are imposed pursuant to statute, executive order, or other authorities. For example, the President may use authorities granted in the International Emergency Economic Powers Act (IEEPA) and the National Emergencies Act (NEA) to issue executive orders authorizing sanctions. The United Nations Participation Act of 1945 provides the basis for U.S. implementation of United Nations Security Council sanctions mandated under Article 41 of the United Nations Charter. Sanctions provide a range of tools that Congress and the President may use to attempt to alter or deter the behavior of a foreign government, an individual, or an entity in furtherance of U.S. national security or foreign policy objectives. For example, sanctions may be imposed in response to human rights abuses, weapons proliferation, or occupation of a foreign country. Sanctions may include actions such as limiting trade; blocking assets and interests in assets subject to U.S. jurisdiction; limiting access to the U.S. financial system, including limiting or prohibiting transactions involving U.S. individuals and businesses; restricting private and government loans, investments, insurance, and underwriting; and denying foreign assistance and government procurement contracts. The United States imposes comprehensive sanctions and targeted sanctions. Comprehensive sanctions generally include broad-based trade restrictions and prohibit commercial activity with an entire country. Examples of comprehensive sanctions include U.S. sanctions against Iran and Cuba. Targeted sanctions restrict transactions of, and with, specific persons or entities. For example, the U.S. sanctions program related to Somalia targets persons engaging in acts threatening the peace, security, or stability of that country. Sectoral sanctions are a form of targeted sanctions directed at a specified sector, or sectors, of a target's economy. For instance, Executive Order 13662 authorized sanctions targeting persons operating in certain sectors of the Russian economy as might later be determined by the Secretary of the Treasury in consultation with the Secretary of State, such as the financial services, energy, mining, and defense and related materiel sectors. Supplementary sanctions, also known as secondary sanctions, target third-party actors doing business with, supporting, or facilitating targeted regimes, persons, and organizations. For example, in February 2017, Treasury imposed sanctions against 13 individuals and 12 entities for their involvement in, or support for, Iran's ballistic missile program as well as for acting for or on behalf of, or providing support to, Iran's Islamic Revolutionary Guard Corps-Qods Force. OFAC's implementation of sanctions includes publishing the Specially Designated Nationals and Blocked Persons List of individuals, groups, and entities whose assets in the United States are blocked and with whom U.S. persons are prohibited from dealing. The addition of an individual, group, or entity to this list is referred to as a sanctions designation. Agencies may issue licenses to authorize transactions with sanctioned entities that otherwise would be prohibited by existing sanctions. According to OFAC, many of its licensing determinations are guided by U.S. foreign policy and national security concerns. In making these determinations, OFAC must often coordinate with State and other government agencies, such as Commerce.
OFAC issues two types of licenses: (1) general licenses, which authorize a particular type of transaction for a class of persons without the need to apply for a specific license, and (2) specific licenses, which OFAC issues to a particular person or entity to authorize a particular transaction. Commerce s Bureau of Industry and Security (BIS) issues two forms of authorization: (1) an individual validated license requiring an application and (2) a license exception allowing an export or reexport, under stated conditions, for which no application is required. <1.2. Agency Roles and Selected Mandated Resource and Activity Reporting> Laws and executive orders establishing sanctions may designate agency implementation roles. Some sanctions-related executive orders designate both primary and consultative agencies. For example, Executive Order 13818 establishes sanctions that include blocking the U.S. assets of persons whom the Secretary of the Treasury, in consultation with the Secretary of State and the Attorney General, determines to be responsible for, or complicit in, serious human rights abuse, among other measures. Executive orders may also broadly direct U.S. government agencies to take appropriate measures within their authorities to perform specified functions and duties. When roles are not assigned by the law or executive order authorizing the sanctions, agency roles are typically assigned through an interagency process. The IEEPA and the NEA mandate that the President report to Congress when using authorities granted under those laws. The IEEPA requires the President to report, among other things, actions taken in the exercise of IEEPA authorities to Congress at least once during each succeeding 6- month period following the administration s initial reporting of the authorities use. The NEA requires the President to transmit a report to Congress within 90 days after the end of each 6-month period following a declaration of a national emergency, providing the total U.S. government expenditures that are directly attributable to the exercise of powers and authorities conferred by declaration of the emergency. The President has delegated responsibility for many of these reports to the Secretary of the Treasury. However, the President delegated responsibility for the report on the National Emergency With Respect to Proliferation of Weapons of Mass Destruction, Executive Order 12938, to the Secretary of State. The Foreign Narcotics Kingpin Designation Act (Kingpin Act), enacted in 1999, mandates that the President prepare classified reports by July 1 of each year that include the number of new Kingpin Act designations and the personnel and resources directed toward the imposition of Kingpin sanctions. The Trade Sanctions Reform and Export Enhancement Act of 2000 (TSRA) mandates that the applicable department or agency submit quarterly and biennial reports on activity under the act regarding the department or agency s determinations and processing of license applications for export of agricultural commodities, medicines, and medical devices to specified entities and destinations, including state sponsors of terrorism. OFAC and Commerce s BIS submit reports in response to the TSRA. <1.3. Strategic Workforce Planning> To implement sanctions, agencies need to identify the human resources needed for the work. Strategic workforce planning focuses on developing long-term strategies for acquiring, developing, and retaining an organization s total workforce to meet the needs of the future. 
Agency approaches to such planning can vary with each agency's particular needs and mission. We have previously identified five principles that a strategic workforce planning process should address: 1) Involve top management, employees, and other stakeholders. 2) Determine the critical skills and competencies that will be needed. 3) Develop strategies that are tailored to address gaps in number, deployment, and alignment of human capital approaches. 4) Build the capability needed to address administrative, educational, and other requirements important to support workforce strategies. 5) Monitor and evaluate progress toward human capital goals and the contribution that human capital results have made toward achieving programmatic goals. <2. Treasury, State, and Commerce Have Units with Roles in Sanctions Implementation> Treasury, State, and Commerce have units dedicated primarily to sanctions implementation and also have units with roles in sanctions implementation in addition to other responsibilities. Other agencies, including the Departments of Defense, Energy, Homeland Security, and Justice, as well as federal financial regulatory agencies, play specific roles in sanctions implementation based on their expertise or broader duties. <2.1. Agencies May Have One or More Roles in Sanctions Implementation> Agencies' roles in sanctions implementation may be assigned to them in legislation, by executive order, in presidential memorandums, or through the interagency process. Table 1 shows the roles that agencies may have in sanctions implementation and examples of agency actions associated with each role. <2.2. Treasury, State, and Commerce Have Units Dedicated Primarily to Sanctions Implementation> Treasury, State, and Commerce each have units that focus primarily on sanctions implementation and that act in all five of the roles we identified. Treasury. Treasury's OFAC, part of the department's Office of Terrorism and Financial Intelligence (TFI), administers and enforces economic sanctions based on U.S. foreign policy and national security through consultation with the Secretary of State. OFAC acts under presidential national emergency powers, as well as authority granted by specific legislation, to impose controls on transactions and freeze assets under U.S. jurisdiction. OFAC consists of four offices: The Office of Sanctions Policy and Implementation leads OFAC's design, implementation, and evaluation of sanctions programs and develops OFAC's public guidance, licenses, and regulations. The Office of Compliance and Enforcement works to promote compliance with OFAC's sanctions programs and investigates apparent violations. The Office of Global Targeting works with other units within TFI, other U.S. agencies, and foreign partners to identify and investigate targets for sanctions designation. The Office of Sanctions Support and Operations supports all sanctions-related functions at OFAC, including human capital and budgetary functions. State. State's Office of Economic Sanctions Policy and Implementation (SPI), which is housed in the Division for Counter Threat Finance and Sanctions within the Bureau of Economic and Business Affairs, is responsible for providing foreign policy guidance for the vast majority of sanctions programs and obtaining international cooperation with U.S. agencies enforcing sanctions. According to SPI, it acts as State's central coordinating office for 25 of the 30 sanctions programs that were active as of April 2019.
SPI also implements sanctions under authorities delegated to the Secretary of State, including sanctions on Iran and Syria. Commerce. In Commerce s BIS, the Foreign Policy Division (FPD) of the Office of Nonproliferation and Treaty Compliance is one of the components that implements sanctions through U.S. export controls. The division is responsible for developing, analyzing, evaluating, and coordinating export controls related to sanctions policy. In addition to having units that primarily focus on sanctions, Treasury, State, and Commerce have units that carry out roles in sanctions implementation in addition to other responsibilities. Treasury. Treasury has several other units that support sanctions implementation. For example, in TFI, the Office of Intelligence and Analysis examines classified and unclassified reporting, financial transactions, and open-source databases for evidence of sanctions violations. The Financial Crimes Enforcement Network monitors and analyzes financial information on threats, producing intelligence reports that may identify targets for designation and sanctions violators. In addition to TFI units, the Internal Revenue Service, the Office of International Affairs, and the Office of the General Counsel also have roles in sanctions implementation. For example, the Office of International Affairs helps to assess the likely impact of sanctions and conducts outreach to foreign counterparts regarding sanctions implementation. State. Units at State have sanctions implementation roles related to their expertise. Some of these units take actions in all five of the sanctions roles shown in table 1 and are responsible for specific sanctions authorities within State, according to State officials. For example, the Bureau of International Narcotics and Law Enforcement Affairs is responsible for coordinating and communicating State s position on existing or proposed new sanctions in relation to the Kingpin Act and transnational criminal organizations. According to State officials, the Bureau of Counterterrorism and Countering Violent Extremism leads State in designating Specially Designated Global Terrorists under Executive Order 13224 and Foreign Terrorist Organizations under Section 219 of the Immigration and Nationality Act. The Bureau of Economic and Business Affairs Office of Threat Finance Countermeasures has a primary role in implementing sanctions under Executive Order 13224, which targets terrorist financiers and others who provide material support to terrorists. Commerce. Commerce has several other units that support sanctions implementation. For example, the Office of Export Enforcement provides input regarding sanctions proposals and feedback regarding any adverse impact to existing investigations. The Office of National Security and Technology Transfer Controls implements primarily sectoral sanctions by providing technical analyses of items and recommendations during sanctions development. The Office of Exporter Services provides a range of resources, including electronic resources and educational seminars, which provide exporters with guidance on export compliance processes and procedures. Table 2 provides an overview of the various roles that Treasury, State, and Commerce units play in sanctions implementation. See appendix II for additional details. <2.3. 
Other Agencies Have Roles in Sanctions Implementation in Addition to Other Responsibilities> Several other agencies have more-specific roles in sanctions implementation, with the extent of their involvement dependent largely on their area of expertise. These agencies carry out their sanctions-related roles in addition to other responsibilities. Department of Defense. The Office of the Under Secretary of Defense for Policy contributes to sanctions implementation, participating in all roles except targeting. The office coordinates department units reviews of sanctions proposals, provides the department s recommendation to interagency partners during sanctions development, and represents the department during interagency discussions regarding sanctions enforcement. Department of Energy. The National Nuclear Security Administration supports sanctions by providing technical analyses of weapons of mass destruction and conventional arms transactions that may be subject to sanctions and by providing recommendations during sanctions development. The National Nuclear Security Administration also reviews export licenses for munitions and items with both military and commercial applications, known as dual-use items, which may include parties subject to sanctions. Department of Homeland Security. Units of the Department of Homeland Security also have varied roles in sanctions implementation. For example, the Human Rights Violators and War Crimes Unit in U.S. Immigration and Customs Enforcement s Homeland Security Investigations includes a Global Magnitsky investigative support team, which targets serious human rights abusers and corrupt foreign officials through OFAC sanctions and visa denials. Units in U.S. Customs and Border Protection maintain a list of sanctioned countries and couriers for which shipment applications are rejected and use an automated targeting system to identify high- risk shipments and coordinate appropriate enforcement actions. Department of Justice. Multiple Department of Justice units contribute to sanctions implementation, participating in all roles except licensing. For example, the National Security Division works with law enforcement partners to facilitate the investigation and prosecution of sanctions violators. Financial regulatory agencies. Financial regulatory agencies with roles in sanctions implementation may review the compliance programs of the institutions they oversee with respect to OFAC requirements. Some of these agencies can also enforce penalties for significant deficiencies in institutions OFAC compliance programs. Financial regulatory agencies generally examine institutions compliance with OFAC policies concurrently with examinations for compliance with the Bank Secrecy Act (BSA) and anti money laundering (AML) statutes. Table 3 provides an overview of the various roles of these agencies in sanctions implementation. Also see appendix II for additional details of agency units sanctions implementation roles. See appendix III for information about agency units number of personnel with sanctions implementation responsibilities. <3. Sanctions Implementation Units at Treasury, State, and Commerce Have Received Steady or Increasing Resources but Faced Challenges in Filling Some Positions> All three of the sanctions implementation units we reviewed have generally received steady or increasing resources since fiscal year 2015 but have faced challenges in filling some positions. 
OFAC has received increasing inflation-adjusted budgetary and authorized human resources each fiscal year since 2015 but has consistently experienced a gap between the number of authorized and actual full-time equivalents (FTEs). OFAC officials attributed the gap to challenges in hiring due to competition from other agencies and the private sector and the time needed for new hires to obtain security clearances. State SPI has also generally received additional authorized inflation-adjusted budgetary and human resources but has not been fully staffed in recent years. Commerce's FPD has received relatively steady inflation-adjusted budgetary resources but, according to Commerce officials, lacks funding to fill one of its 10 positions. <3.1. Treasury's OFAC Received Increasing Resources in Fiscal Years 2015-2019 but Faces Hiring Challenges> OFAC received increasing budgetary resources in each of the last 5 fiscal years. In inflation-adjusted terms, OFAC's budgetary resources increased by a total of 58 percent, from approximately $29.7 million in fiscal year 2014 to approximately $46.8 million in fiscal year 2019. (See fig. 1.) OFAC has also received authority to hire additional FTEs since fiscal year 2014, yet a number of the additional authorized positions have remained unfilled. According to OFAC officials, OFAC allocated most of its additional authorized FTEs to the Office of Global Targeting, which is responsible for conducting investigations of sanctions targets. At the start of fiscal year 2014, 10 of OFAC's 173 authorized positions (6 percent) were unfilled. By the start of fiscal year 2020, 55 of OFAC's 259 authorized positions (21 percent) were unfilled. In the intervening period, the gap between authorized and actual FTEs at the start of each fiscal year ranged from 34 to 58 positions (14 to 26 percent of authorized FTEs). (See fig. 2.) Despite the increase in authorized FTEs, OFAC has faced challenges in filling the additional positions. At the start of fiscal year 2020, 21 percent of OFAC's authorized sanctions investigator positions (13 of 62) were not filled. Also unfilled were nine of 25 OFAC sanctions licensing officer positions, three of 18 enforcement officer positions, two of 15 sanctions policy analyst positions, and six of 14 sanctions compliance officer positions. Officials of both OFAC and Treasury's Office of the Assistant Secretary for Management cited three primary challenges in hiring candidates with the necessary qualifications: (1) competition with other agencies, including those in the intelligence community, which can use direct-hire authority to expedite the hiring process; (2) competition with the private sector, which offers higher salaries; and (3) the time required for security clearance processing, which delays hiring for positions, such as sanctions investigators, that require a special sensitive investigation adjudicated at the top secret/sensitive compartmented information level. Treasury does not currently have direct-hire authority for OFAC but can use other authorities to address hiring challenges. OFAC can use TFI's agency-specific schedule A authority, which excepts up to 100 positions at TFI from competitive selection requirements; schedule A authority is not specific to OFAC. In August 2019, officials of Treasury's Office of the Assistant Secretary for Management stated that the office was not seeking direct-hire authority through the Office of Personnel Management.
Additionally, the officials noted that Treasury has used flexibilities such as veterans hiring preferences to fill positions. However, in December 2019, Treasury officials stated that they had determined to seek direct-hire authority and would support the passage of legislation providing such authority. <3.2. State s Office of Economic Sanctions Policy and Implementation Received an Overall Increase in Resources in Fiscal Years 2014-2019, but More Than Half of Its Positions Are Vacant> SPI received annual budgetary resource increases in fiscal years 2015 through 2018, before a slight decline in fiscal year 2019. In inflation- adjusted terms, SPI budgetary resources increased overall by 42 percent, from $2.3 million in fiscal year 2014 to $3.2 million in fiscal year 2019. (See fig. 3.) SPI has received authority to hire six additional FTEs for fiscal year 2020, but more than half of its authorized positions were vacant at the start of the year. SPI s authorized FTEs ranged from 13 to 16 in fiscal years 2014 to 2019 and increased to 21 FTEs for fiscal year 2020. At the start of each fiscal year from 2014 to 2019, SPI had one to three fewer actual FTEs than authorized. However, the increase in authorized FTEs for fiscal year 2020 followed a decline in the number of filled positions during fiscal year 2019, when SPI lost more than a third of its staff. As a result, as of the beginning of fiscal year 2020, more than half of SPI s 21 authorized FTEs were unfilled. (See fig. 4.) According to SPI officials, the departures during fiscal year 2019 were for the most part unscheduled and resulted from staff promotions, moves to elsewhere in State, or resignations to accept positions in other agencies or the private sector. SPI officials added that a department-wide backlog in hiring constrained SPI s ability to fill these gaps and that the office would have to pay for the additional six FTEs without an increased budget. As of December 2019, State was recruiting to fill some of these positions, according to SPI officials. SPI expected one staff member to start in early January, had extended an offer to another, and was advertising to fill four additional positions. While SPI has generally received increased budgetary resources and authorized FTEs in recent years, State discontinued the Office of the Coordinator for Sanctions Policy, formerly housed in the Office of the Secretary. The office was responsible for, among other things, coordinating sanctions strategies, integrating sanctions into foreign policy plans, and analyzing the effects of sanctions. According to data that State provided, the office had an authorized staff of seven FTEs at the start of each fiscal year from 2014 through 2018, with the exception of fiscal year 2016, when eight FTEs were authorized. State also reported that the office had one to four unfilled positions at the start of each fiscal year during this period. <3.3. Commerce s FPD Has Received Relatively Constant Resources since Fiscal Year 2015> FPD received an overall increase in budgetary resources from fiscal year 2014 to fiscal year 2019, but most of the increase occurred from fiscal year 2014 to fiscal year 2015. Overall, FPD s budgetary resources increased by 28 percent, adjusted for inflation, from fiscal year 2014 to fiscal year 2019. However, after a 24 percent increase in fiscal year 2015, resources remained steady through fiscal year 2019 at approximately $1.4 million per year, adjusted for inflation. (See fig. 5.) 
FPD has had the same number of authorized FTEs since fiscal year 2014, maintaining an authorized level of 10 FTEs from fiscal year 2014 to fiscal year 2020. FPD generally had one fewer actual FTE than authorized as of the beginning of each fiscal year. (See fig. 6). At the beginning of fiscal year 2020, according to Commerce officials, the Foreign Policy Division lacked funding to advertise and hire for the vacant position. According to Commerce officials, FPD receives a funding amount for personnel and the funding they have received is sufficient for nine FTEs. <4. Agencies Assess Resource Needs through the Annual Budget Process and OFAC Has Begun Workforce Planning, but All Agencies Face Challenges in Determining Needs> Officials at sanctions-focused units at Treasury, State, and Commerce all described their use of the annual budget process to assess their resource needs, and Treasury and Commerce have undertaken broader planning efforts. Treasury s OFAC has begun an internal workforce planning process that, if implemented as described, would satisfy principles for strategic workforce planning that we have previously identified. According to State SPI officials, SPI assesses its resources in the annual budget formulation process and has been able to add temporary positions in response to workforce needs. Commerce BIS officials stated that they shift resources in response to needs, and BIS has previously prepared a budget strategy that included its office primarily responsible for sanctions implementation. Treasury, State, and Commerce all face challenges in measuring changes in their sanctions workload over time. <4.1. Treasury OFAC Assesses Resources through Budget Development and Has an Additional Ongoing Workforce Planning Effort> Treasury s OFAC reviews and requests resources as part of the annual TFI budget development process, which considers OFAC s requests along with those of other TFI components. According to OFAC officials, OFAC submits its funding and resource needs to TFI for consideration. The OFAC budget justification for TFI includes the number of positions requested for all OFAC components as well as a description of each request. According to OFAC, once TFI has considered all of its component submissions, TFI submits its budget request to the Assistant Secretary for Management, who considers it as part of Treasury s larger budget request. OFAC also stated that it has also used quarterly meetings and discussions as part of Treasury s quarterly performance reviews to review resource needs and challenges. In addition to undertaking reviews as part of the budget process, OFAC launched a workforce planning effort in fiscal year 2019 and stated that it would be led by OFAC s Office of Sanctions Support and Operations. As part of this effort, the Office of Sanctions Support and Operations stated that it plans to use Treasury s department-wide workforce planning model and tools to gather information from OFAC s component offices as a basis for, among other things, analyzing risks to OFAC s mission, identifying resource gaps, and developing an action plan to address them. OFAC further stated that it plans to use its ongoing workforce planning model to assess the effectiveness of its current hiring authorities. In October 2019, OFAC officials stated that they expected to submit preliminary recommendations for each OFAC component to OFAC leadership by the end of December 2019. 
However, OFAC officials later stated that the planned date to submit preliminary recommendations to OFAC leadership was rescheduled to March 31, 2020, because of the October 1, 2019, departure of the Assistant Director of Management Programs, the OFAC senior leader responsible for implementing the workforce planning initiative. We analyzed the model and tools that OFAC is using for its ongoing resource analysis to determine whether the process they set out would address five principles for strategic workforce planning that we had previously identified. We concluded that, if it were implemented as OFAC documents describe, the process would satisfy these principles. For example, the process calls for involving management and employees during its development and implementation and calls on managers to consider critical skills and competencies in their workforce analysis. <4.2. State SPI Assesses Workforce Needs through the Budget Process and Has Filled Positions on a Temporary Basis> State SPI requests resources as part of its annual budget process. State does not request a separate budget for SPI but instead combines SPI with the Office of Threat Finance Countermeasures (TFC) in its annual budget request. According to SPI officials, State sends the combined request for TFC and SPI to the Office of Management and Budget (OMB) every year, although the resources obtained may not reflect SPI's original request. For example, SPI officials stated that SPI requested a greater increase in authorized positions for fiscal year 2020 than it ultimately received. SPI officials described ways that they assess staff workloads and seek to add or adjust resources on a continual basis. According to SPI officials, they have worked to fill positions on a temporary basis in response to rising needs. For example, SPI was authorized to add three temporary positions to cover the additional workload from Iran and Venezuela sanctions in early 2019. According to SPI officials, in justifying the request for additional temporary positions, SPI noted a significant increase in officer workload during the reimposition of sanctions against Iran, as well as maximum-pressure campaigns against Iran and Venezuela and increased activity related to existing and new sanctions authorities. As of October 2019, State planned to convert the three positions to permanent positions. Similarly, SPI officials stated that SPI justified its request for an increase in positions for fiscal year 2020 by noting an increasing use of sanctions as part of U.S. maximum economic pressure campaigns across multiple regions. Agency approaches to workforce planning can vary depending on each agency's particular needs and mission. For subunits such as SPI, using the budget process, identifying changing priorities, and responding flexibly to those changes can address workforce planning needs. SPI officials further stated that SPI expects to review its workforce needs and structure if new executive orders delegate additional sanctions authorities to the Secretary of State. <4.3. Commerce Assesses Needs through the Budget Process and Shifts Personnel in Response to Demands> Commerce BIS units such as FPD assess and communicate their resource needs as part of the annual budget formulation process, according to BIS officials. BIS officials described budget formulation at Commerce as a bottom-up process, with BIS units providing information that is folded into Commerce's overall budget.
During this process, BIS budget office staff meet with program staff, review budget guidance provided by OMB as well as BIS s own guidance, and ask program officials to identify any new initiatives or any new requirements for resources. According to BIS officials, each program office prepares a summary description of the request and needed resources for approval by the Assistant and Deputy Assistant Secretary for that office, the Deputy Under Secretary, and ultimately the BIS Under Secretary. According to BIS officials, BIS s budget office then requests additional information about the approved activities. BIS s Budget Office in turn submits the materials to the Commerce Departmental Budget Officer, who takes into account any known OMB and congressional viewpoints and department priorities. According to BIS officials, because of competing priorities, BIS funding priorities are not always carried over into the department s overall request. BIS officials noted that, absent additional resources, they have some flexibility to shift personnel within the bureau to address periods of increased sanctions-related demand. For smaller units such as FPD, using the budget process, identifying changing priorities, and responding flexibly to those changes can address their workforce planning needs. Commerce previously prepared a multiyear budget strategy that assessed workforce needs throughout BIS, including FPD. In 2016, a contractor that Commerce hired prepared a Five-Year Budget Strategy Plan, which included workforce planning and projections. As part of the assessment, the plan analyzed BIS license volume and estimated the amount of time that staff in the BIS Export Administration s Office of Nonproliferation and Treaty Compliance (which includes FPD) spent on particular tasks, such as conducting license application reviews, making license determinations, and developing regulations related to sanctioned countries. The plan projected future BIS license volume, external factors that would affect BIS workload, and the future FTEs that BIS would need to perform its mission. The plan examined the workload projection and the effect of attrition and concluded that FPD would need 0.5 additional FTEs by 2020 and 1.25 additional FTEs by 2022. BIS officials stated that they initially used the budget strategy plan to help with budgeting. However, according to the officials, the plan and its assumptions quickly became obsolete and they did not use it in subsequent years. In addition, BIS officials stated that the plan did not recognize BIS s ability to shift resources or request appropriations as needed. <4.4. Agencies Face Challenges in Measuring Workload to Assess Resource Needs> Treasury, State, and Commerce units that focus primarily on sanctions implementation have information that can measure changes in agency workload over time; however, agency officials cited challenges in using this information as accurate measures of workload for the purpose of informing resource needs. For example, counting the number of individual actions taken to implement sanctions (e.g., designations, licenses, or the imposition of a penalty) does not capture the actions varying complexity or the time spent on developing potential actions that are ultimately not taken. Agency officials noted that, in general, the drivers of their workloads are global events and U.S. foreign policy priorities that may lead to more or less sanctions activity. 
Table 4 shows (1) selected information that can be used to measure changes in agency workload over time and (2) the potential weaknesses of these measures. <5. Agencies Provide Information on Sanctions Activities and Expenses in Selected Mandated Reports> OFAC and State each prepare and submit reports in response to the requirements of the IEEPA and the NEA. Both OFAC and State report sanctions implementation actions in response to the requirements of the IEEPA. OFAC s NEA-mandated reports generally include information on expenditures reported by Treasury and State and by any other agencies identified in the relevant executive order. However, according to State s most recent NEA reports, no specific State expenditures were directly attributable to the exercise of authorities conferred by the declaration of a national emergency under the NEA during the reporting period. In previous reviews, we and Treasury s Office of Inspector General have found weaknesses in the consistency and timeliness of OFAC reports mandated by the Kingpin Act and the TSRA, respectively. <5.1. OFAC s Mandated NEA Reports Include Expenses for Agencies with Roles in Sanctions Implementation, while State s Have Reported No Expenditures> <5.1.1. IEEPA Reporting on Sanctions Activities> Both OFAC and State include information on actions taken to implement sanctions programs in response to the requirements of the IEEPA. OFAC s reports on sanctions programs under the IEEPA include data on the number of designations and the type of entity designated, the number of licensing actions, and the number and value of blocked transactions for sanctions programs authorized by the IEEPA. State s IEEPA-mandated report for a weapons of mass destruction sanctions program (Executive Order 12938), prepared by State s Bureau of International Security and Nonproliferation (ISN), summarizes the actions State has taken to address nonproliferation through bilateral and multilateral channels, including actions taken against Russia, North Korea, Syria, and the reimposition of nuclear-related sanctions on entities in Iran. Both OFAC and State included the reports responding to IEEPA requirements as part of the same document submitted in response to the NEA report requirements. <5.1.2. NEA Reporting on Sanctions Expenditures> OFAC s reports on sanctions programs under the NEA include a summary total of expenditures reported by various agencies to implement those programs, as well as a listing of the agencies whose expenditures are included in the reports. The reports state that the expenditures included are predominantly personnel wage and salary costs. OFAC contacts multiple agencies to compile estimates of total expenditures for its NEA reports. According to OFAC officials, OFAC contacts an agency about its expenditures if the relevant executive orders have delegated sanctions implementation authority to the agency or tasked it with certain duties. Using a standardized request message, OFAC asks such agencies to estimate their expenditures for the national emergency by, for example, estimating the hours spent by staff members on activities related to the emergency and multiplying that number by appropriate hourly compensation rates. OFAC stated that it always asks State to provide estimated expenditure information and contacts other agencies to seek their expenditures on a program-by-program basis. OFAC s NEA reports include Treasury and other agencies. 
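As an illustration of the arithmetic behind the expenditure estimates that agencies return in response to OFAC's standardized request, the sketch below multiplies hours spent on emergency-related activities by hourly compensation rates and sums the results. The staff hours, rates, and activity labels are hypothetical, included only to show the calculation; they are not drawn from any agency submission or from OFAC's request template.

# Illustrative sketch only: hypothetical hours and hourly compensation rates,
# not actual agency data.
def estimate_expenditures(activity_log):
    # activity_log holds (hours_spent, hourly_compensation_rate) pairs;
    # the estimate is the sum of hours multiplied by the applicable rate.
    return sum(hours * rate for hours, rate in activity_log)

hypothetical_log = [
    (120, 85.0),   # hypothetical: licensing reviews
    (300, 92.5),   # hypothetical: designation investigations
    (45, 110.0),   # hypothetical: legal review
]
print(f"Estimated expenditures: ${estimate_expenditures(hypothetical_log):,.2f}")

Because, as the reports note, these figures consist predominantly of personnel wage and salary costs, an estimate of this kind is only as reliable as the underlying judgments about hours spent, which is one reason the consistency of agency submissions matters.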
All 25 of the NEA reports from mid- to late 2018 that we reviewed included Treasury expenditures, which were in many cases limited to OFAC and the Treasury Office of General Counsel. All but one report included State expenditures. Three reports included Commerce expenditures, five included Department of Homeland Security expenditures, and 12 included Department of Justice expenditures. While the reports did not include other agencies expenditures, some of the reports explicitly acknowledged that they did not reflect certain operating costs incurred by the intelligence and law enforcement communities. State ISN s May 2019 NEA-mandated report for Executive Order 12938 stated that there were no specific expenditures directly attributable to the exercise of authorities conferred by the declaration of a national emergency under the NEA during the 6-month reporting period. The prior two reports also stated that there were no specific expenditures directly attributable to the sanctions program. The reports included no other information about the program expenditures. In response to our requests, State officials provided additional information about the NEA reporting of expenditures. According to the officials, State reported no expenditures for implementation activities for Executive Order 12938 because those activities have been subsumed into expenditures for normal, daily work similar to overhead expenses. Expenditures for the implementation activities are mixed with, and indivisible from, the ongoing programming activities of the relevant offices and agencies. State officials indicated that State would report an amount other than zero if funds were reprogrammed, additional staff were required, or staff engaged in activities in addition to daily, normal work to implement the executive order. State officials also told us that they consulted State s Bureau of Arms Control, Verification and Compliance, regional bureaus, and offices in the Departments of Commerce, Defense, and Energy in preparing the report. However, State s reports have not included any of these additional statements about the information that State considered in concluding there were no specific expenditures attributable to the sanctions program. Standards for Internal Control in the Federal Government states that management should externally communicate the necessary quality information to achieve the entity s objectives so that external parties can help the entity achieve its objectives and address related risks. Because State s reports do not include the additional information that State considered, Congress lacks complete information regarding sanctions implementation expenditures. <5.2. Prior Studies Have Noted Limitations in Other Required OFAC Sanctions Reporting> <5.2.1. Kingpin Act Reports Do Not Provide Consistent Expenditure Data> We have previously found that agencies do not report expenditures in response to OFAC s Kingpin Act data requests in a consistent fashion. The Kingpin Act mandates that the President prepare a classified report to the Permanent Select Committee on Intelligence of the House of Representatives and the Select Committee on Intelligence of the Senate by July 1 of each year that, among other things, includes the status of sanctions imposed under the Kingpin Act and the personnel and resources directed to the imposition of Kingpin sanctions. OFAC compiles and submits these reports. 
OFAC s Kingpin reports include previous year and cumulative data on the number of asset-blocking actions and Kingpin designations. The reports also include Treasury, State, DOD, and Justice expenditures, which the reports indicate are mostly personnel salary costs. However, we recently found that the agencies did not use consistent methods, across agencies and time, in providing their expenditures to OFAC for Kingpin Act program activities. We recommended that the Secretary of the Treasury (1) ensure that OFAC provide its partner agencies more specific guidance regarding Kingpin Act related expenditure data to improve the consistency of data submitted by these agencies and (2) disclose information about limitations in the consistency and reliability of the agency expenditure data in its annual reports to Congress. <5.2.2. Treasury s Inspector General Has Recommended OFAC Improve Timeliness of TSRA- Mandated Reports> Treasury OFAC and Commerce BIS each submit reports to Congress mandated by the TSRA. Treasury s Inspector General found that OFAC had not submitted its reports in a timely fashion and recommended OFAC take steps to improve the timeliness of its submissions. OFAC. OFAC s TSRA-mandated reports include information about its determinations regarding applications for licenses as well as the time it spent processing the applications. In April 2018, Treasury s Office of Inspector General found that OFAC had not issued these reports in a timely manner and recommended that OFAC provide guidance to ensure that future TSRA-mandated reports are timely. According to the Treasury Office of Inspector General, Treasury s actions in response bringing its submission of the TSRA-mandated reports up to date and revising its TSRA report procedures satisfied the intent of the office s recommendation, but the Inspector General would continue to follow up. However, OFAC s submission of the TSRA- mandated reports has continued to lag. OFAC released the TSRA- mandated reports for the second, third, and fourth quarters of fiscal year 2018 (i.e., January through September 2018) in November 2019; released the report for the first quarter of fiscal year 2019 in December 2019; and released the report for the second quarter of fiscal year 2019 in February 2020. OFAC s most recent biennial report, for October 2014 through September 2016, was issued in August 2019. BIS. BIS s TSRA-mandated reports include information about the licensing actions taken by BIS in relation to exports of agricultural commodities to Cuba, as well as processing times for those actions. BIS submitted its most recent report on January 17, 2020, covering the period from October 1 to December 31, 2019. BIS s most recent biennial report, for October 2016 through September 2018, was issued in November 2018. <6. Conclusion> The United States has increasingly relied on sanctions as a means to achieve important foreign policy goals. Implementing these sanctions involves multiple government agencies, some of which have multiple units with roles in sanctions implementation. Key agencies that implement sanctions have generally received steady or growing resources in recent years, but Treasury and State have staffing gaps and face challenges in securing the staff needed to fill their authorized positions. Treasury OFAC has an ongoing effort to assess its workforce needs, and Treasury, State, and Commerce all assess workforce needs through the budget process. 
The IEEPA and NEA each include requirements for reports to Congress that Congress can use to review the activities and expenditures that have been used for implementing these sanctions. However, State s reports for Executive Order 12938 have not explained the information that State considered in reporting no expenditures. As a result, Congress does not have complete information about the data that State considers in calculating its sanctions implementation resources, which Congress could use to inform its review of agency resource requests. <7. Recommendation for Executive Action> The Secretary of State should direct the Assistant Secretary for International Security and Nonproliferation to include additional information about the expenditures it considers in its NEA-mandated reporting for Executive Order 12938. <8. Agency Comments> We provided a draft of this report to the Departments of Commerce, Defense, Energy, Homeland Security, Justice, State, and the Treasury, as well as the Commodity Futures Trading Commission, Federal Deposit Insurance Corporation, Federal Reserve System, Internal Revenue Service, National Credit Union Administration, Office of the Comptroller of the Currency, and Securities and Exchange Commission for review and comment. State provided official comments, which are reproduced in appendix IV. State concurred with our recommendation and indicated that it will provide additional clarity on its procedures in future NEA-mandated reporting for Executive Order 12938. The Departments of Commerce, Homeland Security, Justice, State, and the Treasury, as well as the Internal Revenue Service, Office of the Comptroller of the Currency, and Securities and Exchange Commission also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and to the Secretaries of Commerce, Defense, Energy, Homeland Security, Justice, State, and the Treasury, as well as the Chairman and Chief Executive of the Commodity Futures Trading Commission, Chairman of the Federal Deposit Insurance Corporation, Chair of the Board of Governors of the Federal Reserve System, Commissioner of the Internal Revenue Service, Chairman of the National Credit Union Administration, the Comptroller of the Currency, and the Chairman of the Securities and Exchange Commission. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8612, or GianopoulosK@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology Our objectives were to examine (1) agencies roles in sanctions implementation, (2) the resources available to agency units that focus primarily on sanctions implementation, (3) the extent to which agency units that primarily focus on sanctions implementation have assessed their resource needs, and (4) agencies reporting to Congress on sanctions implementation expenses and activities. To examine agencies roles in sanctions implementation, we identified agencies involved in sanctions implementation by reviewing sanctions authorities, including statutes and executive orders, and agency documents and websites and interviewing agency officials. 
We used these documents and interviews to summarize agencies principal roles in sanctions implementation, and we vetted this summary with the Departments of the Treasury (Treasury), State (State), and Commerce (Commerce), which we had identified through our initial interviews and review of background materials as having units that focus primarily on sanctions implementation. We then prepared a data collection instrument to obtain information on sanctions implementation from agencies across the government. Using this instrument, we requested information about the specific actions these agencies performed for each of the roles we identified, the number of staff they devoted to sanctions implementation, and the estimated percentage of time these staff spent on sanctions implementation in fiscal year 2019. We also requested information about the sources and methods that agencies or agency units used to produce these estimates. We pretested the instrument with the Office of the Comptroller of the Currency and the Department of Homeland Security s U.S. Customs and Border Protection and made changes based on the results of the pretest before sending the instrument to all agencies or agency units that we had identified as having a role in sanctions implementation. To estimate in full-time equivalents (FTE) the staff resources that agencies devoted to sanctions implementation, we multiplied agencies estimates of the number of staff devoted to sanctions implementation by the agencies estimates of the percentage of time those staff spent on sanctions-related duties. To examine the resources available to agency units that focus primarily on sanctions implementation, we reviewed congressional budget justifications and used a data collection instrument to obtain information on (1) funding for units that focused primarily on sanctions implementation at Treasury, State, and Commerce in fiscal years 2014 through 2019 and (2) personnel in these units as of the beginning of fiscal years 2014 through 2020. We compared the information that agencies provided with data in their congressional budget justifications and determined that these data were sufficiently reliable for reporting on trends in funding, authorized FTEs, and filled positions at these agency units. We then examined challenges associated with hiring for, and filling, positions at these agency units by interviewing agency officials and reviewing agencies responses to our written questions. To examine the extent to which agency units that primarily focus on sanctions implementation have assessed their resource needs, we interviewed agency officials and reviewed their written responses to our questions about their budget development processes and any relevant workforce analyses and plans they had prepared. We reviewed documentation of Treasury s ongoing workforce planning process against criteria for strategic workforce planning that we had previously identified, to assess whether the process, if completed according to plan, would address principles of strategic workforce planning that we had previously identified. We reviewed agency performance reports and annual reports and interviewed agency officials representing Treasury, State, and Commerce units that focus primarily on sanctions implementation, to identify any additional information the agencies had that could measure changes in agency workload over time. 
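To make the full-time equivalent calculation described above concrete, the sketch below multiplies each unit's reported staff count by the reported share of time those staff spent on sanctions-related duties and sums the products. The unit names, staff counts, and percentages are hypothetical placeholders for responses to the data collection instrument, not figures reported by any agency.

# Illustrative sketch only: hypothetical responses to the data collection
# instrument, as (staff count, share of time on sanctions-related duties).
unit_responses = {
    "Unit A": (12, 0.50),
    "Unit B": (4, 0.25),
    "Unit C": (30, 0.10),
}

def fte_estimate(responses):
    # FTEs = number of staff with sanctions duties x estimated share of time
    # those staff spend on such duties, summed across units.
    return sum(staff * share for staff, share in responses.values())

print(f"Estimated sanctions-related FTEs: {fte_estimate(unit_responses):.1f}")

Where a unit could not estimate either quantity, no FTE figure can be derived in this way, as appendix III notes for several agencies.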
We then reviewed that information and interviewed agency officials to assess how accurately the measures reflected each agency s sanctions workload. To examine agency reporting to Congress on sanctions implementation expenses and activities, we reviewed background information on sanctions implementation to identify mandated reports that included information on sanctions expenses and activities. We confirmed our list of the mandated reports that included sanctions expenses and activities with Treasury s Office of Foreign Assets Control. We also reviewed sanctions legislation such as the International Emergency Economic Powers Act, the National Emergencies Act, the Foreign Narcotics Kingpin Designation Act and the Trade Sanctions Reform and Export Enhancement Act of 2000 to identify the specific requirements for those mandated reports on agency expenses and activities. We then requested from agency officials copies of the agencies most recently submitted mandated reports as of January 2019 and analyzed the agencies and types of expenses the reports identified. We requested information from agency officials and reviewed supporting documentation in order to describe how agencies estimated their expenses for sanctions implementation. We conducted this performance audit from October 2018 to March 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Agency Roles in Sanctions Implementation To determine agencies roles in sanctions implementation, we sent a data collection instrument to all agency units that we had identified as having a role in sanctions implementation, requesting information on the specific actions the agency units perform for each role. Tables 5 through 12 summarize the information provided in the agency units responses to the data collection instrument. Appendix III: Agency Personnel with Sanctions Implementation Duties We identified units of 13 agencies that have a role in sanctions implementation, and we requested that each unit report the number of personnel with sanctions-related duties and the estimated percentage of time these personnel spent on such duties in fiscal year 2019. The agency units used various methods to generate their estimates. Several of the units were unable to estimate numbers of personnel with sanctions- related duties or the percentage of time these personnel spent on sanctions-related duties. In many cases, agency units were unable to disaggregate the relatively minimal resources devoted to sanctions implementation from the resources for wider duties related to their mission. The following provides information about each agency or agency unit. Department of State (State). All nine units that State identified as having a role in sanctions implementation were able to estimate the number of personnel with sanctions implementation duties in fiscal year 2019. The units used sources such as position descriptions and management surveys of staff to generate the estimates. Department of the Treasury (Treasury). Of the seven Treasury units from which we received information, five were able to estimate the number of personnel with sanctions implementation duties in fiscal year 2019. 
Officials of the sixth unit stated that they could not provide such an estimate. The seventh unit, the Office of Intelligence and Analysis of the Office of Terrorism and Financial Intelligence (TFI), provided an estimate of the percentage of time its analytic staff devoted to sanctions but, because of sensitivity concerns, did not provide estimates of the number of personnel with sanctions implementation duties.
Department of Commerce (Commerce). Of the six Commerce units from which we received information, five were able to estimate the number of personnel with sanctions implementation duties in fiscal year 2019. However, Export Enforcement, a much larger BIS unit with over 170 employees, was not able to disaggregate the time its personnel spent on sanctions implementation from its broader export control enforcement activities. According to Export Enforcement officials, its investigative management system does not record whether its activities respond to potential violations of Office of Foreign Assets Control (OFAC) sanctions or violations of the Export Administration Regulations. Many of the cases that the office investigates include potential violations of both sanctions and the regulations.
Department of Defense. The Department of Defense's Office of the Under Secretary of Defense for Policy, which includes the Defense Technology Security Administration, was able to estimate the number of personnel with sanctions implementation duties. To generate the estimates, the office used sources including position descriptions and management judgment of time spent by individual action officers on sanctions.
Department of Energy. The Department of Energy's National Nuclear Security Administration relied on management judgment to estimate the number of personnel with sanctions duties.
Department of Homeland Security. At the Department of Homeland Security, units in U.S. Immigration and Customs Enforcement were able to estimate the number of personnel with sanctions implementation duties in fiscal year 2019 by analyzing their investigative case management database. However, other department units were unable to provide such estimates. For example, Coast Guard officials reported that it would be difficult to estimate the number of personnel with sanctions implementation duties because these personnel are located throughout the United States and the world and do not record the time they spend on sanctions.
Department of Justice. At the Department of Justice, the National Security Division and most sections of the Criminal Division were able to estimate the number of personnel with sanctions implementation duties in fiscal year 2019. However, other department units were unable to provide such estimates. For example, Drug Enforcement Administration (DEA) officials stated that it is difficult to quantify the time that DEA's special agents dedicate specifically to sanctions. According to agency officials, investigations and operations to secure evidence for indictments can also be used to support sanctions designations. As a result, according to DEA, the agents spend minimal time on sanctions implementation that they would not have spent on their work in any case. The Federal Bureau of Investigation noted the same justification for why the bureau was unable to provide estimates of the number of personnel with sanctions-related duties.
Financial regulatory agencies.
The six financial regulatory agencies identified as having a role in sanctions implementation were unable to estimate numbers of personnel with sanctions implementation duties. Financial regulators were generally unable to disaggregate the time that personnel spent on OFAC compliance examinations because these are often performed concurrently with broader Bank Secrecy Act/Anti-Money Laundering examinations. See table 13 for additional information about each agency unit.
Appendix IV: Comments from the Department of State
Appendix V: GAO Contact and Staff Acknowledgments
<9. GAO Contact>
<10. Staff Acknowledgements>
In addition to the contact named above, Drew Lindsey (Assistant Director), Michael Simon (Analyst-in-Charge), Neil Doherty, Justin Fisher, Reid Lowe, Grace Lui, Christina Pineda, Julia Robertson, and Paul Sturm made key contributions to this report.
Why GAO Did This Study
The United States has implemented dozens of sanctions programs to counteract activities that threaten U.S. national interests. Sanctions may place restrictions on entire countries, sectors of countries' economies, or specific corporations or individuals. Examples of restrictions include limiting access to the U.S. financial system, freezing assets under U.S. jurisdiction, and restricting trade. The United States has implemented an increasing number of sanctions in recent years, including sanctions on countries that conduct a significant amount of international trade, such as Russia, Venezuela, and Iran.
GAO was asked to examine the resources U.S. agencies have devoted to sanctions implementation. This report examines (1) agencies' roles in sanctions implementation, (2) resources available to agency units that focus primarily on sanctions implementation, (3) the extent to which agency units that focus primarily on sanctions implementation have assessed their resource needs, and (4) agencies' reporting to Congress on sanctions implementation expenses and activities. GAO gathered data from 13 agencies and their sub-units to identify their roles and the personnel they used for sanctions implementation. GAO also reviewed agency reporting, planning, and budget documents and interviewed agency officials.
What GAO Found
Agencies may have one or more roles in sanctions implementation—for example, developing policy and investigating, enforcing, and prosecuting violations. The Departments of the Treasury, State, and Commerce each have a unit focused primarily on sanctions—Treasury's Office of Foreign Assets Control (OFAC), State's Office of Economic Sanctions Policy and Implementation (SPI), and Commerce's Bureau of Industry and Security's (BIS) Foreign Policy Division (FPD). GAO identified 10 other agencies with roles in sanctions implementation.
OFAC, SPI, and FPD generally received steady or growing resources in recent years, but OFAC and SPI face hiring challenges. In fiscal years 2014 to 2019, OFAC received a 58 percent budget increase and additional hiring authority, but vacancies ranged from 6 to 26 percent of its authorized full time equivalents (FTEs). OFAC attributed its hiring challenges to competition from other agencies and the private sector and the time needed for security clearances. State SPI received authority to hire six additional FTEs in fiscal year 2020, for a total of 21, but more than half of its authorized positions were vacant at the start of the fiscal year. FPD lacks funding to fill one of its 10 authorized positions.
OFAC, SPI, and FPD all consider resource needs as part of annual budget processes, and OFAC has an ongoing process to assess its workforce needs. OFAC began its workforce planning process in fiscal year 2019 and expects to make preliminary recommendations in March 2020. According to SPI officials, SPI cited the increasing use of sanctions across multiple regions in justifying its request for additional fiscal year 2020 positions. BIS prepared a 2016 plan that assessed its workforce, including FPD, but stated that it no longer uses the plan.
Agencies provide information on selected sanctions expenses and activities in mandated reports. Treasury's reports on 25 sanctions programs include expenses for Treasury, State, and other agencies if relevant executive orders identify them. State reported activities for a weapons of mass destruction sanctions program but also reported no specific expenditures for the program. State reviewed program information to prepare the reports, but the reports do not describe what it considered, limiting information available to Congress.
What GAO Recommends
GAO recommends that State include additional information about the expenditures it considers in its reporting for the Proliferation of Weapons of Mass Destruction sanctions program. State concurred with the recommendation. |
<1. FEMA Has Taken Steps to Strengthen Disaster Resilience and Preparedness, but Additional Steps are Needed to Fully Address Remaining Challenges>
We have previously reported on various aspects of national preparedness, including examining the extent to which FEMA programs encourage disaster resilience and identifying gaps in federal preparedness capabilities. We have found that when federal, state, and local efforts aligned to focus on improving disaster resilience and preparedness, there was a noticeable reduction in the effects of the disaster. However, our prior and ongoing work also highlights opportunities to improve disaster resilience and preparedness nationwide.
<1.1. Disaster Resilience>
Hazard mitigation is a key step in building resilience and preparedness against future disasters. In July 2015, we found that states and localities experienced challenges when trying to use federal funds to maximize resilient rebuilding in the wake of a disaster. In particular, they had difficulty navigating multiple federal grant programs and applying federal resources towards their most salient risks because of the fragmented and reactionary nature of the funding. In our 2015 report, we recommended that the Mitigation Framework Leadership group, an interagency body chaired by FEMA, create a National Mitigation Investment Strategy to help federal, state, and local officials plan for and prioritize disaster resilience. As of May 2019, according to FEMA officials, the Mitigation Framework Leadership group is on track to address the recommendation, and they expect the strategy to be published by July 2019. In September 2017, we reported that the methods used to estimate the potential economic effects of climate change in the United States using linked climate science and economics models could inform decision makers about significant potential damages in different U.S. sectors or regions, despite the limitations. For example, for 2020 through 2039, one study estimated between $4 billion and $6 billion in annual coastal property damages from sea level rise and more frequent and intense storms. We found that the federal government has not undertaken strategic government-wide planning on the potential economic effects of climate change to identify significant risks and craft appropriate federal responses. As a result, we recommended that the Executive Office of the President, among others, use information on the potential economic effects of climate change to help identify significant climate risks facing the federal government and craft appropriate federal responses, such as establishing a strategy to identify, prioritize, and guide federal investments to enhance resilience against future disasters; however, as of June 2019, officials have not taken action to address this recommendation. In November 2017, we found that FEMA had taken some actions to better promote hazard mitigation as part of its Public Assistance grant program. However, we also reported that more consistent planning for, and more specific performance measures related to, hazard mitigation could help ensure that mitigation is incorporated into recovery efforts. We recommended, among other things, that FEMA (1) standardize planning efforts for hazard mitigation after a disaster and (2) develop performance measures for the Public Assistance grant program to better align with FEMA's strategic goal for hazard mitigation in the recovery process.
FEMA concurred with our recommendations, and as of March 2019, officials have reported taking steps to increase coordination across its Public Assistance, mitigation, and field operations to ensure hazard mitigation efforts are standardized and integrated into the recovery process. Additionally, FEMA officials reported taking actions to begin developing disaster-specific mitigation performance measures. However, FEMA has yet to finalize these actions, such as by proposing performance measures to FEMA senior leadership. As such, we are continuing to monitor FEMA s efforts to address these recommendations. <1.2. Disaster Preparedness> In March 2011, we reported that FEMA had not completed a comprehensive and measurable national preparedness assessment of capability gaps for example the amount of resources required to save lives, protect property and the environment, and meet basic human needs after an incident has occurred. Developing such an assessment would help FEMA to identify what capability gaps exist and what level of resources are needed to close such gaps. Accordingly, we suggested that FEMA complete a national preparedness assessment to evaluate capability requirements and gaps at each level of government to enable FEMA to prioritize grant funding. As of December 2018, FEMA had efforts underway to assess urban area, state, territory, and tribal preparedness capabilities to inform the prioritization of grant funding; however, the agency had not yet completed a national preparedness assessment with clear, objective, and quantifiable capability requirements against which to assess preparedness. We are continuing to monitor FEMA s efforts to complete such an assessment. Furthermore, in March 2015, we reviewed selected states approaches to budgeting for disaster costs to help inform congressional consideration of the balance between federal and state roles in funding disaster assistance. Specifically, we reported that none of the 10 states in our review maintained reserves dedicated solely for future disasters, and some state officials reported that they could cover disaster costs without dedicated disaster reserves because they generally relied on the federal government to fund most of the costs associated with disaster response and recovery. In response to the 2017 disasters, we also have ongoing work to review national preparedness capabilities to assist communities in responding to and recovering from disasters. Based on our preliminary observations, some states and localities we interviewed reported that while they are prepared to deal with immediate response issues in the aftermath of a disaster, gaps exist in their capacity to support longer term recovery. One reason for this, according to these state and local officials, is because federal preparedness grant funds are largely dedicated to maintaining response capabilities and sustaining personnel costs for local emergency management officials. While these preparedness grants fund critical elements of the national preparedness system, there are some limitations to using them. Specifically, some state and local officials told us that the preparedness grant activities are generally focused on terrorism issues rather than all-hazards. In addition, they reported that the preparedness grants are generally spent on maintaining response capabilities rather than to enhance their capacity for disaster recovery such as additional training and exercises. 
In addition to the state, territory, and urban region assessments that FEMA is conducting, FEMA is currently in the process of developing the first national Threat and Hazard Identification and Risk Assessment. This national assessment may help FEMA and policymakers better understand how to target federal resources in a way that enhances the nation s capacity to respond and recover from future catastrophic or sequential disasters. We are continuing to evaluate national preparedness efforts and plan to report on FEMA s Threat and Hazard Identification and Risk Assessment process in January 2020. <2. FEMA s Response to the 2017 Disasters Highlighted Some Areas of Progress, But also Identified Significant Weaknesses> <2.1. FEMA s Response to the 2017 Disasters> In September 2018, we reported that the response to the 2017 hurricanes and wildfires in Texas, Florida, and California showed progress made since the 2005 federal response to Hurricane Katrina. We also found that FEMA coordinated closely with Texas, Florida, and California emergency management officials and other federal, local, and volunteer emergency partners to implement various emergency preparedness actions prior to the 2017 disasters in each state, and to respond to these disasters. According to FEMA and state officials, these actions helped officials begin addressing a number of challenges they faced such as meeting the demand for a sufficient and adequately-trained disaster workforce and complex issues related to removing debris in a timely manner after the hurricanes and wildfires. In contrast, we also reported in September 2018, that in Puerto Rico and the USVI a variety of challenges such as the far distance of the territories from the U.S. mainland, limited local preparedness for a major hurricane, and outdated local infrastructure complicated response efforts to hurricanes Irma and Maria. Many of the challenges we identified are also described in FEMA s 2017 Hurricane Season FEMA After-Action Report, including: the sequential and overlapping timing of the three hurricanes with Maria being the last of the three caused staffing shortages and required FEMA to shift staff to the territories that were already deployed to other disasters; the far distance of both territories from the U.S. mainland complicated efforts to deploy federal resources and personnel quickly; and the incapacitation of local response functions due to widespread devastation and loss of power and communications, and limited preparedness by Puerto Rico and the USVI for a category 5 hurricane resulted in FEMA having to assume response functions that territories would usually perform themselves. We also reported that FEMA s 2017 Hurricane Season FEMA After-Action Report noted that FEMA could have better leveraged information from preparedness exercises in the Caribbean, including a 2011 exercise after- action report for Puerto Rico which indicated that the territory would require extensive federal support during a large scale disaster in moving commodities from the mainland to the territory and to distribution points throughout. In our September 2018 report, we also found that FEMA s efforts in Puerto Rico after Hurricane Maria were the largest and longest single response in the agency s history. 
According to FEMA, the agency s response included, among other things, bringing in approximately $1 billion in food and supplies; and distributing food, commodities, and medicine via approximately 1,400 flights, which constituted the longest sustained air operations in U.S. disaster history. FEMA officials explained that the agency essentially served as the first responder in the early response efforts in Puerto Rico, and many of services FEMA provided such as power restoration, debris removal, and commodity distribution were typically provided by territorial or local governments. We also reported in September 2018, that in the USVI, recent disaster training and the pre-positioning of supplies due to the anticipated impact of Hurricane Irma facilitated the response efforts for Hurricane Maria, which made landfall less than two weeks later. According to FEMA s federal coordinating officer, the lead federal official in charge of response for the USVI, the federal government deployed assets, including urban search and rescue teams and medical assistance teams. In addition, due to the sequence of Hurricane Irma hitting the USVI immediately before Hurricane Maria, the Department of Defense (DOD) already had personnel and resources (i.e., ships) deployed to the area, which enabled DOD to respond to Hurricane Maria faster than it otherwise would have. Additional challenges we have reported on regarding response operations have included providing short-term housing and sheltering for disaster survivors. The Department of Homeland Security s (DHS) 2017 National Preparedness Report states that providing effective and affordable short- term housing for disaster survivors has been a longstanding and continuing challenge. For example, following the California wildfires, local officials faced challenges identifying shelter for displaced survivors, in part due to a housing shortage that existed before the wildfires. Federal, state, and local officials formed housing task forces which facilitated a joint decision-making approach to address these challenges. While this approach has enabled the state to meet its most pressing short-term housing needs, according to FEMA officials, the state faces other challenges in the long term. For example, FEMA officials in the region covering California told us that because of the nature of damage following a wildfire and because of housing shortages in California, some of FEMA s forms of housing assistance have been less relevant in the wake of the California wildfires than for other disasters. We will continue to evaluate these and other challenges and plan to report in fall 2019. We also have ongoing work to review efforts to provide mass care which includes sheltering, feeding and providing emergency supplies following the 2017 hurricanes. Our preliminary observations indicate that during and immediately following the hurricanes, the number of people seeking public shelters outpaced the capacity. In Texas and Florida, emergency managers we spoke with described having unprecedented numbers of residents needing shelters but not always enough staff initially to operate the shelters. In Texas, Puerto Rico, and the USVI, hurricanes Harvey, Irma, and Maria flooded or destroyed many buildings planned for use as shelters, according to emergency management and local government officials in these areas. As a result, some remaining shelters were at maximum capacity. 
In the USVI, residents of some public housing units that had sustained significant damages sought help at the territory s Department of Human Services because there was no more space in the shelters, according to local government officials. While they were turned away from the shelters, these families were able to take refuge in the lobby of the Department of Human Services building. We will continue to evaluate these and other challenges and plan to report in summer 2019. <2.2. FEMA Disaster Contracting> In December 2018 and April 2019, we reported that, in response to hurricanes Harvey, Irma, and Maria, as well as the 2017 California wildfires, FEMA and other federal partners relied heavily on advance contracts which are established before a disaster to provide for life- sustaining goods and services such as food, water and transportation typically needed immediately after a disaster and post disaster contracts which can be used for various goods and services, such as debris removal and installation of power transmission equipment. FEMA is required to coordinate with states and localities and encourage them to establish their own advance contracts with vendors. In December 2018, we reported on inconsistencies we found in that coordination and in the information FEMA used to coordinate with states and localities on advance contracts. As a result of this and other challenges identified, we made nine recommendations to FEMA, including that it update its strategy and guidance to clarify the use of advance contracts, improve the timeliness of its acquisition planning activities, revise its methodology for reporting disaster contracting actions to Congress, and provide more consistent guidance and information for contracting officers in coordinating with states and localities to establish advance contracts. FEMA concurred with all of these recommendations, and we are continuing to monitor its efforts to implement each recommendation. Furthermore, in April 2019, we reported on challenges that we found in the federal government s use of post-disaster contracts. These challenges included a lack of transparency about contract actions, challenges with requirements development, and with interagency coordination. In our report, we found that FEMA had begun taking some steps to address the consistency of post-disaster contract requirements with contracting officers, but that inaccurate or untimely estimates in the contracts we reviewed sometimes resulted in delays meeting the needs of survivors. As a result of our findings in this report, we made 10 recommendations to FEMA and other federal agencies that use these post-disaster contracts related to improving the management of such contracts. FEMA and other agency officials concurred with nine of the recommendations and have reported taking actions to begin implementing them. We will continue to monitor FEMA s progress in fully addressing these recommendations. <3. FEMA Provides Long Term Disaster Recovery Support, but State and Local Officials Cited Continued Challenges Managing Complex Recovery Assistance Programs> FEMA provides multiple forms of disaster recovery assistance after a major disaster has been declared, including Public Assistance and Individual Assistance. 
Through these grant programs, FEMA obligates billions of dollars to state, tribal, territorial, and local governments, certain nonprofit organizations, and individuals that have suffered injury or damages from major disaster or emergency incidents, such as hurricanes, tornados, or wildfires. In September 2016, we reported that, from fiscal years 2005 through 2014, FEMA obligated almost $46 billion for the Public Assistance program and over $25 billion for the Individual Assistance program. According to FEMA s May 2019 Disaster Relief Fund report, total projected obligations through fiscal year 2019 for the Public Assistance and Individual Assistance programs for just the 2017 hurricanes Harvey, Irma, and Maria are roughly $16 billion and $7 billion, respectively. Given the high cost of these programs, it is imperative that FEMA continue to make progress on the challenges we have identified in our prior and ongoing work regarding its recovery efforts. <3.1. FEMA Public Assistance Grants for Disaster Recovery> FEMA s Public Assistance program provides grants to state, tribal, territorial, and local governments for debris removal; emergency protective measures; and the repair, replacement, or restoration of disaster-damaged, publicly owned facilities. It is a complex and multistep program administered through a partnership among FEMA, the state, and local officials. Prior to implementing the Public Assistance program, FEMA determines a state, territorial or tribal government s eligibility for the program using the per capita damage indicator. In our September 2018 report on federal response and recovery efforts for the 2017 hurricanes and wildfires, we reported on FEMA s implementation of the Public Assistance program, which has recently undergone significant changes as a result of federal legislation and agency initiatives. Specifically, we reported on FEMA s use of its redesigned delivery model for providing grants under the Public Assistance program, as well as the alternative procedures for administering or receiving such grant funds that FEMA allows states, territories, and local governments to use for their recovery. Our prior and ongoing work highlights both progress and challenges with FEMA s Public Assistance program, including the agency s methodology for determining program eligibility, the redesigned delivery model, and the program s alternative procedures. FEMA s Public Assistance program provides grants to repair public infrastructure such as water storage systems, roads, and power lines. In September 2012, we found that FEMA primarily relied on a single criterion, the per capita damage indicator, to determine a jurisdiction s eligibility for Public Assistance funding. However, because FEMA s current per capita indicator, set at $1 in 1986, does not reflect the rise in (1) per capita personal income since it was created in 1986 or (2) inflation from 1986 to 1999, the indicator is artificially low. Our analysis of actual and projected obligations for 508 disaster declarations in which Public Assistance was awarded during fiscal years 2004 through 2011 showed that fewer disasters would have met either the personal income-adjusted or the inflation-adjusted Public Assistance per capita indicators for the years in which the disaster was declared. 
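To make the mechanics of the per capita damage indicator concrete, the sketch below compares a jurisdiction's estimated per capita damage to the indicator at two threshold levels. All figures, including the indicator values and the adjustment factor, are hypothetical illustrations rather than FEMA's actual thresholds or GAO's adjusted values; the point is only that raising the threshold through a personal-income or inflation adjustment means fewer jurisdictions clear it.

```python
# Illustration of how adjusting the per capita damage indicator changes
# eligibility outcomes. The damage and population figures, indicator values,
# and adjustment factor are hypothetical, not GAO's or FEMA's calculations.

def per_capita_damage(total_damage: float, population: int) -> float:
    return total_damage / population

def meets_indicator(total_damage: float, population: int, indicator: float) -> bool:
    """A jurisdiction meets the indicator when per capita damage is at least the threshold."""
    return per_capita_damage(total_damage, population) >= indicator

population = 4_000_000
total_damage = 6_500_000          # hypothetical statewide damage estimate

current_indicator = 1.55          # hypothetical indicator value, in dollars per capita
adjusted_indicator = 1.55 * 1.8   # same indicator under a hypothetical larger adjustment

print(meets_indicator(total_damage, population, current_indicator))   # True  (about $1.63 per capita)
print(meets_indicator(total_damage, population, adjusted_indicator))  # False (threshold rises to $2.79)
```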
Thus, had the indicator been adjusted annually since 1986 for personal income or inflation, fewer jurisdictions would have met the eligibility criteria that FEMA primarily used to determine whether federal assistance should be provided, which would have likely resulted in fewer disaster declarations and lower federal costs. We recommended, among other things, that FEMA develop and implement a methodology that more comprehensively assesses a jurisdiction s capacity to respond to and recover from a disaster without federal assistance, including fiscal capacity and consideration of response and recovery capabilities. DHS concurred with our recommendation and, in January 2016, FEMA was considering establishing a disaster deductible, which would have required a predetermined level of financial or other commitment before FEMA would have provided assistance under the Public Assistance program. In August 2018, FEMA told us that it was no longer pursuing its proposed disaster deductible due to concerns about the complexity of the proposal. FEMA is considering options that leverage similar approaches, but does not have an estimated completion date for implementation. In addition, the DRRA requires FEMA to initiate rulemaking to (1) update the factors considered when evaluating requests for major disaster declarations, including reviewing how FEMA estimates the cost of major disaster assistance, and (2) consider other impacts on the capacity of a jurisdiction to respond to disasters, by October 2020. Until FEMA implements a new methodology, the agency will not have an accurate assessment of a jurisdiction s capabilities and runs the risk of recommending that the President award Public Assistance to jurisdictions that have the capacity to respond and recover on their own. <3.1.1. Redesigned Public Assistance Delivery Model> Prior to our September 2018 report, we had previously reported on the Public Assistance program in November 2017. Specifically, we reported that FEMA redesigned the delivery model for providing grants under the Public Assistance program. As part of the redesign effort, FEMA developed a new, web-based case management system to address past challenges, such as difficulties in sharing grant documentation among FEMA, state, and local officials and tracking the status of Public Assistance projects. Both FEMA and state officials involved in testing of the redesigned delivery model stated that the new case management system s capabilities could lead to greater transparency and efficiencies in the program. However, we found that FEMA had not fully addressed two key information technology management controls that are necessary to ensure systems work effectively and meet user needs. We recommended, among other things, that FEMA (1) establish controls for tracking the development of system requirements, and (2) establish system testing criteria, roles and responsibilities, and the sequence and schedule for integration of other relevant systems. FEMA concurred with these recommendations and has fully implemented the first recommendation. Regarding the second recommendation, FEMA has not yet finalized its decision on whether to integrate its new case management system with its current grants management system. As of March 2019, we are awaiting a final decision from officials to determine whether their actions fully address our recommendation. FEMA s original intention was to implement the redesigned delivery model for all future disasters beginning in January 2018. 
However, in September 2017, FEMA expedited full implementation of the redesigned model shortly after Hurricane Harvey made landfall. In September 2018, we reported that local officials continued to experience challenges with using the new Public Assistance web-based, case management system following the 2017 disasters, such as not having sufficient guidance on how to use the new system and delays with FEMA s processing of their projects. <3.1.2. Public Assistance Alternative Procedures in the United States Virgin Islands and Puerto Rico> In February 2019, we also reported that FEMA and the USVI were transitioning from using the standard Public Assistance program to using Public Assistance alternative procedures. FEMA and USVI officials stated that the alternative procedures will give the USVI more flexibility in determining when and how to fund projects and allow the territory to use any excess funds for cost-effective hazard mitigation measures, among other uses. Further, when using the alternative procedures, the Bipartisan Budget Act of 2018 allows FEMA, the USVI and Puerto Rico to repair and rebuild critical services infrastructure such as medical and education facilities so it meets industry standards without regard to pre-disaster condition (see Figure 1). Regarding the implementation of the Public Assistance program in Puerto Rico, in March 2019, we reported that Puerto Rico established a central recovery office to oversee federal recovery funds and was developing an internal controls plan to help ensure better management and accountability of the funds. In the interim, FEMA instituted a manual process for reviewing each reimbursement request before providing Public Assistance funds to mitigate risk and help ensure financial accountability. We also reported that officials we interviewed from FEMA, Puerto Rico s central recovery office, and municipalities said they experienced initial challenges with the recovery process, including concerns about lack of experience and knowledge of the alternative procedures; concerns about missing, incomplete, or conflicting guidance on the alternative procedures; and concerns that municipalities had not been fully reimbursed for work already completed after the hurricanes, causing financial hardships in some municipalities. FEMA officials stated that the agency is taking actions to address reported recovery challenges, such as additional training for new FEMA employees and drafting supplemental guidance for the alternative procedures process. We continue to monitor FEMA s efforts in our ongoing work. As part of our ongoing work, we are continuing to examine hurricane recovery efforts in the USVI and Puerto Rico. Our preliminary observations indicate that the USVI plans to take a cautious approach in pursuing permanent work projects using the Public Assistance alternative procedures program, which requires the use of fixed-cost estimates. Specifically, USVI officials we interviewed told us that developing such fixed-cost estimates that accurately incorporate the future impact of inflation and increases in materials and labor costs for certain projects was difficult. Further, these officials stated that since the territory is financially responsible for any costs that exceed these fixed-cost estimates, the USVI plans to pursue projects that do not include high levels of complexity or uncertainty to reduce the risk of cost overruns. 
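The sensitivity that USVI officials described can be illustrated with simple compounding. In the sketch below, all dollar figures, escalation rates, and timelines are hypothetical and do not reflect any actual USVI or FEMA estimate; it shows how a gap between the escalation rate assumed in a fixed-cost estimate and the escalation actually experienced becomes a cost overrun that, under the alternative procedures, the territory would bear.

```python
# Sketch of why fixed-cost estimates are sensitive to inflation assumptions
# under the Public Assistance alternative procedures. All figures and rates
# are hypothetical.

def escalated_cost(base_cost: float, annual_rate: float, years_until_built: int) -> float:
    """Escalate today's construction cost by a compound annual rate."""
    return base_cost * (1 + annual_rate) ** years_until_built

base_cost = 10_000_000      # hypothetical current cost to rebuild a facility
years_until_built = 4

fixed_estimate = escalated_cost(base_cost, 0.03, years_until_built)   # assumed 3 percent escalation
actual_cost = escalated_cost(base_cost, 0.06, years_until_built)      # realized 6 percent escalation

overrun = actual_cost - fixed_estimate
print(f"Fixed-cost estimate: ${fixed_estimate:,.0f}")   # about $11.3 million
print(f"Actual cost:         ${actual_cost:,.0f}")      # about $12.6 million
print(f"Overrun borne by the territory: ${overrun:,.0f}")
```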
From our ongoing work on Puerto Rico's recovery efforts, we have learned that, in March 2019, Puerto Rico's central recovery office released the Disaster Recovery Federal Funds Management Guide, including an internal controls plan for the operation of the recovery office. On April 1, 2019, FEMA removed the manual reimbursement process and began a transition to allow the central recovery office to take responsibility for review and reimbursement approval of federal recovery funds. We will review this transition process as a part of our ongoing work. Our preliminary observations also indicate that some of the challenges we reported in our March 2019 report continue. For example, officials from Puerto Rico's central government agencies told us they did not feel they had sufficient guidance on the FEMA Public Assistance program and, where they did, written and verbal FEMA guidance was inconsistent or conflicting. For example, officials from one agency expressed their desire for more FEMA guidance communicated in writing as it frequently happened that different FEMA officials would interpret existing guidance differently. Similarly, officials from two agencies described situations where they had initially been directed to follow one interpretation of a policy, only to be directed to follow a different, conflicting interpretation in the subsequent months. Puerto Rico agency officials also stated that the lack of sufficient instruction led to a back-and-forth with FEMA for clarifications, which led to delays in the phases of project development. FEMA officials in Puerto Rico stated that the agency has developed specific guidance for disaster recovery in Puerto Rico and that there are various ways, such as in-person meetings, where officials from Puerto Rico can obtain clarification. We are continuing to examine this issue as part of our ongoing review of Puerto Rico's recovery. In addition, our preliminary observations from our ongoing work for both the USVI and Puerto Rico indicate that FEMA, USVI, and Puerto Rico officials have reported challenges with the implementation of the flexibilities authorized by section 20601 of the Bipartisan Budget Act. This section of the Act allows for the provision of assistance under the Public Assistance alternative procedures to restore disaster-damaged facilities or systems that provide critical services to an industry standard without regard to pre-disaster condition. Officials from Puerto Rico's central government stated that they disagreed with FEMA's interpretation of the types of damages covered by section 20601 of the Bipartisan Budget Act of 2018. In response, FEMA officials in Puerto Rico stated they held several briefings with Puerto Rico's central recovery office to explain FEMA's interpretation of the section. Further, FEMA officials in the USVI told us that initially, they had difficulty obtaining clarification from FEMA headquarters regarding how to implement key components of section 20601 of the Act. As of May 2019, FEMA officials in the USVI stated that they continue to move forward with developing alternative procedures projects. USVI officials also told us that FEMA had been responsive and helpful in identifying its options for using the new authorities the Act provides. We will continue to evaluate these identified challenges and any efforts to address them, as well as other aspects of recovery efforts in the USVI and Puerto Rico, and plan to report our findings in late 2019 and early 2020, respectively.
<3.2. FEMA Individual Assistance>
FEMA's Individuals and Households Program provides individuals with financial assistance, such as grants to help repair or replace damaged homes, and temporary direct housing assistance, such as recreational vehicles. The Individual Assistance program provides financial and direct assistance to disaster victims for expenses and needs that cannot be met through other means, such as insurance. In May 2019, we reported on FEMA's effort to provide disaster assistance under the Individual Assistance program to older adults and people with disabilities following the 2017 hurricanes. We found that aspects of the application process for FEMA assistance were challenging for older individuals and those with disabilities. Further, according to stakeholders and FEMA officials, disability-related questions in the Individual Assistance registration materials were confusing and easily misinterpreted. While FEMA had made some efforts to help registrants interpret the questions, we recommended, among other things, that FEMA (1) implement new registration-intake questions that improve FEMA's ability to identify and address survivors' disability-related needs, and (2) improve communication of registrants' disability-related information across FEMA programs. DHS concurred with the first recommendation and described steps FEMA plans to take, or is in the process of taking, to address it. However, DHS did not concur with the second recommendation, noting that it lacks specific funding to augment its legacy data systems. FEMA officials stated that they began a long-term data management improvement initiative in April 2017, which they expect will ease efforts to share and flag specific disability-related data. While we acknowledge FEMA's concerns about changing legacy systems when it has existing plans to replace those systems, we continue to believe there are other cost-effective ways that are likely to improve communication of registrants' disability-related information prior to implementing the system upgrades. For example, FEMA could revise its guidance to remind program officials to review the survivor case file notes to identify whether there is a record of any disability-related needs. We also have work underway to assess FEMA's Individuals and Households Program, a component program of Individual Assistance. Through this program, as of April 2019, FEMA had awarded roughly $4.7 billion in assistance to almost 1.8 million individuals and households for federally-declared disasters occurring in 2017 and 2018. Specifically, we are analyzing Individuals and Households Program expenditures and registration data for recent years; reviewing FEMA's processes, policies, and procedures for making eligibility and award determinations; and examining survivors' reported experiences with this program, including any challenges, for major disaster declarations occurring in recent years. We plan to report our findings in early 2020.
<4. Longstanding Workforce Management and Information Technology Challenges Exacerbate Key Issues with Response and Recovery Operations>
<4.1. FEMA Workforce Management Challenges>
FEMA's experiences during the 2017 disasters highlight the importance of continuing to make progress on addressing the long-standing workforce management challenges we have previously reported on and continue to observe in our ongoing work.
In September 2018, we reported that the 2017 disasters hurricanes Harvey, Irma, and Maria, as well as the California wildfires resulted in unprecedented FEMA workforce management challenges, including recruiting, maintaining, and deploying a sufficient and adequately-trained FEMA disaster workforce. FEMA s available workforce was overwhelmed by the response needs caused by the sequential and overlapping timing of the three hurricanes. For example, at the height of FEMA workforce deployments in October 2017, 54 percent of staff were serving in a capacity in which they did not hold the title of Qualified according to FEMA s qualification system standards a past challenge we identified. FEMA officials noted that staff shortages, and lack of trained personnel with program expertise led to complications in its response efforts, particularly after Hurricane Maria. In February 2016, we reported on, among other things, FEMA s efforts to implement, assess, and improve its Incident Management Assistance Team program. We found that while FEMA used some leading practices in managing the program, it lacked a standardized plan to ensure that all national and regional Incident Management Assistance Team members received required training. Further, we found that the program had experienced high attrition since its implementation in fiscal year 2013. We recommended, among other things, that FEMA develop (1) a plan to ensure that Incident Management Assistance Teams receive required training, and (2) a workforce strategy for retaining Incident Management Assistance Team staff. DHS concurred with the recommendations. FEMA fully implemented our first recommendation by developing an Incident Management Assistance Team Training and Readiness Manual and providing a training schedule for fiscal year 2017. In response to the second recommendation, FEMA officials stated in July 2018 that they plan to develop policies that will provide guidance on a new workforce structure, incentives for Incident Management Assistance Team personnel, and pay-for-performance and all other human resource actions. We are continuing to monitor FEMA s efforts to address this recommendation. In November and December 2017, we reported on staffing challenges in FEMA s Public Assistance program. In November 2017, we reported on FEMA s efforts to address past workforce management challenges through its redesigned Public Assistance delivery model. As part of the redesign effort, FEMA created consolidated resource centers to standardize and centralize Public Assistance staff responsible for managing grant applications, and new specialized positions to ensure more consistent guidance to applicants. However, we found that FEMA had not assessed the workforce needed to fully implement the redesigned model, such as the number of staff needed to fill certain new positions, or to achieve staffing goals. Further, in December 2017, we reported on FEMA s management of its Public Assistance appeals process, including that FEMA increased staffing levels for the appeals process from 2015 to 2017. However, we found that FEMA continued to face a number of workforce challenges, such as staff vacancies, turnover, and delays in training, which contributed to processing delays. 
Based on our findings from our November and December 2017 reports, we recommended, among other things, that FEMA (1) complete workforce staffing assessments that identify the appropriate number of staff needed to implement the redesigned Public Assistance delivery model, and (2) document steps for hiring, training, and retaining key appeals staff, and address staff transitions resulting from deployments to disasters. FEMA concurred with our recommendations to address workforce management challenges in the Public Assistance program and have reported taking some actions in response. For example, to address the first recommendation, FEMA officials have developed preliminary models and estimates of staffing needs across various programs, including Public Assistance, and plan to reevaluate the appropriate number of staff needed and present recommendations to senior leadership by the end of June 2019. To address the second recommendation, FEMA has collected information on the amount of time regional appeals analysts spend on appeals, and the inventory and timeliness of different types of appeals. FEMA officials stated in September 2018 that they plan to assess this information to prepare a detailed regional workforce plan. As of June 2019, we are evaluating plans and documents provided by FEMA to determine whether they have fully addressed this recommendation. In our March 2019 report on the status of recovery efforts in Puerto Rico, we also reported Puerto Rico officials concerns about FEMA staff turnover and lack of knowledge among FEMA staff about how the Public Assistance alternative procedures are to be applied in Puerto Rico. As part of our ongoing work, we are continuing to examine recovery efforts in Puerto Rico. Our preliminary observations indicate that the concerns we reported on in our March 2019 report continue. For example, Puerto Rico agency officials said that the lack of continuity in FEMA personnel has been a challenge for communication and project development. Further, officials from all seven Puerto Rico government agencies we interviewed felt that the FEMA staff they interacted with did not have a complete understanding of FEMA processes and policies. We are continuing to evaluate FEMA s recovery efforts in Puerto Rico and plan to issue our findings in late 2019. In April 2019, we reported on the federal government s contracting efforts for preparedness, response, and recovery efforts related to the 2017 hurricanes and California wildfires. We found, among other things, that contracting workforce shortages continue to be a challenge for disaster response and recovery. Further, although FEMA s 2017 after-action report recommended increasing contract support capacities, it did not provide a specific plan to do so. We also found that while FEMA evaluated its contracting workforce needs in a 2014 workforce analysis, it did not specifically consider contracting workforce needs in the regional offices or address Disaster Acquisition Response Team employees. In our April 2019 report, we recommended, among other things, that FEMA assess its workforce needs including staffing levels, mission needs, and skill gaps for contracting staff, to include regional offices and Disaster Acquisition Response Teams, and develop a plan, including timelines, to address any gaps. FEMA concurred with this recommendation and estimates that it will implement it in September 2019. 
In our May 2019 report on FEMA disaster assistance to older adults and people with disabilities following the 2017 hurricanes, we found that FEMA began implementing a new approach to assist individuals with disabilities in June 2018, which shifted the responsibility for directly assisting individuals with disabilities from Disability Integration Advisors which are staff FEMA deploys specifically to identify and recommend actions needed to support survivors with disabilities to all FEMA staff. To implement this new approach, FEMA planned to train all of the agency s deployable staff and staff in programmatic offices on disability issues during response and recovery deployments. According to FEMA, a number of Disability Integration Advisors would also deploy to advise FEMA leadership in the field during disaster response and recovery. We found that while FEMA has taken some initial steps to provide training on the changes, it has not established a plan for delivering comprehensive disability-related training to all staff who will be directly interacting with individuals with disabilities. We recommended, among other things, that FEMA develop a plan for delivering training to FEMA staff that promotes competency in disability awareness and includes milestones and performance measures, and outlines how performance will be monitored. DHS concurred with this recommendation; however, officials stated that FEMA is developing a plan to include a disability integration competency in the guidance provided for all deployable staff, rather than through training. We will monitor FEMA s efforts to develop this plan and fully address our recommendation. In addition to our prior work on FEMA s workforce management challenges related to specific programs and functions, we are continuing to evaluate FEMA s workforce capacity and training efforts during the 2017 and 2018 disaster seasons. Our preliminary observations indicate that there were challenges in FEMA s ability to deploy staff with the right kinds of skills and training at the right time to best meet the needs of various disaster events. For example, according to FEMA field leadership we interviewed, for some of the functions FEMA performs in the field, FEMA had too few staff with the right technical skills to perform their missions such as inspections of damaged properties efficiently and effectively. For other functions, these managers also reported that they had too many staff in the early stages of the disaster, which created challenges with assigning duties and providing on-the-job training. For example, some managers reported that they were allocated more staff than needed in the initial phases of the disaster, but many lacked experience and were without someone to provide direction and mentoring to ensure they used their time efficiently and gained competence more quickly. Groups of FEMA field managers we interviewed told us that difficulties deploying the right mix of staff with the right skills led to challenges such as making purchases to support FEMA operations, problems with properly registering applicants for FEMA programs, or poor communication with nonfederal partners. Nonetheless, FEMA staff have noted that, despite any suboptimal circumstances during disaster response, they aimed to and have been able to find a way to deliver the mission. 
As part of this ongoing work, FEMA field leadership and managers also reported challenges using agency systems to ensure the availability of the right staff with the right skills in the right place and time. FEMA uses a system called the Deployment Tracking System to, among other things, help identify staff available to be deployed and activate and track deployments. To help gauge the experience level and training needs of its staff, the agency established the FEMA Qualification System (FQS), which is a set of processes and criteria to monitor staff experience in competently performing tasks and completing training that correspond to their job titles. According to the FQS guidance, staff who have been able to demonstrate proficient performance of all the relevant tasks and complete required training receive the designation qualified, and are expected to be ready and able to competently fulfill their responsibilities. Those who have not, receive the designation trainee, and can be expected to need additional guidance and on-the-job training. FQS designations feed into the Deployment Tracking System as one key variable in how the tracking system deploys staff. Among other challenges with FEMA s Deployment Tracking System and Qualification System, FEMA managers and staff in the field told us an employee s recorded qualification status was not a reliable indicator of the level at which deployed personnel would be capable of performing specific duties and responsibilities or their general proficiency in their positions, making it more difficult for managers to know the specialized skills or experience of staff and effectively build teams. We are continuing to assess these and other reported workforce challenges and plan to report our findings in January 2020. <4.2. FEMA Information Technology Challenges> In April 2019, we reported on FEMA s Grants Management Modernization program, which is intended to replace the agency s 10 legacy grants management systems and modernize and streamline the grants management environment. We found that, of six important leading practices for effective business process reengineering and information technology requirements management, FEMA fully implemented four and partially implemented two for the Grants Management Modernization program. The two partially implemented leading practices were (1) establishing plans for implementing new business processes and (2) establishing complete traceability of information technology requirements. In addition, we found that the program s initial May 2017 cost estimate of about $251 million was generally consistent with leading practices for a reliable, high-quality estimate; however, it no longer reflected the current assumptions about the program at the time of our review. Moreover, the program s schedule specifically its final delivery date of September 2020 did not reflect leading practices for project schedules, as the date was not informed by a realistic assessment of development activities. Lastly, we found that FEMA fully addressed three and partially addressed two of five key cybersecurity practices. The two partially addressed practices were (1) assessing security controls, and (2) obtaining an authorization to operate the system. We made 8 recommendations to FEMA to implement leading practices related to reengineering processes, managing information technology requirements, scheduling system development activities, and implementing cybersecurity. 
DHS concurred with all of our recommendations and provided estimated completion dates for implementing each of them through July 2020. Thank you, Chairman Rouda, Ranking Member Comer, and Members of the Subcommittee. This concludes my prepared statement. I would be happy to respond to any questions you may have at this time.
<5. GAO Contact and Staff Acknowledgements>
If you or your staff has any questions concerning this testimony, please contact Christopher P. Currie at (404) 679-1875 or curriec@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this statement were Joel Aldape (Assistant Director), Matthew T. Lowney (Analyst-in-Charge), Rebecca Mendelsohn, David (Ben) Nelson, and Amanda R. Parker. In addition, Aditi Archer, Bryan Bourgault, Lorraine Ettaro, Aaron Gluck, Kathryn Godfrey, Taylor Hadfield, Eric Hauswirth, Robert (Denton) Herring, Adam Hoffman, Susan Hsu, Sara Kelly, Amy Moran Lowe, Heidi Nielson, Danielle Pakdaman, Sara Pelton, Amanda Prichard, and Johanna Wong made contributions to this statement. Key contributors for the previous work that this is based on are listed in each product.
Enclosure I: Related GAO Products Previously Issued
Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP, March 1, 2011.
Federal Disaster Assistance: Improved Criteria Needed to Assess a Jurisdiction's Capability to Respond and Recover on Its Own. GAO-12-838, September 12, 2012.
Fiscal Exposures: Improving Cost Recognition in the Federal Budget. GAO-14-28, October 29, 2013.
Emergency Preparedness: Opportunities Exist to Strengthen Interagency Assessments and Accountability for Closing Capability Gaps. GAO-15-20, December 4, 2014.
High-Risk Series: An Update. GAO-15-290, February 11, 2015.
Budgeting for Disasters: Approaches to Budgeting for Disasters in Selected States. GAO-15-424, March 26, 2015.
Hurricane Sandy: An Investment Strategy Could Help the Federal Government Enhance National Resilience for Future Disasters. GAO-15-515, July 30, 2015.
Disaster Response: FEMA Has Made Progress Implementing Key Programs, but Opportunities for Improvement Exist. GAO-16-87, February 5, 2016.
Disaster Recovery: FEMA Needs to Assess Its Effectiveness in Implementing the National Disaster Recovery Framework. GAO-16-476, May 26, 2016.
Federal Disaster Assistance: Federal Departments and Agencies Obligated at Least $277.6 Billion during Fiscal Years 2005 through 2014. GAO-16-797, September 22, 2016.
Climate Change: Information on Potential Economic Effects Could Help Guide Federal Efforts to Reduce Fiscal Exposure. GAO-17-720, September 28, 2017.
Disaster Assistance: Opportunities to Enhance Implementation of the Redesigned Public Assistance Grant Program. GAO-18-30, November 8, 2017.
Disaster Recovery: Additional Actions Would Improve Data Quality and Timeliness of FEMA's Public Assistance Appeals Processing. GAO-18-143, December 15, 2017.
2017 Disaster Contracting: Observations on Federal Contracting for Response and Recovery Efforts. GAO-18-335, February 28, 2018.
Federal Disaster Assistance: Individual Assistance Requests Often Granted but FEMA Could Better Document Factors Considered. GAO-18-366, May 31, 2018.
2017 Hurricanes and Wildfires: Initial Observations on the Federal Response and Key Recovery Challenges. GAO-18-472, September 4, 2018.
Homeland Security Grant Program: Additional Actions Could Further Enhance FEMA's Risk-Based Grant Assessment Model. GAO-18-354, September 6, 2018.
Continuity of Operations: Actions Needed to Strengthen FEMA's Oversight and Coordination of Executive Branch Readiness. GAO-19-18SU, November 26, 2018.
2017 Disaster Contracting: Action Needed to Better Ensure More Effective Use and Management of Advance Contracts. GAO-19-93, December 6, 2018.
U.S. Virgin Islands Recovery: Status of FEMA Public Assistance Funding and Implementation. GAO-19-253, February 25, 2019.
High-Risk Series: Substantial Efforts Needed to Achieve Greater Progress on High-Risk Areas. GAO-19-157SP, March 6, 2019.
Puerto Rico Hurricanes: Status of FEMA Funding, Oversight, and Recovery Challenges. GAO-19-256, March 14, 2019.
Huracanes de Puerto Rico: Estado de Financiamiento de FEMA, Supervisión y Desafíos de Recuperación. GAO-19-331, March 14, 2019.
Disaster Recovery: Better Monitoring of Block Grant Funds Is Needed. GAO-19-232, March 25, 2019.
FEMA Grants Modernization: Improvements Needed to Strengthen Program Management and Cybersecurity. GAO-19-164, April 9, 2019.
2017 Hurricane Season: Federal Support for Electricity Grid Restoration in the U.S. Virgin Islands and Puerto Rico. GAO-19-296, April 18, 2019.
Disaster Contracting: Actions Needed to Improve the Use of Post-Disaster Contracts to Support Response and Recovery. GAO-19-281, April 24, 2019.
Disaster Assistance: FEMA Action Needed to Better Support Individuals Who Are Older or Have Disabilities. GAO-19-318, May 14, 2019.
Enclosure II: Ongoing GAO Reviews
1. Review of U.S. Virgin Islands recovery planning and progress;
2. Puerto Rico disaster recovery planning and progress;
3. 2017 wildfire response and recovery;
4. Federal internal control plans for disaster assistance funding;
5. Electricity grid restoration and resilience after the 2017 hurricane season;
6. Mass care sheltering and feeding challenges during the 2017 hurricane season;
7. Department of Transportation highway and transit emergency relief programs;
8. Drinking water and wastewater utility resilience;
9. Review of disaster death count information in selected states and territories;
10. Department of Health and Human Services disaster response efforts;
11. Disaster and climate change impacts on Superfund sites;
12. FEMA Public Assistance program fraud risk management efforts;
13. Wildland fire collaboration on fuel reduction efforts;
14. Preparedness challenges and lessons learned from the 2017 disasters;
15. FEMA workforce management and challenges;
16. Small Business Administration response to 2017 disasters;
17. Development of the GAO disaster resilience framework;
18. FEMA Individuals and Households Program operations;
19. National Flood Insurance Program post-flood enforcement;
20. Emergency alerting capabilities and progress;
21. National Flood Insurance Program buyouts and property acquisitions;
22. Economic costs of large-scale natural disasters;
23. Community Development Block Grants disaster recovery; and
24. Disaster Housing Assistance Program.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study
Recent hurricanes, wildfires, and flooding have highlighted the challenges the federal government faces in responding effectively to natural disasters. The 2017 and 2018 hurricanes and wildfires affected millions of individuals and caused billions of dollars in damage. In March 2019, the Midwest experienced historic flooding that affected millions of acres of agriculture and damaged significant infrastructure. Since 2005, federal funding for disaster assistance has totaled at least $450 billion. Increasing reliance on federal help to address natural disasters is a key source of federal fiscal exposure, particularly as certain extreme weather events become more frequent and intense due to climate change.
This statement discusses, among other things, FEMA's progress and challenges related to disaster resilience, response, recovery, and workforce management. It is based on GAO reports issued from March 2011 through May 2019 and also includes preliminary observations from ongoing GAO reviews of FEMA operations. For ongoing work, GAO reviewed federal laws; analyzed documents; interviewed agency officials; and visited disaster-damaged areas in California, Florida, South Carolina, North Carolina, Puerto Rico, Texas, and the U.S. Virgin Islands, where GAO also interviewed FEMA and local officials.
What GAO Found
GAO's issued and ongoing work identified progress and challenges in the Federal Emergency Management Agency's (FEMA) disaster resilience, response, recovery, and workforce management efforts, as discussed below.
Disaster Resilience. GAO found that federal and local efforts to improve resilience can reduce the effects and costs of future disasters. FEMA has made progress in this area, but in July 2015, GAO found that states and localities faced challenges using federal funds to maximize resilient rebuilding following a disaster. GAO recommended that the Mitigation Framework Leadership Group—an interagency body chaired by FEMA—create a national strategy to better plan for and invest in disaster resilience. FEMA is working to address this recommendation and plans to publish the strategy by July 2019.
Response and Recovery. In September 2018, GAO reported that the response to the 2017 disasters in Texas, Florida, and California showed progress since Hurricane Katrina in 2005. Specifically, FEMA and state officials' pre-existing relationships and exercises aided the response and helped address various challenges. However, GAO and FEMA identified challenges that slowed and complicated FEMA's response to Hurricane Maria, particularly in Puerto Rico. GAO's issued and ongoing work also identified challenges in implementing FEMA Public Assistance grants. For example, FEMA and Puerto Rico officials identified challenges with Public Assistance policies and guidance that have complicated and slowed the recovery. GAO did not make recommendations, but continues to evaluate recovery efforts and will report its findings later this year.
FEMA Workforce Management. GAO has previously reported on long-standing workforce management challenges, such as ensuring an adequately-staffed and trained workforce. For example, GAO reported in September 2018 that the 2017 disasters overwhelmed FEMA's workforce and a lack of trained personnel with program expertise led to complications in its response efforts, particularly after Hurricane Maria. While FEMA has taken actions to address several of GAO's workforce management-related recommendations since 2016, a number of recommendations remain open as the 2019 hurricane season begins. Also, GAO is currently reviewing FEMA's workforce management efforts and lessons learned from the 2017 disasters and will report its findings early next year.
What GAO Recommends
GAO has made numerous recommendations in its prior reports to FEMA designed to address the challenges discussed in this statement. As of May 2019, FEMA has addressed about half of these recommendations and GAO is monitoring FEMA's ongoing efforts.
In particular, NARA is responsible for:
issuing records management guidance covering topics such as managing electronic records;
assigning an appraisal archivist to each agency to answer agency questions about federal records management;
providing services to agencies, such as records scheduling, and working with agencies to implement effective controls over the creation, maintenance, and use of records in the conduct of agency business;
approving the disposition (destruction or preservation) of records;
providing storage facilities for agency records; and
conducting inspections or surveys of agency records and records management programs.
NARA is also responsible for reporting to Congress on the state of federal records management. It accomplishes this responsibility, in part, by requiring all federal agencies to submit an annual report to the Office of the Chief Records Officer for the federal government. As part of these annual reports, agencies are required to include three submissions:
The Senior Agency Official Records Management Report includes responses about the agency's progress toward the targets and requirements in the Managing Government Records Directive.
The Federal Email Management Report includes a self-evaluation of the agency's email management.
The Records Management Self-Assessment includes a self-evaluation of the agency's compliance with federal records management statutes, regulations, and program functions.
In addition to NARA's responsibilities, the FRA requires each federal agency to make and preserve records that document the organization, functions, policies, decisions, procedures, and essential transactions of the agency.
Effective Records Management Must Address Electronic Records, Including Email
The FRA covers documentary material, regardless of physical form or media, although, until the advent of computers, records management and archiving mostly focused on handling paper documents. However, as information is increasingly created and stored electronically, records management has had to take into account the creation of records in various electronic formats, including email messages. As such, agencies need to adapt their records management practices to manage those electronic files that may be federal records. NARA's implementing regulations and guidance, such as periodic NARA bulletins, provide direction to agencies about the management of electronic records. To ensure that the management of agency electronic records is consistent with provisions of the FRA, NARA requires each agency to maintain an inventory of all agency information systems that identifies basic facts about each system, such as technical characteristics and the electronic records it contains. NARA also requires that agencies maintain all federal records, including those in electronic format, in their systems. Further, NARA requires agencies to provide instructions to staff regarding how to maintain the agency's operational records and what to do when they are no longer needed for current business. Like other records, electronic records must be scheduled either under agency-specific schedules or pursuant to a general records schedule. Further, in order to effectively address NARA regulations, agencies are to establish policies and procedures that provide for appropriate retention and disposition of their electronic records.
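To make the scheduling and disposition concepts above concrete, the following is a minimal illustrative sketch in Python. The field names, the sample schedule item, and the retention rule are assumptions chosen for illustration; they do not reflect an actual agency schedule or any NARA system.

    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class ScheduleItem:
        item_number: str      # e.g., a hypothetical "1010-03"
        description: str
        disposition: str      # "permanent" or "temporary"
        retention_years: int  # applies to temporary records only

    def disposition_action(item: ScheduleItem, cutoff: date, today: date) -> str:
        """Return the action a records officer would take for a record under this item."""
        if item.disposition == "permanent":
            return "transfer to NARA per the approved schedule"
        eligible = cutoff + timedelta(days=365 * item.retention_years)
        if today >= eligible:
            return "destroy (retention period has elapsed)"
        return f"retain until {eligible.isoformat()}"

    item = ScheduleItem("1010-03", "Routine program correspondence", "temporary", 3)
    print(disposition_action(item, date(2016, 12, 31), date(2020, 2, 1)))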
Disposition involves transferring records of permanent, historical value to NARA for the archiving of records (preservation) and the destruction of all other records that are no longer needed for agency operations. In addition to adherence to general requirements governing electronic records, according to the electronic records management regulation, agencies are to also issue instructions to staff that specifically address retention and management of their email records. The regulation requires agencies email records to be managed as are other federal records with regard to the adequacy of documentation, recordkeeping requirements, agency records management responsibilities, and records disposition. The FRA Amendments enacted on November 26, 2014, include, among other things, disclosure requirements for official business conducted using a non-official electronic messaging account. The law states an officer or employee of an executive agency may not create or send a record using a non-official electronic messaging account unless the officer or employee (1) includes a copy to an official electronic messaging account of the officer or employee in the original creation or transmission of the record or (2) forwards a complete copy of the record to an official electronic messaging account of the officer or employee not later than 20 days after the original creation or transmission of the record. <1.2. Our Prior Work Has Addressed Electronic Records Management> In 2015, we reported that the 24 major federal departments and agencies covered by the Chief Financial Officers Act of 1990 had taken action in response to the Managing Government Records Directive, but not all of the agencies met all of the requirements. In that report, we stated that most of the agencies, including the Department of Commerce (Commerce) and the National Aeronautics and Space Administration (NASA), described plans to manage permanent electronic records, reported progress in managing permanent and temporary email records, and identified unscheduled records. We also noted that certain requirements were not fully met by a few agencies, including the National Science Foundation (NSF) and Office of Personnel Management (OPM), because these agencies were either still working on addressing the requirement, or did not view the requirement as being mandatory. Specifically, we reported that NSF did not submit a Senior Agency Official report that would have provided information to NARA on how it intended to manage permanent records electronically. In addition, we reported that NSF did not report to NARA on its possession of permanent 30-year-old records, and had not completed its identification of, or reported on, any portion of its unscheduled records. As a result, we recommended that NSF establish a date by which the agency would complete, and then report to NARA, its plans for managing permanent records electronically and its progress toward managing permanent and temporary email records in an electronic format. We also recommended that the agency complete the identification of unscheduled records stored at agency records storage facilities. NSF concurred with our recommendations and, in response, completed its plans for managing permanent records electronically and managing permanent and temporary email records in an electronic format. We verified in February 2017 that the agency reported these plans to NARA. 
For OPM, the agency had not designated their Senior Agency Official at the assistant secretary level or its equivalent because they did not view the requirement as mandatory. We recommended that the designated Senior Agency Official be at or equivalent to the level of an assistant secretary. OPM concurred with our recommendations and, in response, designated the Chief Information Officer as the Senior Agency Official with direct responsibility for ensuring that OPM efficiently and appropriately complies with all applicable records management statutes, regulations, and NARA policy. <2. Federal Agencies Policies and Procedures Did Not Fully Address Electronic Recordkeeping Requirements> The 17 agencies selected for review varied in the extent to which their records management policies, procedures, and documentation addressed 10 key requirements in the Managing Government Records Directive, the FRA and its amendments, and implementing regulations related to electronic records. Specifically, most of the selected agencies addressed the requirements related to establishing records management programs, submitting records schedules to NARA, incorporating activities for electronic records into their overall records management program, developing plans for managing permanent electronic records in an electronic format, managing email records in an electronic format, and using non-official electronic messaging. However, agencies did not fully address the requirements related to maintaining an inventory of electronic information systems, establishing controls and preservation considerations for their electronic information systems, and issuing retention and management requirements for email. <2.1. Most Agencies Established Records Management Programs and Developed Records Schedules> According to the FRA and its amendments, agencies are to establish effective records management programs, which includes developing comprehensive records schedules, in order to achieve adequate and proper documentation of the policies and transactions of the federal government and to aid in the effective and economical management of agency operations. Specifically, each agency is required to: establish and maintain an active, continuing records management program that, among other things, includes effective controls over the creation, maintenance, and use of records and submit lists and schedules of records to the Archivist of the United States that describe, among other things, when eligible temporary records must be disposed of. As shown in figure 1, the majority of the 17 selected agencies addressed these requirements. Establishing a records management program: Fourteen of the 17 selected agencies Armed Forces Retirement Home (AFRH), Consumer Financial Protection Bureau (CFPB), Commerce, U.S. Election Assistance Commission (EAC), Federal Housing Finance Agency (FHFA), Federal Trade Commission (FTC), NASA, NSF, Office of Management and Budget (OMB), Office of National Drug Control Policy (ONDCP), Overseas Private Investment Corporation (OPIC), OPM, Peace Corps, and Special Inspector General for Afghanistan Reconstruction (SIGAR) had developed policies and procedures that outlined their records management program. The agencies records management documentation discussed, among other things, the requirement for effective controls over the creation, maintenance, and use of records at the agency. However, three agencies Marine Mammal Commission, Presidio Trust, and the Morris K. Udall and Stewart L. 
Udall Foundation (Udall Foundation) did not have an active, continuing agency records management program, including documentation that described effective controls over the creation, maintenance, and use of records at the agency. All three agencies indicated that they have taken or intend to take actions to establish such a program. Marine Mammal Commission officials responsible for records management stated that the agency had engaged a contractor who completed and submitted for agency review and approval a draft policy that would govern its records management program. As of January 2020, the Executive Director stated that the commission has a signed policy and draft handbook to govern its records management program and that it is working towards full implementation and compliance by December of 2022. Presidio Trust officials responsible for records management stated that the agency intends to address the requirements and plans to have records management policies and procedures at the agency in fiscal years 2020 and 2021. Udall Foundation officials responsible for records management stated that the agency had entered into an interagency agreement with NARA for consulting services to assess its current records management environment. According to the same officials, their intent is to review NARA s recommendations and develop a plan to comply with the FRA, federal regulations, and NARA guidelines as they relate to records management. The agency did not provide an estimated date for completing these activities. Until these agencies establish an active and continuing records management program, they cannot provide assurance that, among other things, effective controls are in place over the creation, maintenance, and use of records in the conduct of current business. Submitting lists and schedules of records to the Archivist: Thirteen of the 17 selected agencies AFRH, CFPB, EAC, FHFA, FTC, Marine Mammal Commission, NASA, ONDCP, OPIC, Peace Corps, Presidio Trust, SIGAR, and the Udall Foundation had submitted a comprehensive list of records and disposition schedules to the Archivist. The remaining four agencies Commerce, NSF, OMB, and OPM had partially addressed this requirement because they had submitted only partial lists and schedules to the Archivist. Each of these agencies acknowledged they did not provide comprehensive lists of records and disposition schedules and stated they were currently working toward submitting them to the Archivist. OMB officials stated that they plan to complete this task by the end of calendar year 2019, while the other agencies did not provide an estimated date for completion. Without submitting lists of records and disposition schedules to the Archivist, Commerce, NSF, OMB, and OPM are at risk of maintaining records that are no longer relevant or needed. <2.2. Agencies Varied in Addressing Requirements for Managing Electronic Records> The Managing Government Records Directive was aimed at creating a robust records management framework for electronic records that complies with statutes and regulations. In order to ensure transparency, efficiency, and accountability, the directive instructed agencies to manage all permanent and temporary e-mail records in an accessible electronic format by December 2016 and manage all permanent electronic records in an electronic format to the fullest extent possible by December 2019. 
The directive also required NARA to develop revised guidance for transferring permanent electronic records and issue new guidance describing methods for managing, disposing of, and transferring e-mail. Accordingly, NARA regulations and guidance outline requirements for agencies to establish a framework for managing electronic records, including requirements pertaining to electronic systems and email. Additionally, the FRA Amendments described the disclosure requirements for official business conducted using non-official electronic messaging accounts. Based on our analysis, these documents identify, among other things, eight key requirements that agencies should include in their policies and procedures to ensure that they can effectively manage electronic records. These requirements are summarized in table 1. The 14 agencies with an established records management program varied greatly in the extent to which they addressed these electronic records requirements, as seen in figure 2. <2.2.1. Management Requirements> Incorporate activities for electronic records into the agency s overall records management program: Thirteen of 14 agencies that had established records management programs AFRH, Commerce, CFPB, EAC, FHFA, FTC, NASA, NSF, OPIC, ONDCP, OPM, Peace Corps, and SIGAR developed written policies and procedures that incorporated the management of electronic records into their records management program. The remaining agency OMB did not address this requirement. Staff from OMB responsible for records management stated that the Executive Office of the President s (EOP) Office of Administration is responsible for records management for all Executive Office components and has procedures that incorporate the management of electronic records into their records management program. However, the officials did not provide evidence that the existing policies and procedures incorporated the management of electronic records into their records program. Without being able to ensure that records management considerations are incorporated into the design and implementation of electronic systems, OMB risks not being positioned to properly manage records electronically. Maintain an inventory of electronic systems: Three of the 14 agencies that had established records management programs Commerce, FHFA, and SIGAR also maintained an inventory of electronic information systems that documented the information and records produced and maintained by each application. Officials responsible for records management at these agencies stated that their inventory was maintained with the agency s security plans. Additionally, three of the 14 agencies that had established a records management program FTC, NSF, and Peace Corps partially addressed the requirement, as their policies and procedures addressed some, but not all, of the necessary elements. More specifically: FTC documented various technical characteristics, such as authorizations, purpose and function of the electronic information systems, and authorized procedures for the disposition of records. However, the agency did not include the characteristics for reading and processing the records contained in the system, inputs and outputs, contents of the files and records, and cycle updates. NSF documented the categories of records in the electronic information systems, record access procedures, purpose of the systems, and retention and disposition of the system s records. 
However, the agency did not specify the technical characteristics of the systems, identify inputs and outputs, or describe update cycles. Peace Corps documented update cycles and the purpose of the electronic information systems. However, the documentation did not specify the technical characteristics necessary for reading and processing the records contained in the system, identify system inputs and outputs, define the contents of the files and records, determine restrictions on access to and use of the system, and specify how the agency ensures the timely disposition of records. According to officials responsible for records management at each of these agencies, they intend to address or would consider addressing the requirement. However, none of them provided a time frame for doing so. The remaining eight agencies AFRH, CFPB, EAC, NASA, OMB, ONDCP, OPM, and OPIC either did not maintain an inventory of electronic information systems or did not provide documentation that outlined the technical characteristics, such as identifying all inputs and outputs necessary for reading and processing records contained in the system. Records management officials at AFRH, CFPB, EAC, NASA, OPM, and OPIC stated that they intend to address the requirement, but did not provide a time frame for doing so. Staff from OMB and ONDCP responsible for records management stated that EOP s Office of Administration is responsible for records management for all components and maintains an inventory of electronic information systems. However, the officials did not provide evidence of this inventory. Without maintaining an inventory and documentation of electronic information systems used to store agency records, these agencies are at a heightened risk of records being lost and not identified and scheduled in accordance with agency records schedules. <2.2.2. Electronic System Requirements> Manage permanent electronic records in an electronic format: The Managing Government Records Directive requires each agency to develop and begin to implement plans to manage all permanent records in an electronic format. In accordance with this requirement, 12 of the 14 agencies that had established records management programs AFRH, CFPB, Commerce, FHFA, FTC, NASA, NSF, OMB, ONDCP, OPIC, Peace Corps, and SIGAR described their efforts to address the requirement in their Senior Agency Official reports to NARA. For example, these agencies described, among other things, plans on how permanent electronic records were being captured, retained, searched, and retrieved. However, two agencies EAC and OPM did not address how they plan to manage permanent electronic records in their Senior Agency Official reports or other agency documentation. EAC officials stated that they were still deciding on a solution to manage permanent records, and OPM officials stated they were planning to update policies to ensure automated systems incorporate proper records management life cycle controls. Further, neither agency provided a time frame for developing and implementing a plan. By not having a plan to manage their permanent records in an electronic format, these agencies face an increased risk that they may not be positioned to manage permanent electronic records. 
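The inventory elements discussed earlier in this section (the technical characteristics needed to read and process records, inputs and outputs, contents, update cycles, access restrictions, and disposition) can be illustrated with a minimal sketch. The fields and sample values below are assumptions for illustration, not a prescribed NARA format or any agency's actual inventory.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class SystemInventoryEntry:
        system_name: str
        purpose: str
        technical_characteristics: str   # what is needed to read and process the records
        inputs: List[str]
        outputs: List[str]
        contents: str                    # categories of files and records in the system
        update_cycle: str
        access_restrictions: str
        disposition_authority: str       # schedule item governing timely disposition

    entry = SystemInventoryEntry(
        system_name="Grants Tracking System",
        purpose="Tracks grant applications and awards",
        technical_characteristics="Relational database; records exported as PDF/A and CSV",
        inputs=["online application forms"],
        outputs=["award letters", "quarterly status reports"],
        contents="Application case files and award decision records",
        update_cycle="Daily",
        access_restrictions="Program staff only; PII handling rules apply",
        disposition_authority="Agency schedule item 200-1 (hypothetical)",
    )
    print(entry.system_name, "-", entry.disposition_authority)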
Incorporate required recordkeeping functionalities: Eight of the 14 selected agencies that established records management programs Commerce, CFPB, FHFA, FTC, NASA, NSF, ONDCP, and SIGAR had documented policies, procedures, or other records management documentation that addressed the required functionalities for recordkeeping systems. Additionally, one agency OPIC partially addressed this requirement because it included some, but not all, of the required functionality. More specifically, the agency did not identify whether the system could declare records and assign unique identifiers, capture records, maintain security, and preserve records. According to OPIC officials, the agency intends to work toward having better documentation outlining system functionalities in alignment with the requirements; however, the officials did not provide a time frame for completing this documentation. Further, five of the 14 agencies that had established records management programs AFRH, EAC, OMB, OPM, and Peace Corps did not address this requirement. Officials responsible for records management at each of these agencies stated that their records management system encompassed all of the aforementioned functionality or that the agency was working toward a full electronic records management system. However, these agencies policies and procedures did not include the required functionalities for recordkeeping systems. According to the same officials, each agency intends to have written documentation that outlines the records management functionalities; however, they did not provide a time frame in which the documentation will be completed. Without using electronic recordkeeping systems with appropriate functionalities, these agencies face increased risk of not being able to reliably access and retrieve the records needed to conduct agency business. Establish records management controls and preservation considerations: Seven of the 14 agencies that had established a records management program CFPB, Commerce, FHFA, FTC, NASA, OPM, and SIGAR included all records management controls in their electronic information systems policy and included preservation considerations in the design, development, and implementation of electronic information systems. Additionally, six of the 14 agencies that had established records management programs AFRH, EAC, OMB, ONDCP, OPIC, and the Peace Corps had policies that partially addressed establishing the records management controls for their electronic information systems. More specifically: AFRH records management documentation included information controls to ensure the reliability, authenticity, and integrity of records. However, the documentation did not define controls for usability, content, context, and structure. EAC s documentation included controls for reliability, authenticity, integrity, and usability. However, the agency did not define controls for content, context, and structure. OMB and ONDCP s documentation outlined controls for authenticity, integrity, usability and content. However, the documentation did not define controls for reliability, context, and structure. Staff stated that both offices records management was handled by the Office of Administration in the EOP and that the office had acquired an object-based data storage system that was expected to address all of the required controls. However, the offices did not provide any evidence that the new system or the associated policies and procedures would address the required controls. 
OPIC s documentation defined controls for authenticity, integrity, and usability. However, the documentation did not define controls for reliability, content, context, and structure. Peace Corps documentation included controls for reliability, authenticity, integrity, and content. However, the agency did not define controls for usability, context, and structure. Additionally, each agency s documentation did not describe how the agency ensures that records in the system are retrievable and useable for as long as needed to conduct agency business. Records management officials at each of the agencies acknowledged that not all of the controls or preservation considerations were included in their systems and that they planned to work toward implementing all of the controls; however, the agencies did not provide a time frame for documenting the controls. The remaining agency NSF did not address this requirement because its existing policies and procedures did not demonstrate that the agency had established the required controls. NSF officials stated that they intend to comply with this requirement but did not provide a time frame for doing so. Without ensuring that records management controls and preservation considerations are incorporated into electronic information systems, the agencies cannot ensure these systems can produce retrievable and useable records for as long as needed to conduct agency business. <2.2.3. Email Requirements> Manage permanent and temporary email records in an electronic format: Thirteen agencies AFRH, CFPB, EAC, FHFA, FTC, NASA, NSF, OMB, ONDCP, OPIC, OPM, Peace Corps, and SIGAR addressed this requirement. The remaining agency Commerce did not address this requirement. Officials responsible for records management at Commerce stated that they use an email management system for email, email preservation, and litigation holds. However, their policies and procedures did not show how the agency managed both permanent and temporary email records in an accessible electronic format. Until Commerce ensures that its systems are capable of managing permanent and temporary email records and have the capability to identify, retrieve, and retain these records, the agency faces an increased risk that its emails are not able to be preserved or accessed when needed. Issue retention and management requirements: Nine of the 14 agencies that had established records management programs AFRH, Commerce, CFPB, EAC, FHFA, FTC, NASA, Peace Corps, and SIGAR issued instructions or had policies on retention and management requirements for electronic mail. Additionally, two of the 14 agencies that had established records management programs OPIC and OPM had policies that partially addressed this requirement. More specifically: OPIC s policies and procedures documented that agency email messages and attachments that meet the statutory definition of a record are to be documented as an official record. However, the agency documentation did not discuss retention requirements for calendars. Officials responsible for records management stated that they intend to update the records and information management handbook to include the calendar requirement, but did not provide a time frame for updating the handbook. OPM s policies and procedures described how employees were to ensure that email records included most of the requirements, but the policies and procedures did not address retaining calendars and draft documents. 
Officials responsible for records management stated that they intend to review and update its records management policy, but did not provide a time frame for doing so. The policies of the remaining three agencies NSF, OMB, and ONDCP did not address this requirement for various reasons. NSF officials responsible for records management stated that the agency issued instructions regarding record retention and management of email to staff through memos and bulletins. However, these documents did not include instructions to staff that ensured the names and addresses of the sender, date of message, attachments, calendars, and draft documents would be retained. Additionally, staff from OMB and ONDCP responsible for records management stated that the Office of Administration within the EOP captured and managed all email on behalf of all components. According to these staff, email is permanent until the end of the presidential administration, at which time the email is transferred to NARA in accordance with each component s records schedules. However, the staff did not provide evidence that the existing policies and procedures included these instructions. By not issuing instruction to staff on retention and management requirements for email, agencies are at risk of not being able to retrieve email and its associated metadata when needed to conduct agency business. Use of non-official electronic messaging: Twelve of the 14 agencies that had established records management programs CFPB, Commerce, FHFA, FTC, NASA, NSF, OMB, ONDCP, OPM, OPIC, Peace Corps, and SIGAR had policies and procedures outlining the rules that their employees are to follow when creating records using a non-official electronic messaging account. The remaining two of 14 agencies that had established records management programs AFRH and EAC did not have written documentation describing the agencies disclosure requirements for official business conducted using non-official electronic messaging accounts. The EAC records management officials acknowledged that the agency did not outline this requirement and stated that policies and procedures were being drafted to address this requirement; however, the officials did not provide an estimated completion date. AFRH stated that it had updated its Network Rules of Behavior document and its IT information security awareness training to new employees to reflect the requirement, but we were unable to verify the updates. Without establishing rules for employees on the use of non-official electronic messaging accounts, agencies are at risk of not retaining email records sent from personal accounts. The 10 aforementioned requirements are important elements to address while establishing a framework for managing electronic records. While most of the selected agencies had established policies and procedures addressing the requirements, some had not. Until these agencies do so, they will lack assurance that electronic records are being managed in a way that promotes openness and accountability in documenting agency actions and decisions. <3. NARA Assisted Selected Agencies in Managing Electronic Records, but Did Not Ensure Agencies Addressed Identified Weaknesses NARA Issued Guidance and Provided Assistance to the Selected Agencies> NARA provided various forms of assistance to the selected agencies, which included issuing guidance regarding electronic records management, training, and professional development. 
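The email retention elements and the rule on non-official messaging accounts described in the previous section can also be illustrated with a short sketch. The field names and the 20-day check below are assumptions for illustration, not an agency email system or a NARA-prescribed format; calendars and draft documents would be captured as separate record types under the same retention instructions.

    from dataclasses import dataclass, field
    from datetime import date
    from typing import List, Optional

    @dataclass
    class EmailRecord:
        sender: str
        recipients: List[str]
        sent_date: date
        subject: str
        attachments: List[str] = field(default_factory=list)
        created_on_official_account: bool = True
        forwarded_to_official_account: Optional[date] = None  # if created on a non-official account

    def meets_forwarding_rule(rec: EmailRecord) -> bool:
        """Check the FRA Amendments' rule: a record created on a non-official account must be
        copied or forwarded to an official account within 20 days of creation or transmission."""
        if rec.created_on_official_account:
            return True
        if rec.forwarded_to_official_account is None:
            return False
        return (rec.forwarded_to_official_account - rec.sent_date).days <= 20

    rec = EmailRecord("employee@example.com", ["partner@example.org"], date(2019, 3, 1),
                      "Project update", created_on_official_account=False,
                      forwarded_to_official_account=date(2019, 3, 15))
    print(meets_forwarding_rule(rec))  # True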
In addition, NARA monitored the selected agencies compliance with records management regulations and implementation of policies, guidance, and other records management best practices through its self-assessment program. However, NARA had not ensured that any of the selected small or micro agencies that self-assessed to be at high risk of improper records management in calendar year 2017 were taking appropriate actions to improve their records management program. According to the FRA, NARA is responsible for providing guidance and assistance to federal agencies with respect to ensuring economical and effective records management, adequate and proper documentation of the policies and transactions of the federal government, and proper records disposition. In accordance with its responsibilities, NARA provided guidance and assistance to the selected agencies through various methods. All of the selected agencies stated that NARA guidance and assistance were generally helpful and that they relied on it to some extent for implementing the electronic records management requirements discussed in this report. Specifically, NARA issued guidance particular to electronic records creation, policies and procedures, management, and disposition. Officials from the selected agencies found NARA s guidance related to managing email and its December 2017 General Records Schedule to be helpful when fulfilling their responsibilities with respect to electronic records. The guidance related to managing email describes federal agencies responsibilities for email management and the Capstone approach to email management. The Records Officer at AFRH stated that the agency used guidance that described the minimum set of metadata elements that must accompany transfers of permanent electronic records to NARA. The General Records Schedule provides mandatory disposition instructions for records that are common to several or all federal agencies. An FHFA official responsible for records and information management stated this guidance was useful because it was used at the agency during regular records management activities, such as records disposition. Further, NARA provided assistance to the selected agencies such as professional development training and assigning an archivist to assist each agency. Officials responsible for records management at the selected agencies stated that NARA offers agencies records management training and professional development to federal employees and contractors. For example, NARA provides a certificate program for Federal Agency Records Officers and records management professionals to manage information collected by their agency. In addition, these officials stated that NARA offers a bi-monthly Records Information Discussion Group where individuals involved with federal records management can share their experiences and discuss the latest developments from NARA. Additionally, officials responsible for records management at the selected agencies stated that NARA assigns an archivist to each agency to field questions about federal records management, including services such as records scheduling and appraisal, and technical assistance. For example, Udall Foundation officials responsible for records management stated that the agency worked with its assigned archivist who provided direction on how to manage agency records, connected the agency with other records management subject matter experts, and fielded questions on the scheduling of agency records. <3.1. 
NARA Required Agencies to Self-Assess Their Programs, but Did Not Ensure They Have Plans to Make Improvements> In addition to providing assistance to federal agencies, NARA also has the responsibility to monitor compliance with records management regulations and implementation of NARA policies, guidance, and other records management best practices by federal agencies. One way in which NARA accomplishes this is to require federal agencies to conduct an annual self-assessment that evaluates the agency s reported compliance with federal records management statutes, regulations, and program functions and is also useful to target resources to areas needing improvement. NARA scores each agency s responses and, based on this score, determines whether an agency is at risk of not complying with statutory and regulatory records management requirements. For the self-assessments that covered calendar year 2017, four of our 17 selected agencies AFRH, Marine Mammal Commission, Presidio Trust and the Udall Foundation were assessed as being at high risk of not complying with statutory and regulatory records management requirements. See table 2 for how 16 of the 17 selected agencies scored. While NARA requires agencies to self-assess their records management programs annually, it does not ensure that agencies that scored poorly on their self-assessments develop a plan to improve their programs or monitor their progress in such efforts. According to NARA officials, after reviewing the reports the agency conducted phone interviews with staff from selected small and micro agencies to determine if there were any common factors for why they scored poorly on the self-assessments and what NARA could do to help them improve their records management programs. Given the self-assessment process was designed to measure agency compliance and to target resources to areas needing improvement, it is important for NARA to ensure the small and micro agencies that have assessed their programs as high-risk are taking appropriate actions to improve their records management programs. Until NARA requires high- risk small and micro agencies to develop plans to make necessary improvements to their record management programs and monitor their progress, it cannot be certain that these agencies are managing electronic records in accordance with governing regulations. Similarly, agencies that have not submitted self-assessments may also not be addressing statutory and regulatory records management requirements. <4. Conclusions> While most of the selected agencies addressed the key electronic recordkeeping requirements, others did not. Specifically, many agencies did not address requirements related to electronic system and email implementation, including establishing controls for their electronic information systems, incorporating preservation considerations into systems, and issuing retention and management requirements for email. Until these agencies do so, they will lack assurance that electronic records are being created, managed, retained, preserved, and disposed of in a way that improves performance and promotes openness and accountability by better documenting agency actions and decisions. NARA continues to assist the selected agencies in managing electronic records by providing guidance and training as well as monitoring their compliance with records management regulations. 
However, while NARA oversees the selected agencies' compliance through records management self-assessments, it has not ensured that the selected small and micro agencies that were at high risk of improper records management have developed plans to address weaknesses in their records management programs. <5. Recommendations for Executive Action> We are making 42 recommendations to 15 agencies. Specifically, we are making the following recommendations to NARA: The Archivist of the United States should
1. require small and micro agencies that were determined to be at high risk of not complying with statutory and regulatory records management requirements to develop plans and timelines to address their records management weaknesses (Recommendation 1), and
2. monitor the agencies' progress toward these efforts on a regular basis (Recommendation 2).
In addition, we are making 40 recommendations to 14 agencies to fully address the electronic recordkeeping requirements found in the Managing Government Records Directive and the Presidential and Federal Records Act Amendments of 2014 in their policies and procedures. Appendix II contains these recommendations. <6. Agency Comments and Our Evaluation> We requested comments on a draft of this report from NARA and the 17 other agencies included in our review. All of the agencies provided responses, as further discussed. In written comments, NARA concurred with our recommendations and stated that the agency will develop an action plan to require small and micro agencies that consistently score in the high-risk category on NARA's annual records management self-assessment to address their records management weaknesses. In addition, NARA stated that it will continue to gather data to identify where inspections, guidance, and training are needed to ensure that small and micro agencies are improving their records management programs. NARA's comments are reprinted in appendix III. Of the 17 other agencies in our review, six agencies (CFPB, Commerce, NASA, NSF, OPM, and the Udall Foundation) concurred with our recommendations; five agencies (Marine Mammal Commission, OMB, ONDCP, OPIC, and Presidio Trust) did not state whether they agreed or disagreed with our recommendations; and six agencies (AFRH, EAC, FHFA, FTC, Peace Corps, and SIGAR) stated that they had no comments on the report. Multiple agencies also provided technical comments, which we incorporated as appropriate. Among these agencies, the following six concurred with our recommendations and, in most cases, described steps planned or under way to address them: The Consumer Financial Protection Bureau provided written comments in which the agency stated that it did not object to our recommendation. The agency added that it would establish a time frame to update its current inventory of electronic systems used to store agency records, so that the inventory includes all of the required elements. CFPB's comments are reprinted in appendix IV. In written comments, the Department of Commerce concurred with our two recommendations and stated that the agency intends to take additional steps to implement them. Specifically, with regard to our recommendation on up-to-date records schedules, the agency stated that it will ensure that its records schedules are updated and submitted to NARA no later than December 2020.
Commerce also stated that, while it believes its current electronic system that manages email meets our recommendation, the agency intends to take additional steps by updating its policies and ensuring that users are correctly implementing the system to address federal recordkeeping requirements by December 2020. Commerce s comments are reprinted in appendix V. The National Aeronautics and Space Administration provided written comments in which it concurred with our recommendation. The agency added that it is currently developing a comprehensive inventory to serve as an authoritative source for identifying where the agency s electronic records reside, which should be completed by June 2021. NASA comments are reprinted in appendix VI. In written comments, the National Science Foundation concurred with our four recommendations. NSF stated that the agency is updating its schedules and intends to ensure that its records management practices and policies address current requirements and best practices for federal records management. NSF s comments are reprinted in appendix VII. The Office of Personnel Management provided written comments in which it concurred with our five recommendations and noted steps that the agency has begun or is planning to take to address them. OPM stated that, in fiscal year 2020, it intends to issue a strategic plan on the digitization and management of permanent and electronic records, update agency policies and procedures to include the required electronic information system function for recordkeeping systems, and implement the requirements of the agency s Capstone email policy. The agency also noted that, in fiscal year 2021, it plans to complete the updates needed on all agency disposition schedules and develop an inventory of all electronic information systems that store agency records. OPM s comments are reprinted in appendix VIII. In written comments, the Udall Foundation concurred with our recommendation and described the steps it plans to take in fiscal years 2020 and 2021 to establish records management policies and procedures. For example, according to the foundation, in September 2020, it plans to complete the initial build-out of required infrastructure to manage electronic records. Further, in March 2021, it plans to finalize a formal records management policy and associated procedures for creating, maintaining, and using records across the agency. The Udall Foundation s comments are reprinted in appendix IX. Further, the following five agencies did not state whether they agreed or disagreed with the recommendations: In written comments, the Office of Management and Budget did not state whether it agreed or disagreed with our recommendations. However, OMB stated that it is diligently working with NARA to revise and update its records schedule and intends to closely review and close any gaps in documentation that GAO identified. The office also provided technical comments, which we incorporated as appropriate. OMB s comments are reprinted in appendix X. In an email from the Executive Director, the Marine Mammal Commission did not state whether it agreed or disagreed with our recommendations. However, according to the executive director, the commission now has a signed records management policy that describes staff responsibilities for the management of electronic records and email as well as a draft records management handbook. 
The official also stated that the commission will continue efforts to fully implement the records management policy and procedures aiming toward full implementation and compliance by December 2022. The Commission also provided technical comments, which we incorporated as appropriate. In an email from the Acting General Counsel, the Office of National Drug Control Policy did not state whether it agreed or disagreed with our recommendations. The office provided technical comments, which we incorporated as appropriate. In an email from its GAO audit liaison, the Overseas Private Investment Corporation did not state whether it agreed or disagreed with our recommendations. However, the liaison stated that the agency intends to implement a new solution for electronic records and information management that includes the recordkeeping functionalities required by NARA. The liaison added that the agency plans to update its records and information management policies and procedures to strengthen the records management controls and preservation guidance in fiscal year 2021. In an email from the Chief Financial and Administrative Officer, Presidio Trust did not state whether it agreed or disagreed with our recommendations. However, the official stated that the trust had recently implemented the Capstone approach for email and would continue to work on records management throughout 2020 and 2021. Lastly, we received emails from the Armed Forces Retirement Home s Information Technology Manager, the U.S. Election Assistance Commission s Communication Specialist, the Federal Housing Finance Agency s Privacy Act Officer, the Federal Trade Commission s attorney representative in the Office of General Counsel, the Peace Corps Agency Records Officer, and the Special Inspector General for Afghanistan Reconstruction s Director of Information Technology. All of the emails stated that these agencies had no comments on the draft report. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Commerce; the Administrator of the National Aeronautics and Space Administration; the Archivist of the United States; the Chief Executive Officers of the Armed Forces Retirement Home and Overseas Private Investment Corporation; the Executive Directors of the U.S. Election Assistance Commission and Udall Foundation; the Directors of the Consumer Financial Protection Bureau, Federal Housing Finance Agency, National Science Foundation, Office of Management and Budget, Office of National Drug Control Policy, Office of Personnel Management and Peace Corps; the Chairman of the Federal Trade Commission, Marine Mammal Commission, and the Presidio Trust Board; the Special Inspector General for Afghanistan Reconstruction and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9342 or marinosn@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix XI. 
Appendix I: Objectives, Scope, and Methodology Our objectives were to determine the extent to which (1) selected agencies' policies and procedures address electronic recordkeeping requirements in the Managing Government Records Directive and the Presidential and Federal Records Act Amendments of 2014 and (2) NARA assisted selected agencies in managing their electronic records. To determine the agencies for our review, we identified agencies that established a Senior Agency Official for Records Management and submitted an annual report on NARA's website between fiscal year 2015 and fiscal year 2017. Of the 95 agencies that met these criteria, we removed two from consideration because they were not part of the executive branch: one was a judicial branch agency and the other was a legislative branch agency. The 93 remaining agencies to include in our review represented the following categories: (1) executive departments, (2) Executive Office of the President, and (3) independent agencies. To ensure that a variety of agencies were selected across the designated categories, we chose a selection of 17 agencies and ensured that at least two agencies were selected from the three identified categories. In order to generate this selection, we sorted the list of 93 agencies by assigned random numbers and selected the top 17 agencies in this list, while ensuring that at least two agencies from each category were selected. The selection of 17 agencies cannot be used to make generalizable statements about the full population of agencies. The 17 agencies selected were: 1. Armed Forces Retirement Home 2. Consumer Financial Protection Bureau 3. Department of Commerce 4. U.S. Election Assistance Commission 5. Federal Housing Finance Agency 6. Federal Trade Commission 7. Marine Mammal Commission 8. Morris K. Udall and Stewart L. Udall Foundation 9. National Aeronautics and Space Administration 10. National Science Foundation 11. Office of Management and Budget 12. Office of the National Drug Control Policy 13. Office of Personnel Management 14. Overseas Private Investment Corporation 15. Peace Corps 16. Presidio Trust 17. Special Inspector General for Afghanistan Reconstruction To address the first objective, we identified key requirements specified in the Federal Records Act, the Presidential and Federal Records Act Amendments of 2014, and its implementing regulations, and the Office of Management and Budget's (OMB) and NARA's Managing Government Records Directive. In selecting the requirements for our assessment, we focused on requirements related to electronic records management, such as managing permanent and temporary records, managing email records, and managing electronic records management programs. To assess whether agencies' policies and procedures addressed the key requirements, we collected and analyzed policies, procedures, and other documentation that described how agencies are positioned to effectively manage electronic records. In particular, we reviewed agencies' recordkeeping handbooks, agencies' bulletins, file plans, records schedules, and electronic system user guides. Further, we collected and reviewed documentation that described agencies' actions or planned actions to meet the specified deadlines in the Managing Government Records Directive. Specifically, we analyzed agencies' records schedules, reports from NARA's Senior Agency Official for Records Management's web page, agencies' email management system specifications, and agencies' Capstone approach to email management.
We also verified with NARA records management officials whether selected agencies submitted records schedules by the December 31, 2016, deadline specified in the Managing Government Records Directive. We assessed these documents against each of the key requirements to determine each agency s status in developing policies and procedures to address federal record keeping requirements. Subsequent to our initial assessment, we conducted interviews with records management officials from the 17 selected agencies to discuss steps taken and obtain additional supporting evidence to determine the agencies status for implementing key federal recordkeeping requirements. We followed up with those agencies that did not fully address the key federal recordkeeping requirements to determine reasons for their lack of implementation. For the second objective, we reviewed federal laws and guidance, such as the Federal Records Act, NARA regulations, and OMB s and NARA s Managing Government Records Directive, to determine NARA s role and responsibilities in assisting the 17 agencies in managing their electronic records. Subsequently, we collected and analyzed guidance and other documentation from NARA, such as the agency s Records Management Oversight and Reporting Handbook, Guidance on Senior Agency Officials for Records Management bulletin, and Frequently Asked Questions about Selecting Sustainable Formats for Electronic Records, to determine whether the documentation addressed all of the requirements needed to assist agencies in managing their electronic records. We also analyzed responses in agencies fiscal year 2017 and 2018 Senior Agency Official for Records Management reports stating what assistance the agencies would like NARA to provide. We then conducted interviews with NARA s Chief Records Officer and other agency officials regarding their interactions with the 17 agencies on the use of electronic recordkeeping and implementation of federal records management policies and practices to determine to what extent NARA assisted selected agencies in managing their electronic records. We also conducted interviews with officials from each of the 17 selected agencies to gain insight into how the agencies use the resources provided by NARA. Lastly, we reviewed NARA s annual self-assessment program that evaluates agencies reported compliance with federal records management statutes, regulations, and program functions to obtain information on how NARA was determining which agencies needed assistance with implementing their records management programs. We supplemented our document reviews and analysis with interviews of selected agency officials responsible for records management and NARA agency officials to gain an understanding of these and other relevant documents aimed at helping agencies implement their records management programs. Additionally, to identify which of our selected agencies were to be categorized as small and micro agencies, we used OMB s definition of small agencies as agencies with fewer than 6,000 employees and micro agencies as agencies having fewer than 100 employees. We conducted our work from March 2018 to February 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Recommendations to Departments and Agencies We are making a total of 40 recommendations to 14 of the 17 agencies in our review to fully address the electronic recordkeeping requirements in their policies and procedures. The Chief Executive Officer of the Armed Forces Retirement Home should take the following four actions: Establish a time frame to develop an inventory of electronic information systems used to store agency records that includes all of the required elements. (Recommendation 3) Establish a time frame to update its policies and procedures to include all of the required electronic information system functionalities for recordkeeping systems. (Recommendation 4) Establish a time frame to update the agency s policies and procedures to include the (1) following records management controls required for electronic information systems: usability, content, context, and structure and (2) required preservation mechanisms to ensure that records in its electronic recordkeeping system will be retrievable and useable. (Recommendation 5) Ensure existing policies and procedures describe the rules for using personal email accounts when conducting official agency business to include instructing the employee to (1) copy an official electronic messaging account of the employee in the original creation or transmission of the records and (2) forward a complete copy of the record to an official electronic messaging account of the employee no later than 20 days after the original creation or transmission of the record. (Recommendation 6) The Secretary of Commerce should take the following two actions: Establish a time frame to ensure all records schedules are up-to-date and submitted to NARA. The schedules should include all required information, including when eligible temporary records must be destroyed or deleted and when permanent records are to be transferred to NARA. (Recommendation 7) Ensure the electronic system that manages email provides the capabilities to manage permanent and temporary email records and to identify, retrieve, and retain records. (Recommendation 8) The Director of the Consumer Financial Protection Bureau should take the following action: Establish a time frame to develop an inventory of electronic information systems used to store agency records that includes all of the required elements. (Recommendation 9) The Executive Director of the Election Assistance Commission should take the following five actions: Establish a time frame to develop an inventory of electronic information systems used to store agency records that includes all of the required elements. (Recommendation 10) Establish a time frame to develop a plan on how the agency intends to manage permanent electronic records. (Recommendation 11) Establish a time frame to update its policies and procedures to include all of the required electronic information system functionalities for recordkeeping systems. (Recommendation 12) Establish a time frame to update the agency s policies and procedures to include the (1) following records management controls required for electronic information systems: content, context, and structure and (2) required preservation mechanisms to ensure that records in its electronic recordkeeping system will be retrievable and useable. 
(Recommendation 13) Develop a written policy that describes the rules for using personal email accounts when conducting official agency business to include instructing the employee to (1) copy an official electronic messaging account of the employee in the original creation or transmission of the records and (2) forward a complete copy of the record to an official electronic messaging account of the employee no later than 20 days after the original creation or transmission of the record. (Recommendation 14) The Chairman of the Federal Trade Commission should take the following action: Establish a time frame to update the agency s electronic information system inventory to include the following characteristics: reading and processing the records contained in the system, inputs and outputs, contents of the files and records, and cycle updates. (Recommendation 15) The Chairman of the Marine Mammal Commission should take the following action: Use recently developed policies and procedures to implement and maintain an active, continuing agency records management program that includes policies and procedures to provide for effective controls over the creation, maintenance, and use of records in the conduct of current business. (Recommendation 16) The Administrator of the National Aeronautics and Space Administration should take the following action: Establish a time frame to develop an inventory of electronic information systems used to store agency records that includes all of the required elements. (Recommendation 17) The Director of the National Science Foundation should take the following four actions: Establish a time frame to ensure all records schedules are up-to-date and submitted to NARA. The schedules should include all required information, including when eligible temporary records must be destroyed or deleted and when permanent records are to be transferred to NARA. (Recommendation 18) Establish a time frame to update the agency s electronic information system inventory to include the following characteristics: technical characteristics of the systems, identify inputs and outputs, and describe update cycles. (Recommendation 19) Establish a time frame to update the agency s policies and procedures to include all of the records management controls required for electronic information systems and the required preservation mechanisms to ensure that records in its electronic recordkeeping system will be retrievable and useable. (Recommendation 20) Develop policies and procedures for the required retention and management requirements for email, including instructions to staff to ensure that the names and addresses of the sender, date of message, attachments, calendars, and draft documents will be retained. (Recommendation 21) The Director of the Office of Management and Budget should take the following five actions: Ensure, in conjunction with the Executive Office of the President s Office of Administration, that existing policies and procedures incorporate the management of electronic records into its overall records management program. (Recommendation 22) Establish a time frame to develop an inventory of electronic information systems used to store agency records that includes all of the required elements. (Recommendation 23) Establish a time frame to update its policies and procedures to include all of the required electronic information system functionalities for recordkeeping systems. 
(Recommendation 24) Establish a time frame to ensure, in conjunction with the Office of Administration, that policies and procedures include the (1) following records management controls required for electronic information systems: reliability, context, and structure and (2) required preservation mechanisms to ensure that records in its electronic recordkeeping system will be retrievable and useable. (Recommendation 25) Ensure, in conjunction with the Office of Administration, that existing policies and procedures include the required retention and management requirements for email. (Recommendation 26) The Director of the Office of National Drug Control Policy should take the following three actions: Establish a time frame to develop an inventory of electronic information systems used to store agency records that includes all of the required elements. (Recommendation 27) Establish a time frame to ensure, in conjunction with the Office of Administration, that policies and procedures include the (1) following records management controls required for electronic information systems: reliability, context, and structure; and (2) required preservation mechanisms to ensure that records in its electronic recordkeeping system will be retrievable and useable. (Recommendation 28) Ensure, in conjunction with the Office of Administration, that existing policies and procedures include the required retention and management requirements for email. (Recommendation 29) The Director of the Office of Personnel Management should take the following five actions: Establish a time frame to ensure that all records schedules are up-to- date and submitted to NARA. The schedules should include all required information, including when eligible temporary records must be destroyed or deleted and when permanent records are to be transferred to NARA. (Recommendation 30) Establish a time frame to develop an inventory of electronic information systems used to store agency records that includes all of the required elements. (Recommendation 31) Establish a time frame to develop a plan to manage permanent electronic records. (Recommendation 32) Establish a time frame to update its policies and procedures to include all of the required electronic information system functionalities for recordkeeping systems. (Recommendation 33) Establish a time frame to update the agency s policies and procedures on retention and management for email to include retaining electronic calendars and draft documents. (Recommendation 34) The Chief Executive Officer of the Overseas Private Investment Corporation should take the following four actions: Establish a time frame to develop an inventory of electronic information systems used to store agency records that includes all of the required elements. (Recommendation 35) Establish a time frame to develop policies and procedures that define required electronic information system functionalities for recordkeeping systems including declaring records and assigning unique identifiers, capturing records, maintaining security, and preserving records. (Recommendation 36) Establish a time frame to update the agency s policies and procedures to include the (1) following records management controls required for electronic information systems: reliability, content, context, and structure; and (2) required preservation mechanisms to ensure that records in its electronic recordkeeping system will be retrievable and useable. 
(Recommendation 37) Establish a time frame to update the agency's policies and procedures on retention and management for email to include policies for retaining electronic calendars. (Recommendation 38) The Director of the Peace Corps should take the following three actions: Establish a time frame to update the agency's electronic information systems inventory to (1) specify technical characteristics necessary for reading and processing the records contained in the system, (2) identify system inputs and outputs, (3) define the contents of the files and records, (4) determine restrictions on access and use, and (5) specify how the agency ensures the timely disposition of records. (Recommendation 39) Establish a time frame to update its policies and procedures to include all of the required electronic information system functionalities for recordkeeping systems. (Recommendation 40) Establish a time frame to update the agency's policies and procedures to include (1) following records management controls required for electronic information systems: usability, context, and structure and (2) required preservation mechanisms to ensure that records in its electronic recordkeeping system will be retrievable and useable. (Recommendation 41) The Executive Director of the Udall Foundation should take the following action: Establish a time frame to develop and maintain an active, continuing agency records management program that includes policies and procedures to provide for effective controls over the creation, maintenance, and use of records in the conduct of current business. (Recommendation 42) Appendix III: Comments from the National Archives and Records Administration Appendix IV: Comments from the Consumer Financial Protection Bureau Appendix V: Comments from the Department of Commerce Appendix VI: Comments from the National Aeronautics and Space Administration Appendix VII: Comments from the National Science Foundation Appendix VIII: Comments from the Office of Personnel Management Appendix IX: Comments from the Morris K. Udall and Stewart L. Udall Foundation Appendix X: Comments from the Office of Management and Budget Appendix XI: GAO Contact and Staff Acknowledgments <7. GAO Contact and Staff Acknowledgments> Nick Marinos, (202) 512-9342, marinosn@gao.gov In addition to the individual named above, Marisol Cruz Cain (Assistant Director), Anjalique Lawrence (Assistant Director), Elena Epps (Analyst-in-Charge), Roger Bracy, Kami Brown, Christopher Businsky, Alan Daigle, Nancy Glover, Charles Hubbard, Lee McCracken, Brian Palmer, and Monica Perez-Nelson made significant contributions to this report. Why GAO Did This Study
The Federal Records Act (FRA), a subsequent directive, and NARA regulations establish requirements for agencies to ensure the transparency, efficiency, and accountability of federal records, including those in electronic form. In addition, NARA plays an important role in overseeing and assisting agencies' records management efforts.
GAO was asked to evaluate federal agencies' implementation of the aforementioned requirements related to electronic records. The objectives were to determine the extent to which (1) selected agencies' policies and procedures address the electronic recordkeeping requirements in the Managing Government Records Directive and the Presidential and FRA Amendments of 2014 and (2) NARA assisted selected agencies in managing their electronic records. To do so, GAO selected 17 agencies and reviewed their records management policies and procedures. GAO also reviewed laws and requirements pertaining to NARA's roles and responsibilities for assisting agencies in managing their electronic records. Further, GAO analyzed NARA guidance and other documents that discussed NARA's efforts in carrying out these responsibilities.
What GAO Found
Seventeen agencies GAO selected for review varied in the extent to which their policies and procedures addressed the electronic recordkeeping requirements in the Managing Government Records Directive and the Federal Records Act (FRA) and its amendments. More specifically, 14 of the 17 agencies established records management programs, while three agencies did not. Of those 14 agencies with established records management programs, almost all addressed requirements related to incorporating electronic records into their existing programs, but many did not have policies and procedures to fully incorporate recordkeeping functionalities into electronic systems, establish controls and preservation considerations for systems, and issue instructions on email requirements (see table).
NARA provided guidance and assistance to the selected agencies, including guidance on electronic records management and training. All of the agencies stated that the assistance was generally helpful and that they relied on it to some extent for implementing the key requirements discussed in this report. Further, NARA oversaw the selected agencies' implementation of federal records management regulations through their self-assessment program. However, NARA had not ensured that the selected small or micro agencies that self-assessed to be at high risk of improper records management in calendar year 2017 were taking appropriate actions to make improvements to their records management programs. NARA officials stated they conduct follow-up with the agencies that report poor scores, but they do not proactively require the agencies to address their weaknesses. Until NARA requires these agencies to develop plans to make necessary improvements, these agencies will likely miss important opportunities to improve their records management practices.
What GAO Recommends
GAO is making 40 recommendations to 14 of the 17 selected agencies to improve their management of electronic records. GAO is also recommending that NARA (1) require high-risk smaller agencies to create improvement plans and (2) monitor progress on a regular basis. Six agencies, including NARA, agreed with the recommendations, while 11 did not state whether they agreed or disagreed, or had no comments. |
<1. Background> This section describes (1) U.S. climate risks and related impacts, (2) enhancing climate resilience using a risk management strategy, (3) GAO's Disaster Resilience Framework, and (4) benefits and costs of climate resilience projects. <1.1. U.S. Climate Risks and Related Impacts> Climate change poses risks to many U.S. environmental and economic systems, according to USGCRP's Fourth National Climate Assessment. For example, high temperature extremes, heavy precipitation events, high-tide flooding events along the U.S. coastline, ocean acidification and warming, and forest fires in the western United States and Alaska have been observed and are all projected to continue to increase. In contrast, land and sea ice cover, snowpack, and surface soil moisture have been declining and are expected to continue to decline in the coming decades. Climate change is also altering the characteristics of many extreme weather and climate-related events, according to the Fourth National Climate Assessment. Some of these events have already become more frequent, intense, widespread, or of longer duration, and many are expected to continue to increase or worsen. Furthermore, according to the assessment, many places are subject to more than one climate-related impact. Examples include extreme rainfall combined with coastal flooding, or drought coupled with extreme heat. The compounding effects of these impacts result in increased risks to people, infrastructure, and interconnected economic sectors. According to the Fourth National Climate Assessment, without significant reductions in global greenhouse gas emissions and regional efforts to pursue climate resilience, climate change is expected to cause substantial losses to infrastructure and property and impede the rate of economic growth over this century. The potential for losses in some economic sectors could reach hundreds of billions of dollars per year by the end of this century, according to the assessment. Future climate risks are subject to several sources of uncertainty, as identified by USGCRP's Fourth National Climate Assessment. According to the assessment, climate scientists find varying ranges of uncertainty in many areas, including observations of climate variables, the analysis and interpretation of those measurements, the development of new observational instruments, and the use of computer-based models of the processes governing Earth's climate system. According to the assessment, the largest uncertainty in projecting future climate risks is the level of greenhouse gas emissions going forward, because the level of emissions depends on economic, political, and demographic factors that can be difficult to predict with confidence far into the future. <1.2. Enhancing Climate Resilience Using a Risk Management Strategy> According to the Fourth National Climate Assessment, enhancing climate resilience entails a continuing risk management process through which individuals and organizations become aware of and assess risks and vulnerabilities from climate and other drivers of change, take actions to reduce those risks, and learn over time. In December 2016, we reported on a risk management strategy that may help guide federal climate resilience efforts. Enterprise risk management can help federal agencies identify, assess, and manage risks, such as preparing for and responding to natural disasters.
In our report, we identified six essential elements of enterprise risk management: (1) aligning the enterprise risk management process to goals and objectives, (2) identifying risks, (3) assessing risk, (4) selecting a risk response based on risk appetite, (5) monitoring risks to see if responses are successful, and (6) communicating and reporting on risks. For example, we reported that assessing risks involves considering both the likelihood of the risk and the impact of the risk on the mission to help prioritize risk response. We also reported that selecting a risk treatment response involves leaders reviewing the prioritized list of risks and selecting the most appropriate treatment strategy to manage the risk. <1.3. GAO s Disaster Resilience Framework> In October 2019, we issued the Disaster Resilience Framework to serve as a guide for analysis of federal action to facilitate and promote resilience to natural disasters. The principles in this framework can help identify opportunities to enhance federal efforts to promote disaster resilience, including building resilience to climate change. According to the framework, strategic resilience goals integrated across relevant national strategies can help decision makers work toward a common vision and help ensure focus on a wide variety of opportunities to reduce disaster risk. Federal efforts can focus attention on disaster risk reduction by creating resilience goals in all relevant national strategies and linking those goals to an overarching strategic vision. Federal efforts can also facilitate coordination and promote governance approaches that mitigate fragmentation by requiring or funding mechanisms to enhance the continuity of different efforts across jurisdictions. In addition, because much of the nation s infrastructure is not owned and operated by the federal government, many resilience-related decisions ultimately are made by nonfederal actors, such as the states, and those decision makers face competing priorities. Incentives in the form of federal regulatory requirements or as conditions of federal grant programs and cooperative agreements can help promote investment in disaster risk reduction. As shown in figure 1, the framework is organized around three broad overlapping principles and a series of questions to guide analysis that can help users consider opportunities to enhance federal efforts to promote disaster resilience. Each of the principles includes more specific sets of actions that those who oversee or manage federal efforts can consider when analyzing opportunities to enhance national disaster resilience. For example, according to the framework, bringing together disparate agency missions and resources that support disaster risk reduction can help to build a national culture of resilience. Accordingly, federal efforts can (1) facilitate coordination across programs, (2) facilitate the combination of federal funding streams, and (3) leverage the expertise of nonfederal partners. <1.4. Benefits and Costs of Climate Resilience Projects> Information on the benefits and costs of climate resilience projects suggests that such projects can convey benefits, such as protecting life and property from climate hazards, according to the Fourth National Climate Assessment and other reports we reviewed. 
According to the Fourth National Climate Assessment, information on benefits is lacking in many sectors, though some information exists on the benefits and costs of resilience efforts in certain sectors, such as resilience efforts in coastal areas, resilience efforts designed to protect against riverine flooding (i.e., flooding that occurs when river flows exceed the capacity of the river channel), and resilience efforts related to agriculture at the farm level. According to this assessment, some of the actions in these sectors, at least in some locations, appear to have large benefit-cost ratios both in addressing current variability and in preparing for future change. However, benefits may not exceed costs in some instances. According to the Fourth National Climate Assessment, more research is needed to comprehensively assess the benefits of specific strategies that individuals and organizations are considering. Similarly, several other reports we reviewed also suggest that projects can convey benefits such as protecting life and property from climate hazards. For example, a 2018 interim report by the National Institute of Building Sciences estimated that benefits to society (i.e., homeowners and communities) would exceed costs for several types of resilience projects by protecting lives and property and preventing other losses, though precise benefits are uncertain. Specifically, this interim report examined a sample of hazard mitigation grants awarded by FEMA, the Economic Development Administration, and the Department of Housing and Urban Development (HUD) from 1993 through 2016 to address various hazards. These hazards included fires in the wildland-urban interface (i.e., fires in areas where homes are built near or among lands prone to wildland fire), hurricane- and tornado-force winds, and riverine floods. According to the interim report, for every grant dollar the federal government spent across the projects examined in the report, over time, society is estimated to accrue benefits amounting to the following: About $3 on average from projects addressing the effects of fire in the wildland-urban interface, with most benefits (approximately 70 percent) coming from the protection of property (i.e., avoiding property losses). About $5 on average from projects to address hurricane- and tornado- force winds, with most benefits (approximately 90 percent) coming from the protection of lives. This includes avoiding deaths, nonfatal injuries, and cases of post-traumatic stress. About $7 on average from projects that buy out buildings prone to riverine flooding, with most benefits (approximately 65 percent) coming from the protection of property. The interim report also projected that society could accrue benefits amounting to about $11 on average for every dollar invested in designing new buildings to meet the 2018 International Building Code and the 2018 International Residential Code the model building codes developed by the International Code Council with most benefits (about 45 percent) coming from the protection of property. The interim report has been cited by the Congressional Budget Office, in congressional hearings, and in other arenas to describe the benefits of investing in resilience. However, the benefit-cost ratios provided in the interim report are based on a relatively narrow set of disaster-loss data, and the report is not comprehensive. 
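The benefit-cost ratios above simply compare the value of expected benefits, such as avoided property losses and protected lives, accrued over time to the cost of a project. To illustrate the underlying arithmetic only, the following minimal sketch, written in Python, discounts a constant stream of hypothetical avoided losses and divides it by an upfront project cost. All of the inputs (the project type, annual avoided loss, discount rate, time horizon, and cost) are assumptions for illustration; they are not drawn from the National Institute of Building Sciences interim report or from any federal grant data.

```python
# Illustrative sketch only. The project, avoided-loss estimate, discount rate,
# time horizon, and cost below are hypothetical; they are not figures from the
# National Institute of Building Sciences interim report or any federal grant.

def present_value(annual_benefit, discount_rate, years):
    """Present value of a constant annual benefit stream (standard annuity formula)."""
    return annual_benefit * (1 - (1 + discount_rate) ** -years) / discount_rate

def benefit_cost_ratio(annual_benefit, discount_rate, years, upfront_cost):
    """Ratio of discounted benefits to upfront cost; a ratio above 1 means benefits exceed costs."""
    return present_value(annual_benefit, discount_rate, years) / upfront_cost

if __name__ == "__main__":
    # Hypothetical buyout of flood-prone buildings: a $1 million project assumed
    # to avoid $150,000 per year in expected flood losses over 30 years.
    ratio = benefit_cost_ratio(annual_benefit=150_000, discount_rate=0.03,
                               years=30, upfront_cost=1_000_000)
    print(f"Estimated benefit-cost ratio: {ratio:.1f}")  # about 2.9 with these inputs
```

As the discussion that follows notes, such ratios are sensitive to how costs are estimated and how benefits are projected and valued, so the same project can yield different ratios under different assumptions.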
In addition to conveying climate resilience benefits, such as protecting lives and property, climate resilience projects can also convey co- benefits benefits beyond the primary protective function of resilience projects according to the Fourth National Climate Assessment and several reports we reviewed. For example, according to a report by the National Academies, restoring coastal wetlands a type of nature-based resilience project may reduce an area s vulnerability to coastal storms but could also provide co-benefits such as increasing biodiversity by creating new breeding grounds for fish and improving recreation and tourism amenities, thereby expanding the total potential benefits of a project. USGCRP officials we interviewed also told us that projects can convey a broad range of other co-benefits, including improvements in economic opportunity, human health, equity, and national security. However, according to the Fourth National Climate Assessment, quantifying these co-benefits can be difficult because different people value benefits differently. Several factors can influence the likelihood that the benefits from resilience projects exceed the cost of implementing and maintaining the projects. For example, benefits from climate resilience projects implemented in high-risk locations, such as areas more exposed to hurricanes, are likely to be higher and therefore exceed project costs than projects implemented in other, lower-risk areas, according to one report we reviewed. Similarly, projects that protect high-value assets may also be more likely to have benefits that exceed costs, according to this report. Several factors that affect the extent to which project benefits exceed costs remain uncertain, according to several reports. For example, according to the Fourth National Climate Assessment, benefit cost ratios can have large uncertainties associated with estimates of costs, the projection of benefits, and the economic valuation of benefits. Furthermore, according to the assessment, the benefits and costs of resilience projects are larger in scenarios with high emissions, but the level of future emissions remains uncertain. <2. The Federal Government Has Invested in Projects That May Convey Some Climate Resilience Benefits but Does Not Have a Strategic Investment Approach> Individual federal agencies have provided ad hoc funding for projects that may convey some climate resilience benefits, but our past work demonstrates an absence of government-wide strategic planning for climate change, and the federal government has not implemented key recommendations to improve strategic planning for climate resilience. In addition, the federal government does not have a strategic federal approach for investing in the highest priority climate resilience projects that includes periodically identifying and prioritizing projects as supported by enterprise risk management practices and our Disaster Resilience Framework. <2.1. The Federal Government Has Invested in Projects That May Convey Some Climate Resilience Benefits> Federal Mainstreaming Efforts Some agencies have made efforts to manage climate change risk within existing programs and operations a concept known as mainstreaming and these efforts may convey climate resilience benefits. For example, an agency planning to build a seawall to protect a coastal facility might build it higher to account for rising sea level projections. Alternatively, the U.S. 
military may consider climate change as part of existing construction plans on coastal installations by, for example, raising a building to include a sacrificial first floor and protecting critical assets such as computer servers from potential flooding by locating them on the building s higher floors. The agency may use the sacrificial floor for parking. According to the U.S. Global Change Research Program s Fourth National Climate Assessment, a significant portion of climate risk can be addressed by mainstreaming, which can provide many climate resilience benefits. However, according to the assessment, the practice may prove insufficient to address the full range of climate risks. Additional, strategic federal investments in large-scale projects such as those discussed in our report may also be needed to manage some of the nation s most significant climate risks, since climate change cuts across agency missions and poses fiscal exposures larger than any one agency can manage. aim to reduce flooding and storm damage. These and other projects have the potential to convey climate resilience benefits by protecting communities from damage from flooding, storms, and other extreme weather events that may be exacerbated by climate change. The Corps of Engineers policy is to integrate climate change preparedness and resilience in all activities a concept known as mainstreaming. However, the Corps civil works program balances several diverse missions related to navigation, ecosystems management, and flood control, among others. As a result, while projects may individually incorporate consideration of climate change risk and resilience, they may not be prioritized to address the most severe expected future climate change risks. Even with ad hoc agency efforts, federal investment in projects specifically designed to enhance climate resilience to date has been limited. As stated in our Disaster Resilience Framework, most of the federal government s efforts to reduce disaster risk are reactive, and many revolve around disaster recovery. To a lesser extent, the federal government also invests in activities to reduce risks not associated with a specific, recent disaster. As we reported in April 2018, since 1993 OMB has reported more than $154 billion spread across the government for federal activities to understand and address climate change. However, over that time frame, OMB reported only minimal funding directed specifically at climate resilience projects. <2.2. Our Past Work Shows an Absence of Government-wide Strategic Planning for Climate Change> We have issued multiple reports that review the federal government s approach to addressing climate change, and these reports demonstrate an absence of government-wide strategic planning for climate change. Specifically, our past work identifies limitations related to strategic planning for climate change that include a lack of coordination, prioritization, and consolidation of strategic priorities. For example, we reported in October 2009 that the federal government s emerging climate resilience activities were carried out in an ad hoc manner and were not well coordinated across federal agencies. In May 2011, we reported that federal officials did not have a shared understanding of strategic government-wide priorities related to climate change. In the same report, we found that there was not a consolidated set of strategic priorities integrating climate change programs and activities across the federal government. 
In our March 2019 high-risk update, we reported that one area of government-wide action needed to reduce federal fiscal exposure is in the federal government s role as the leader of a strategic plan that coordinates federal efforts and informs state, local, and private-sector action. For this 2019 high-risk update, we assessed the federal government s progress since 2017 related to climate change strategic planning against five criteria and found that the federal government had not met any of the criteria for removal from the high-risk list. Specifically, since GAO s 2017 high-risk update, four ratings regressed to not met and one remained unchanged as not met. (See fig. 2). We have made 62 recommendations related to the climate change high-risk area, 17 of which address improving federal climate change strategic planning. As of August 2019, no action had been taken toward 14 of those 17 recommendations one dating back to 2003. Executive Order 13783, Promoting Energy Independence and Economic Growth (Mar. 28, 2017). Executive Order 13653, Preparing the United States for the Impacts of Climate Change (revoked) (Nov. 6, 2013). Executive Order 13834, Efficient Federal Operations (May 17, 2018). Executive Order 13693, Planning for Federal Sustainability in the Next Decade (revoked) (Mar. 19, 2015). The Mitigation Framework Leadership Group, an intergovernmental coordinating body, finalized the National Mitigation Investment Strategy in August 2019. However, as noted, our review of the strategy indicates that it does not include a detailed strategic approach to prioritize investments for disaster risk reduction that explicitly accounts for future climate change risks. According to FEMA officials, the strategy sets goals and recommendations that set the stage for developing approaches to address changing conditions. GAO, Climate Change: Improvements Needed to Clarify National Priorities and Better Align Them with Federal Funding Decisions, GAO-11-317 (Washington, D.C.: May 20, 2011); Climate Change: Information on Potential Economic Effects Could Help Guide Federal Efforts to Reduce Fiscal Exposure, GAO-17-720 (Washington, D.C.: Sept. 28, 2017); and Climate Change: Analysis of Reported Federal Funding, GAO-18-223 (Washington, D.C.: Apr. 30, 2018). GAO, High-Risk Series: Substantial Efforts Needed to Achieve Greater Progress on High-Risk Areas, GAO-19-157SP (Washington, D.C.: Mar. 6, 2019). <2.3. The Federal Government Does Not Have a Strategic Approach for Investing in Climate Resilience Projects> The federal government does not have a strategic approach for investing in climate resilience projects that is, an intentional, cross-cutting approach in which the federal government identifies and prioritizes projects for the purpose of enhancing climate resilience. Federal agencies may take actions to invest in projects with potential climate resilience benefits related to their own mission areas using funds from federal programs designed for other purposes. In addition, the National Climate Assessment provides high-level information on what is known about observed and projected climate risks in the United States. However, no federal entity looks holistically at the federal government s investments to strategically prioritize projects to ensure they address the nation s most significant climate risks and provide the highest net benefits relative to other potential projects. 
Several stakeholders told us that the federal government s emphasis has been on funding post-disaster efforts instead of funding resilience projects before a disaster occurs. This is consistent with findings from our July 2015 report that most federal funding for hazard mitigation is only available after a disaster. In addition, according to FEMA officials, some of the agency s hazard mitigation programs are designed to empower state and local governments to determine their mitigation funding priorities, and these state and local priorities may or may not align with the federal interest. Although we did not identify a government-wide strategic approach specifically for investing in climate resilience projects, the National Mitigation Investment Strategy a national effort under way to plan for pre-disaster resilience investments represents a potential cross-agency vehicle for climate resilience planning. However, the strategy does not specifically address climate change or identify and prioritize specific climate resilience projects. In July 2015, we recommended that the Mitigation Framework Leadership Group a multi-agency group led by FEMA to promote coordination of hazard mitigation efforts across the federal government establish an investment strategy to identify, prioritize, and guide federal investments in disaster resilience and hazard mitigation-related activities and make recommendations to the President and Congress on how the nation should prioritize future disaster resilience investments. In response, in August 2019, the Mitigation Framework Leadership Group released a national strategy for advancing mitigation investment in the United States and increasing the nation s resilience to natural hazards. The strategy acknowledges our 2015 recommendation and articulates several high-level recommendations that relate generally to climate resilience, including aligning program requirements and incentives. Specifically, the strategy states that successful risk mitigation requires shared priorities, consistent approaches, aligned funding, expanded incentives, and coordination between the federal government and nonfederal partners (i.e., state, local, tribal, and territorial governments and nonfederal organizations). However, the strategy does not explicitly address future climate change risks or include a strategic approach to identify and prioritize specific climate resilience projects for federal investment. According to FEMA officials, the strategy provides an overarching framework that can accommodate strategic investment related to changing conditions that impact disaster resilience. FEMA officials also told us that specific implementation strategies will be addressed in a later phase of the high- level strategy. <2.4. A Strategic Approach for Identifying and Prioritizing Resilience Projects Could Better Target Federal Investment at the Greatest Climate Risks> While current federal climate resilience investments are ad hoc and not aligned with the nation s most significant climate risks, our past work and other sources show that an iterative and strategic risk-informed approach for identifying and prioritizing climate resilience projects could better target federal investment. In particular, in December 2016, we reported that enterprise risk management which involves identifying and assessing risks, as well as preparing appropriate risk responses can help federal agencies manage risks, such as preparing for and responding to natural disasters. 
Elements of enterprise risk management call for reviewing a prioritized list of risks and selecting the most appropriate strategy to manage those risks. Furthermore, according to our 2019 Disaster Resilience Framework, the integration of strategic resilience goals across relevant national strategies can help decision makers work toward a common vision and help ensure focus on a wide variety of opportunities to reduce disaster risk. For example, our framework states that in some cases federal efforts have been hindered by multiple agencies pursuing individual efforts without overarching strategies. In addition, the National Academies highlights the importance of an iterative approach to prioritizing climate resilience actions. According to the National Academies, many current and future climate change impacts require immediate actions to improve the nation s ability to adapt, and possible options need to be prioritized based on where and when urgent action is needed. In addition, because knowledge about future impacts and effectiveness of response options will evolve, policy decisions to manage climate change risks can be improved if they are made in an iterative fashion, according to the National Academies. However, no federal entity has been established to implement a strategic investment approach for climate resilience that includes identifying and prioritizing projects for federal investment in an iterative fashion. According to FEMA officials, without Congressional direction, no federal entity will identify and prioritize climate resilience projects for federal investment because existing federal programs are not designed to serve this purpose. Furthermore, investments by federal agencies are made according to their missions and operations within the federal investment guidelines put forth by OMB, according to officials from the Mitigation Framework Leadership Group. These officials explained that by law, agencies cannot make other investments, which hinders a more formalized climate resilience investment strategy at the agency level. Several stakeholders told us that a strategic approach would allow for a more purposeful, coordinated, and comprehensive federal response to climate risks. Such an approach could help target federal resources toward high-priority projects namely, those that address the nation s most significant climate risks and provide the greatest expected net benefits relative to other potential projects that are not already addressed through existing federal programs. In particular, a strategic and iterative risk-informed approach for identifying and prioritizing climate resilience projects for federal investment could supplement the agency- specific approaches to climate resilience investment currently carried out by individual agencies with different statutes, goals, constituencies, and funding streams. Such an approach presents an opportunity to enhance the nation s resilience to climate change and reduce federal fiscal exposure. <3. 
Six Key Steps Provide an Opportunity for the Federal Government to Strategically Identify and Prioritize Climate Resilience Projects> Six key steps provide an opportunity for the federal government to strategically identify and prioritize climate resilience projects, based on our review of reports (including a National Academies report and the Fourth National Climate Assessment) that discuss adaptation as a risk management process, international standards, our past work (including our enterprise risk management criteria), and interviews with stakeholders. The six key steps are (1) defining the strategic goals of the climate resilience investment effort and how the effort will be carried out, (2) identifying and assessing high-risk areas for targeted resilience investment, (3) identifying potential project ideas, (4) prioritizing projects, (5) implementing high-priority projects, and (6) monitoring projects and climate risks. See Figure 3. We use domestic and international examples the Louisiana coastal master planning effort and the Canadian Disaster Mitigation and Adaptation Fund (DMAF), respectively and the aforementioned sources to illustrate the six key steps for identifying and prioritizing climate resilience projects (see text box). Domestic and International Examples of Approaches for Identifying and Prioritizing Climate Resilience Projects Two efforts the Louisiana coastal master planning effort and the Canadian Disaster Mitigation and Adaptation Fund illustrate approaches for identifying and prioritizing resilience projects. The scale and purpose of each of these approaches is distinct, but both seek to identify projects that help enhance community resilience to several emerging risks, including risks associated with climate change. Louisiana coastal master planning process: In 2005, the state of Louisiana consolidated coastal planning efforts previously carried out by multiple state and local entities into a single effort carried out by the Coastal Protection and Restoration Authority (CPRA). In this effort, CPRA periodically identifies high-priority coastal resilience projects designed to reduce flood risk and coastal land loss. With involvement from stakeholders from private industry and local communities, CPRA has published three coastal master plans in which it identified and evaluated potential projects. In Louisiana s 2017 Comprehensive Master Plan for a Sustainable Coast, CPRA identified $50 billion in high-priority projects to be implemented as funding becomes available. Canadian Disaster Mitigation and Adaptation Fund: In 2018, the federal government of Canada launched the Disaster Mitigation and Adaptation Fund (DMAF), which seeks to enhance resilience by addressing the potential impacts of climate change in Canada. Canada s DMAF is a financial assistance program that provides funds to other entities (e.g., Canadian provinces and territories, not-for-profit and for-profit organizations, local governments, and indigenous communities) for implementation. This US$1.5 billion fund will provide contributions over 10 years for large-scale, nationally significant projects that address a myriad of risks triggered by natural hazards such as floods, wildfires, and droughts. The DMAF also encourages partnerships between eligible recipients, according to a DMAF official. Canada s DMAF effort is under way. <3.1. Step 1. 
Define the Climate Resilience Investment Effort s Strategic Goals and How the Effort Will Be Carried Out> Reports, our past work, stakeholders, and our examples from Louisiana and Canada illustrate the importance of several steps to define the climate resilience investment effort, including defining the efforts strategic goals, designating an entity and providing authority for it to lead the effort, identifying participants and defining responsibilities, and determining how the effort will be funded. <3.1.1. Defining the Strategic Goals of the Effort> Clear strategic goals can yield more effective decisions about which projects to prioritize and increase the likelihood that projects are strategically aligned around a common purpose. In October 2011, we reported that strategic goals explain the purpose of agency programs and the results that they intend to achieve. Our domestic and international examples also demonstrate the importance of having defined strategic goals. Specifically, Louisiana s Coastal Protection and Restoration Authority (CPRA) defined five goals to guide its coastal master planning effort: reducing economic losses to homes and business from storm surge-based flooding, promoting sustainable coastal ecosystems, providing habitats for a variety of commercial and recreational activities across the coast, sustaining coastal Louisiana s cultural heritage, and maintaining a viable working coast to support businesses and industry. The goal of Canada s DMAF is to strengthen the resilience of Canadian communities through investments in large-scale infrastructure projects of national importance including natural infrastructure projects enabling these communities to better manage the risk associated with current and future natural hazards such as floods, wildfires, and droughts. This includes natural hazards that may be exacerbated by climate change. Several stakeholders we interviewed identified potential strategic goals for a federal climate resilience investment effort, including increasing the resilience of communities to climate hazards and reducing federal fiscal exposure to climate change. Furthermore, several stakeholders explained that a goal of federal resilience investment should include helping communities that do not have the capacity to implement climate resilience projects on their own for various reasons such as limited funds to plan and implement such projects. According to one stakeholder we interviewed, because the federal role in investing in climate resilience projects could be broad, it will be necessary to precisely define the nature and scope of the funding effort in a way that is manageable, potentially restricting funding to resilience projects that would not occur without federal intervention. For example, federal resilience investment could focus on large-scale, long-term climate resilience projects that are otherwise too big, expensive, or cross-jurisdictional for local, state, or private-sector actors to address, according to several stakeholders. <3.1.2. Designating an Entity and Providing Authority for It to Lead the Effort> Based on our review of several reports and past GAO work and discussions with several stakeholders, various types of entities could lead a federal climate resilience investment effort. 
This could include various organizational arrangements such as a federal entity or interagency collaborative effort task forces, special councils, interagency offices, or interagency working groups led by agency and department heads or program-level staff. According to one stakeholder we interviewed, a federal climate resilience investment effort would need a high level of political support to be effective. Several other stakeholders explained that clear authority for the entity to conduct its work would be important to provide legitimacy for the effort and create buy-in among participants and the public. Authority for conducting a resilience effort could be provided via a legislative mandate or executive order. For example, in the case of Louisiana, the state legislature passed a law establishing CPRA, a state agency, in 2005 and providing it with a mandate to develop, implement, and enforce a comprehensive coastal protection and restoration master plan. <3.1.3. Identifying Participants and Defining Responsibilities> Identifying participants and defining responsibilities could involve identifying an interdisciplinary team of experts to help evaluate climate risk, generate project ideas, and evaluate projects. According to several stakeholders, experts should have a breadth of expertise in disciplines such as climate science, resilience, social sciences (e.g., economics), engineering, finance, urban planning, infrastructure, and knowledge of affected systems (e.g., transportation systems, public health, and ecosystems). Several reports and several stakeholders also identified the importance of involving representatives from the communities and groups impacted by potential projects, explaining that doing so can increase support for the process and help ensure projects meet communities needs. For example, a CPRA official told us that building trust and communicating projects necessity with external stakeholders is extremely important when prioritizing projects because some stakeholders will be directly impacted by certain projects. For this reason, according to CPRA officials, CPRA conducted extensive outreach with community groups and other stakeholders to understand their perspectives on projects under consideration and their potential impacts. In addition, past GAO work identifies agreement on roles and responsibilities as one of several practices to enhance and sustain collaborative efforts. According to our September 2012 report, this includes considering clarity of roles and responsibilities and articulating and agreeing to a process for making and enforcing decisions. <3.1.4. Determining How the Effort Will Be Funded> Determining how the effort will be funded includes identifying potential funding options (discussed later in this report) and establishing a budget for investments in resilience projects. Based on the domestic and international examples we reviewed, there are different ways to identify a budget for resilience projects. The budget for Canada s DMAF the equivalent of about US$1.5 billion over 10 years was established through the Canadian budget process. In contrast, Louisiana s CPRA used economic analysis to identify the optimal budget for the coastal master planning effort $50 billion with funds for specific projects to be solicited from various federal and nonfederal sources. <3.2. Step 2. 
Identify and Assess High-Risk Areas for Targeted Resilience Investment> High-risk areas for targeted resilience investment could include regions of the country at high risk for climate hazards, economic sectors at high risk (e.g., agriculture, health, or energy), or severe or costly expected climate hazards (e.g., sea level rise), based on our review of several reports, illustrative examples, and interviews with several stakeholders. According to the National Academies and several stakeholders we interviewed, climate resilience actions should address climate hazards that are acute (e.g., the risk of more frequent or intense extreme weather) and chronic (e.g., sea level rise). In Louisiana, CPRA identified two climate risks, flooding risk and loss of coastal land, for targeted resilience investment. The U.S. Climate Resilience Toolkit, a website designed to help people find and use tools, information, and subject matter expertise to build climate resilience, and several reports we reviewed identified several factors that influence a community's level of climate risk. This information can help decision makers identify high-risk areas for targeted resilience investment. First, a community's exposure is influenced by the population or assets exposed to a potential climate hazard (e.g., sea level rise, wildfire). For example, according to the Fourth National Climate Assessment, the expansion of human activity into forests and other wildland areas has been observed over the past few decades and is expected to further increase the exposure of people and property to fire risk. Second, the level of expected impact a community faces from a given climate hazard is influenced by the probability of a given climate hazard and its expected magnitude. Third, a community's vulnerability to these hazards is influenced by its sensitivity to a given climate risk and its adaptive capacity, that is, the ability to cope with stress or adjust to new situations. An area with high exposure but low sensitivity to a given climate hazard may have lower overall risk than an area with lower exposure to the same hazard but higher sensitivity. The degree of adaptive capacity can also serve to increase or decrease risks. For example, according to the Fourth National Climate Assessment, tribal nations are especially vulnerable to climate change because of their reliance on threatened natural resources for their cultural, subsistence, and economic needs. We reported in September 2017 that while estimates of the economic effects of climate change are imprecise due to modeling and information limitations, they can convey useful insight into broad themes about potential damages in different U.S. sectors or regions. This information could help decision makers identify significant climate risks as an initial step toward managing them and provide insight into high-risk areas for targeted investment. For example, we reported in September 2017 that the two national-scale studies available at the time that examined the economic effects of climate change across U.S. sectors suggested that the potential economic effects of climate change could be significant and unevenly distributed across sectors and regions. According to one of the studies, the Southeast, Midwest, and Great Plains regions likely will experience greater combined economic effects than other regions, largely because of coastal property damage in the Southeast and changes in crop yields in the Midwest and Great Plains. (See fig. 4).
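To illustrate how the exposure, likelihood, and vulnerability factors described above could be combined into a rough screening measure, the following minimal Python sketch computes a relative risk index for a few hypothetical areas. The factor names, scoring scale, and values are illustrative assumptions for demonstration only; they are not drawn from the National Climate Assessment, the U.S. Climate Resilience Toolkit, or any federal dataset.

```python
# Illustrative screening of hypothetical areas by relative climate risk.
# All inputs are assumed scores on a 0-to-1 scale; they are not real data.

def risk_index(exposure, likelihood, sensitivity, adaptive_capacity):
    """Combine the factors discussed above into a single screening score."""
    # Vulnerability rises with sensitivity and falls with adaptive capacity.
    vulnerability = sensitivity * (1.0 - adaptive_capacity)
    return exposure * likelihood * vulnerability

# Hypothetical areas: (exposure, likelihood, sensitivity, adaptive capacity)
areas = {
    "Coastal county A (sea level rise)": (0.9, 0.8, 0.7, 0.4),
    "Plains county B (drought)": (0.6, 0.7, 0.8, 0.3),
    "Mountain county C (wildfire)": (0.5, 0.6, 0.6, 0.6),
}

ranked = sorted(areas.items(), key=lambda item: risk_index(*item[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: relative risk index {risk_index(*scores):.2f}")
```

Consistent with the discussion above, an area with high exposure but low sensitivity can rank below an area with lower exposure but higher sensitivity, because the vulnerability term reflects both sensitivity and adaptive capacity.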
In addition, several stakeholders told us that USGCRP s National Climate Assessment, which describes potential climate change risks to the United States, could help inform decisions about which regions of the country or climate risks to target for resilience investment. In addition, the Notre Dame Global Adaptation Initiative has developed an interactive database that provides information on the level of climate risk U.S. cities face and these cities readiness to enhance resilience. Nevertheless, one official from the Mitigation Framework Leadership Group noted that identifying climate risks is challenging, in part, because opinions about which risks are most urgent will vary according to the perspective of the observer. According to the National Academies, even though there are still uncertainties about the nature, timing, and magnitude of climate change impacts, mobilizing now to increase the nation s resilience can be viewed as an insurance policy against climate change risks. <3.3. Step 3. Identify Potential Project Ideas> Identifying potential project ideas that align with high-risk areas for targeted resilience investment is the third step in the process for identifying and prioritizing climate resilience projects for federal investment. Potential projects may differ in purpose and location and could include constructing hard infrastructure (e.g., flood defenses such as seawalls) and natural infrastructure (e.g., wetlands in coastal areas) to protect against climate hazards, relocating a community out of harm s way, or developing a suite of projects designed to collectively address a climate hazard (e.g., wildfire risk or drought) in a particular region of the country, according to several stakeholders we interviewed and based on our review of several reports. From our interviews with several stakeholders and our review of our examples from Canada and Louisiana, we noted two methods for identifying ideas for resilience projects bottom up and top down that can be used individually or in combination. <3.3.1. Bottom-Up Method> Several stakeholders told us that project ideas could come from a bottom-up method in which the federal government seeks proposals from tribal, state, and local governments; regional groups; or other stakeholders for projects. For example, Infrastructure Canada, the federal department that administers the DMAF, sought project ideas from provinces, territories, municipal and regional governments, indigenous groups, and others. Under the DMAF, these entities applied directly to Infrastructure Canada for funding. Likewise, in Louisiana, CPRA also used a bottom-up method to identify projects by allowing citizens, state agencies, nongovernmental organizations, academics, and others to submit project ideas. Where necessary, staff at CPRA developed the more detailed plans needed to evaluate and operationalize the projects. CPRA officials said that involving the communities where climate resilience projects will be located in the project identification process helped create support for these projects. Two stakeholders explained that the process for identifying potential project ideas must be sensitive to the fact that some communities do not have the administrative capacity to develop proposals. Otherwise, project ideas will primarily come from communities with ample institutional capacity, and locations with less administrative capacity and the climate risks associated with these locations will be missed. 
According to a 2014 report by the President s State, Local, and Tribal Leaders Task Force on Climate Preparedness and Resilience, the federal government can drive more resilient community choices by, among other things, providing technical assistance to states, territories, tribes, and communities that lack capacity to adapt to climate change. In 2014, HUD launched the National Disaster Resilience Competition to fund disaster recovery and long-term community resilience in parts of the country that had recently been affected by major disasters. During the first phase of the competition, eligible states and communities impacted by a disaster from 2011 through 2013 could obtain technical assistance through resilience workshops. According to HUD, these workshops provided information and expertise to help communities understand resilience and identify various threats, hazards, economic stresses, and other potential shocks that could impact each community. The workshops also offered eligible applicants tools and concepts to better identify and assess their risk, engage with their communities, choose resilience- building opportunities, and develop strong applications. <3.3.2. Top-Down Method> Several stakeholders told us that projects could be identified through a top-down method, in which potential projects would be identified by an interdisciplinary group of federal officials and other experts. According to one stakeholder, a top-down method could facilitate consideration of cross-cutting projects that address multiple climate risks and regions of the country. In addition, according to two stakeholders, such a top-down method could help identify projects unlikely to be suggested by local stakeholders for various reasons, such as the local communities not having the administrative capacity to develop and submit such proposals or a local community s interest being at odds with the national interest (e.g., relocation of a high-risk community when relocation would result in the loss of local tax revenue). However, officials from the Mitigation Framework Leadership Group explained that without the involvement of communities and prioritization of local needs, a top-down approach could be viewed as disconnected from community needs. In Louisiana, CPRA supplemented its bottom-up method with top-down identification of additional potential projects by, among other things, reconsidering past project proposals that were not selected and working with stakeholders to design potential projects. <3.4. Step 4. Prioritize Climate Resilience Projects> Prioritizing projects is the fourth key step in the process for identifying high-priority projects for federal investment. Based on our review of several reports and interviews with several stakeholders, prioritizing projects for federal investment should involve evaluating individual projects using scientific and data-based processes. For example, according to a 2010 report by the National Academies, managing risk in the context of enhancing resilience to climate change involves using the best available social and physical science to understand the likelihood of climate impacts and their associated consequences and then selecting and implementing the response options that seem most effective. Stakeholders we interviewed, the Louisiana example, and our past work indicate the need to solicit feedback from communities on the potential impacts of proposed projects. 
Furthermore, according to several stakeholders we interviewed, projects should be prioritized by an independent, interdisciplinary group of experts capable of assessing projects against measurable criteria. For example, according to Canadian officials, Infrastructure Canada seeks input on potential projects from two committees of experts: the first composed of a panel of experts from other federal departments, and the other composed of nonfederal experts, including urban planners, sustainability professionals, and individuals with various regional expertise. We identified several potential criteria and tools that could be used to evaluate projects and identify those that are high priority, as described below. <3.4.1. Potential Criteria for Evaluating Projects> We identified various potential criteria for evaluating projects and assigning priority for federal investment, based on our review of reports, interviews with stakeholders, and the Louisiana and Canadian examples. Potential criteria fell into three general categories: goal-oriented criteria (i.e., criteria that measure the extent to which a project enhances resilience and meets other goals), efficiency criteria (i.e., criteria that measure a project's ability to maximize efficiency, including by maximizing benefits and minimizing costs), and administrative criteria (i.e., other criteria that program administrators may want to consider). See table 1 for more details. The federal government can select a limited number of criteria for evaluation that align with the overall strategic goals of the climate resilience investment effort, based on our discussions with stakeholders. Goal-oriented criteria. We identified several goal-oriented criteria (criteria that measure the extent to which a project enhances resilience to climate change and meets other goals) that decision makers may want to consider when evaluating which projects to prioritize, based on several reports we reviewed and stakeholders we interviewed. Several reports and several stakeholders suggested prioritizing projects that, among other things, focus on severe or costly climate hazards as well as climate hazards about which there is the most scientific certainty. Several stakeholders we interviewed explained that when prioritizing projects for implementation, it is important to consider a project's potential to enhance resilience by protecting human lives, health, and safety, and assets that are critical, high-value, or culturally significant. In addition, several stakeholders told us that decision makers should not place too much emphasis on the monetary value of avoided property losses from a project because doing so can overemphasize projects that protect high-value assets and leave socially vulnerable populations with limited economic resources less protected. According to one report, the loss of assets is more difficult for a poor household to absorb than for a wealthy household that has more assets to begin with and more access to insurance and credit. Similarly, the Fourth National Climate Assessment notes that poor or marginalized populations often face a higher risk from climate change because they live in areas with higher exposure, are more sensitive to climate impacts, or lack the capacity to respond to climate hazards.
Several stakeholders told us that to account for a lack of social equity, it is important to prioritize projects in communities that have limited capacity to enhance resilience without federal financial assistance, including communities with limited financial means. In addition to these factors, several reports and several stakeholders discussed the importance of considering a project's impacts on the environment, including its ability to protect unique or sensitive environmental habitats or species. Finally, several reports discussed the importance of considering the potential system-wide impacts of a project, including a project's potential to provide benefits as well as the potential that risk may be transferred to neighboring communities. The DMAF applicant's guide provides an example of potential risk transfer, explaining that the construction of new dikes along a river to protect a segment of the floodplain may confine the river, raising water levels upstream and increasing the velocity of the river downstream. This may reduce the hazard in the segment of river immediately adjacent to the structure but will transfer risk to upstream and downstream communities. Efficiency criteria. We identified several efficiency criteria (criteria that measure a project's ability to maximize net benefits) that decision makers may want to consider when evaluating which projects to prioritize. Several reports we reviewed identified the importance of considering how a project's expected benefits compare to its costs to help ensure a project represents an efficient use of federal dollars. With respect to costs, one stakeholder identified the importance of considering the current costs of implementing a project as well as how costs might change in the future if a project's implementation is delayed to a later date. With respect to benefits, several stakeholders indicated that while it can be difficult to estimate the monetary value of some benefits, it is important to consider all expected benefits, including co-benefits, as fully as possible to draw accurate conclusions about how a project's benefits compare to its costs. For example, several stakeholders discussed the need to account for future benefits because much of the value of a climate resilience project may be realized far in the future as climate risks become more pronounced. In addition, several reports identified ways to account for uncertainty about the specific nature of future climate risks when making decisions about which projects to prioritize. This includes, for example, prioritizing projects that provide benefits under a wide range of future climate scenarios or prioritizing projects that can be modified if future climate conditions are different than expected. In addition to these considerations, several stakeholders also suggested considering the long-term viability of communities being helped by a project. These stakeholders explained that some communities may face climate risks that are so severe over the long term that they preclude cost-effective investments in resilience. They explained that rather than make costly resilience investments in these communities, a more efficient use of federal funds might involve making investments in projects that help transition a community to a safer location. Similarly, according to a 2015 study by the U.S.
Army Corps of Engineers, given current and projected sea level and climate change trends, some of the built environment will become unsustainable for communities presently located there, which may mean that communities may have to relocate in a responsible manner to sustain their economic viability and social resilience. Another stakeholder suggested prioritizing resilience projects that are unlikely to be funded without federal investment, such as projects for the public good that do not generate revenue and likely would not attract private investors. Administrative criteria. We identified several additional criteria that federal decision makers investing in climate resilience projects may want to consider when evaluating which projects to prioritize, including whether the project is feasible and timely. One stakeholder identified the importance of using federal dollars to invest in projects with novel resilience techniques since these projects otherwise might be unlikely to receive investment from other sources. For example, the Canadian DMAF awards merit to projects that offer effective solutions through unique innovative ideas. One stakeholder suggested that the federal government may want to consider the overall distribution of projects across hazards and regions to ensure that all hazards and regions of the country are getting at least some investment in resilience. <3.4.2. Tools for Evaluating Projects> Based on our review of several reports and illustrative examples, various tools used individually or in combination could help decision makers evaluate projects in order to identify high-priority ones and visualize project trade-offs. For example, using multi-criteria analysis involves decision makers identifying potential criteria, assigning weights to the criteria, ranking proposed projects against the weighted criteria, and using the results to compare projects and inform decisions about which projects to implement. In Canada, officials with the DMAF use multi-criteria analysis to rank potential resilience projects against multiple criteria including the extent to which projects strengthen community resilience and reduce the impacts of natural disasters. Quantitative modeling is another tool that can help decision makers visualize the potential benefits and costs of proposed projects under multiple future climate change scenarios, and thus facilitate identification of high-priority projects. For example, in Louisiana, CPRA used computer modeling tools to evaluate how projects could reduce future land loss and flooding risk, among other effects. To account for uncertainty about future climate and economic conditions, the modeling tools estimated project outcomes under multiple future scenarios representing varied climate conditions (e.g., sea level rise and the frequency and intensity of storms), economic growth conditions, and other factors. According to the Comprehensive Master Plan for a Sustainable Coast, information from the modeling tools helped support deliberations between CPRA and coastal stakeholders that helped identify high-priority projects for implementation. <3.5. Step 5. Implement High- Priority Projects> High-priority resilience projects can be implemented as funds become available, while decision makers consider the optimal timing of project implementation. For example, in Louisiana s coastal master planning effort, CPRA identified $50 billion in projects to be implemented as various federal and nonfederal funding sources become available. 
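Although the multi-criteria analysis and scenario modeling described above under tools for evaluating projects would in practice rely on detailed data and expert judgment, a minimal sketch can show the mechanics of weighted scoring. In the Python example below, the criteria, weights, project names, and scores are hypothetical assumptions for illustration; they are not the criteria used by Infrastructure Canada for the DMAF or by CPRA.

```python
# Illustrative multi-criteria scoring of hypothetical climate resilience projects.
# Criteria, weights, and scores are assumptions for demonstration, not program data.

criteria_weights = {
    "risk_reduction": 0.40,  # expected reduction in losses of life and property
    "benefit_cost": 0.30,    # discounted benefits relative to costs
    "social_equity": 0.20,   # protection for communities with limited capacity
    "co_benefits": 0.10,     # environmental or economic co-benefits
}

# Scores on a 0-to-10 scale assigned by a hypothetical expert panel.
projects = {
    "Seawall with wetland restoration": {
        "risk_reduction": 8, "benefit_cost": 6, "social_equity": 5, "co_benefits": 7},
    "Voluntary buyout and relocation": {
        "risk_reduction": 9, "benefit_cost": 5, "social_equity": 8, "co_benefits": 4},
    "Wildfire fuel-reduction program": {
        "risk_reduction": 6, "benefit_cost": 8, "social_equity": 6, "co_benefits": 6},
}

def weighted_score(scores):
    # Sum of each criterion score multiplied by its assigned weight.
    return sum(weight * scores[criterion] for criterion, weight in criteria_weights.items())

for name, scores in sorted(projects.items(), key=lambda p: weighted_score(p[1]), reverse=True):
    print(f"{name}: weighted score {weighted_score(scores):.2f}")
```

In practice, such a ranking would be only one input among several; decision makers would also test how sensitive the ranking is to the assumed weights and to project performance under multiple climate scenarios, consistent with the scenario-based modeling CPRA used.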
CPRA sequences project implementation based on project effectiveness and benefits in the near term or the long term. See figure 5 for completed, ongoing, and planned projects. Project implementation may be influenced by the presence of windows of opportunity, that is, periods of time when outside factors make it advantageous or cheaper to implement a project, based on our review of several reports. For example, according to the Fourth National Climate Assessment, many jurisdictions and businesses have significant stocks of aging transportation, water, energy, housing, and other infrastructure, and new infrastructure investments and capital stock turnover provide one particularly favorable opportunity for low-cost, proactive climate resilience investment. In addition to the availability of funding and windows of opportunity, projects may also need final approval from a decision-making entity (the Minister of Infrastructure, in the case of Canada's DMAF) before implementation. In the case of Louisiana, the state legislature must approve the overall master plan, although, according to a CPRA official, the legislature does not approve the inclusion of individual projects or project concepts. <3.6. Step 6. Monitor Projects and Climate Risk> Monitoring the projects being implemented and the state of climate risks can provide information to inform future decisions about high-priority climate resilience projects for federal investment. According to the 2010 report by the National Academies, policy decisions to manage risk can be improved if they incorporate the concept of adaptive management, that is, monitoring progress in real time and changing management practices based on learning about and recognizing changing conditions. As an example, Louisiana's CPRA monitors the performance of projects and the condition of the Louisiana coast, using the results from these activities to adjust project management actions and inform future coastal master planning efforts. We identified two options for focusing federal funding on high-priority climate resilience projects: coordinating funding provided through multiple existing federal programs with various purposes and creating a new federal funding source specifically for high-priority climate resilience projects. These options have strengths and limitations. In addition, our analysis of these sources identified opportunities to increase the climate resilience impact of these two funding options. <3.7. Options for Focusing Federal Funding on High-Priority Climate Resilience Projects Have Strengths and Limitations> Options for focusing federal funding on high-priority climate resilience projects, namely coordinating funding provided through multiple existing federal programs with varied purposes and creating a new federal funding source specifically for high-priority climate resilience projects, have strengths and limitations, based on our review of our prior work, relevant reports, and the Louisiana and Canadian examples, as well as interviews with stakeholders. See table 2. One option for focusing funding on high-priority climate resilience projects involves coordinating funds from multiple existing federal programs with varied purposes that were not designed specifically for climate resilience but whose purpose may be compatible with these projects. For example, the state of Louisiana's coastal master planning effort uses multi-program coordination to fund projects.
Specifically, funding for high-priority resilience projects identified in the master plan is provided via several federal and nonfederal programs designed for wetlands restoration, hurricane risk reduction, oil spill recovery, and community development, among other purposes, when the program s purpose aligns with the project s purpose. For example, the National Fish and Wildlife Foundation Gulf Environmental Benefit Fund established in early 2013 as an outcome of plea agreements for the Deepwater Horizon explosion and oil spill has been used to fund some projects consistent with the master plan that restore barrier islands and implement river diversions. Administrators of these federal and nonfederal funding programs, rather than CPRA, make decisions about how funds are to be spent, but they coordinate with CPRA to ensure decisions are consistent with the master plan. As with the Louisiana example, high-priority climate resilience projects could be funded via one or more federal programs compatible with the project s purpose. We identified federal programs related to flood control and hazard mitigation that could be used to fund individual projects that may convey climate resilience benefits, including FEMA s hazard mitigation assistance programs (i.e., Building Resilient Infrastructure and Communities, Pre-Disaster Mitigation, Flood Mitigation Assistance, and Hazard Mitigation Grant programs), HUD s Community Development Block Grant Disaster Recovery program, and the U.S. Army Corps of Engineers civil works program. These programs are managed individually within their agencies and operate under different statutory authorities. However, no federal entity oversees funding for high-priority climate resilience projects, for example, by identifying which existing federal programs could be used to fund particular high-priority projects and coordinating the use of these programs to fund particular projects. Based on our review of the Louisiana example, interviews with stakeholders, and a report we reviewed, we identified several strengths of coordinating multiple existing federal programs with varied purposes to fund high-priority climate resilience projects: Leveraging existing programs. This option leverages an existing architecture of related federal programs and could encourage consideration of climate change in routine agency decisions, based on our interviews with several stakeholders and review of a related report. The federal government already has programs that address natural resources (e.g., coastlines, water resources, and forests) and human systems (e.g., public health, housing, and infrastructure) that will be affected by climate change, according to a 2010 report we reviewed and two stakeholders we interviewed. According to this report and stakeholders, rather than create an additional program to address climate change, it would be better to incorporate consideration of climate change into existing federal decision-making processes. Providing funding for high-priority climate resilience projects via existing federal programs could encourage agencies to think more intentionally about climate change on a regular basis when implementing their programs, according to several stakeholders we interviewed. Providing access to specialists and expertise. 
Federal officials who have specialized, sector-specific knowledge (e.g., infrastructure, agriculture, or ecosystems) that can be useful when evaluating which projects to fund may have a greater opportunity to provide input if funding decisions are made within existing federal programs, according to several stakeholders. According to one stakeholder, specialized knowledge that resides within federal agencies is necessary when evaluating the trade-offs of potential projects that address diverse systems and assets. This stakeholder explained that, for example, evaluating a project to strengthen a shipping port against hurricanes requires different expertise than evaluating a project to protect the surrounding community against these hurricanes, and agency officials' specialized knowledge would be useful in evaluating the value of such distinct projects. Providing access to multiple funding sources. Using multiple existing federal programs means that multiple potential funding streams are available for projects. For example, one stakeholder whose community previously used federal funding to implement large-scale resilience projects said that when funding from one program is not available (for example, because the project does not match that program's purpose or because of insufficient funds), having multiple existing programs from which to seek funding is advantageous. Similarly, Louisiana makes use of multiple federal and nonfederal funding sources to implement projects identified through its master planning effort. On the basis of our review of the Louisiana example, relevant reports, and interviews with stakeholders, as well as our past work, including the Disaster Resilience Framework, we identified several limitations of using existing programs to fund high-priority climate resilience projects: Administratively challenging to coordinate. Several stakeholders and a 2016 report we reviewed identified potential administrative challenges associated with using multiple existing programs with varied purposes to fund high-priority projects. For example, CPRA officials told us that the process of coordinating funding from multiple programs for coastal projects is complicated and requires dedicated staff to identify programs, assess whether projects meet program funding criteria, apply for funds, and ensure that program requirements are met. Several stakeholders told us that the budgets of existing programs may be too limited to fund large-scale climate resilience projects and that acquiring funding for a single project through multiple federal programs can be difficult. For example, FEMA officials told us that a potentially relevant FEMA program, the Pre-Disaster Mitigation Grant Program, has limited overall funding and restricts the financial size of a project, making it challenging to fund large-scale projects such as community relocation. Furthermore, according to a 2016 report about lessons learned from the HUD Rebuild by Design competition, grantees faced challenges combining funds from multiple programs to support comprehensive rebuilding visions because each program had its own procedural and administrative requirements, including different timelines for how and when the funds were made available. Similarly, according to our Disaster Resilience Framework, when multiple programs and activities and multiple funding streams are involved, there is a risk that the array of requirements will increase administrative complexity.
As we reported in July 2015, jurisdictional officials engaged in disaster recovery have encountered complex review processes, conflicting federal guidance, and competing federal priorities that diminished the desire of localities to participate in resilience programs. Programs may be siloed. Existing federal programs may be siloed, according to several stakeholders and two reports we reviewed, meaning that agencies may have limited visibility over how their projects affect other agencies' mission areas or a limited ability to consider those effects. The two reports we reviewed identified challenges with siloed agency programs, including that they can discourage more holistic resilience projects with benefits in multiple sectors. For example, according to the 2016 report about lessons learned from the HUD Rebuild by Design competition, program rules may restrict the use of federal funds to certain activities (e.g., flood control), which can make it difficult to justify the additional cost of a more holistic resilience project with benefits in other sectors (e.g., a larger-scale flood control project with water quality co-benefits). According to the National Academies, climate resilience activities have the potential to be redundant or to work at cross purposes if they are not coordinated across sectors, actors, scale, and time frames. For example, the National Academies identified potential tradeoffs between resilience activities in the agricultural, water, and ecosystem sectors, such as increased irrigation in response to drought competing with natural ecosystem flows and domestic water needs. Climate resilience is not the primary focus. Though it may be possible to use some existing federal programs to fund high-priority climate resilience projects, the primary purpose of these programs is not enhancing resilience to climate change, and they are not coordinated toward a common climate resilience goal, according to our work for this report. As a result, relying on existing programs for funding could result in inadvertent, ad hoc funding rather than intentional, coordinated, and strategic funding of high-priority projects, based on our past work and interviews with several stakeholders. In particular, according to FEMA officials, statutory and regulatory limitations could make it challenging to incorporate consideration of climate resilience into existing programs. Furthermore, according to several stakeholders, program funding criteria may not relate directly to climate resilience; this can lower the chance that climate resilience projects will receive funding. In our May 2014 report about DOD's consideration of climate change in infrastructure planning, we reported that military installation officials rarely proposed climate resilience projects because the services' criteria for ranking and funding potential military construction projects did not include climate change adaptation. In addition, a 2018 report about federal resilience policy we reviewed and several stakeholders we interviewed identified challenges with how cost-benefit formulas account for future climate risk when evaluating the costs and benefits of a project under consideration. Two stakeholders we interviewed told us that the discount rate (the interest rate used to convert benefits and costs occurring in different time periods to a common present value) used in federal cost-benefit formulas may too heavily discount future benefits.
They explained that when benefits accrue over long time horizons, this can result in future climate benefits appearing small relative to the current cost of project implementation and thus result in some climate resilience projects not being funded. Existing programs may be reactive, not proactive. Some existing programs for example, HUD s Community Development Block Grant Disaster Recovery program and FEMA s Hazard Mitigation Grant Program are limited to funding resilience projects after a disaster occurs, which may result in reactive instead of proactive funding, based on our review of our past work and discussions with several stakeholders. We concluded in July 2015 that funding hazard mitigation efforts in a post-disaster environment can create a reactive and fragmented approach in which disasters determine when and for what purpose the federal government invests in disaster resilience. Furthermore, tying climate resilience funding to a disaster can result in projects going unfunded in communities where there has not yet been a disaster but where there are legitimate risks from future climate change impacts including chronic climate hazards such as sea level rise according to several stakeholders we interviewed. For example, our past work and several stakeholders identified challenges in accessing funding from existing federal programs to relocate communities threatened by climate hazards, such as Alaskan native villages threatened by flooding and erosion caused by sea level rise. According to our June 2009 report, since many Alaskan native villages facing gradual erosion problems had not received a declared disaster designation, they did not qualify for some FEMA disaster recovery and hazard mitigation programs. In addition, according to a 2016 report we reviewed, disaster recovery programs tend to be reactive and backward looking, focusing on areas immediately affected by a disaster. This can limit the ability of grantees to fund projects that could more holistically reduce the full suite of future risks that a region or community face. <3.7.1. New Climate Resilience Funding Source> Another option for focusing federal funding on high-priority climate resilience projects involves creating a new funding source specifically for such projects. We identified two main ways a new funding source could be designed in the United States: (1) a federal financial assistance program that could provide grants, loans, or loan guarantees to nonfederal entities implementing high-priority climate resilience projects, or (2) a climate resilience infrastructure bank that could combine federal funds with funds from other sources to provide funding to nonfederal entities for implementing high-priority climate resilience projects. The government of Canada employs both of these methods. Specifically, Canada created the DMAF as a one-time, centralized fund of about US$1.5 billion dollars for climate resilience projects over a 10-year period. Applications not eligible for or not selected to receive DMAF funding could be eligible under other infrastructure programs. Projects that could generate revenue are shared with Canada s Infrastructure Bank for consideration. Based on our review of the DMAF and interviews with stakeholders, we identified several strengths of creating a new funding source for high- priority climate resilience projects: Administrative simplicity. 
Several stakeholders said that a new funding source avoids the administrative challenge of coordinating multiple funding sources to implement a large project or portfolio of projects. According to two stakeholders, such an option would avoid the challenge of having to utilize multiple programs with varying program rules, solicitation periods, and funding terms. Another stakeholder suggested that a single source would make it easier to track spending on climate resilience projects. Focusing on high-priority climate resilience projects. Several stakeholders said that an advantage of a new funding source is that it would provide dedicated funding for projects undertaken for the explicit purpose of climate resilience. For example, Canadian officials said that with the DMAF, climate resilience projects do not have to compete with other infrastructure projects for funding as they do within other programs administered in Canada that include multiple eligible project categories (e.g., water, wastewater, public transit). Canadian officials told us that this increases the likelihood that large-scale, nationally significant climate resilience projects will be funded. According to another stakeholder, a new funding source for high- priority climate resilience projects would allow for a proactive focus on the most pressing climate resilience needs instead of reactive project funding through post-disaster spending. In addition, another stakeholder told us this option could encourage communities to think intentionally about developing resilience, rather than climate resilience being an afterthought. Furthermore, several stakeholders said that such a funding source could be used for projects that otherwise would not receive funding through existing programs. For example, some projects may not receive funding because they are not compatible with current programs or because current programs have limited funding. Encouraging cross-sector projects. Several stakeholders told us that a new funding source for high-priority climate resilience projects could encourage cross-sector projects designed to achieve benefits in multiple sectors. According to one of these stakeholders, a dedicated fund for climate resilience could allow experts from multiple sectors such as infrastructure, housing, transportation, and health to collaborate on projects, leading to more creative, comprehensive approaches to enhance community resilience than would occur when funding projects through individual, existing federal programs. According to the Fourth National Climate Assessment, exploring the climate resilience nexus between sectors can identify co-benefits of resilience solutions and inform cost-effective resilience strategies. For example, the assessment describes co-benefits that resilience actions related to water consumption can have on the electricity sector. According to the assessment, California s mandate to reduce urban water consumption to address drought conditions in 2015 resulted in significant reductions in both water use and use of electricity to treat and convey water and wastewater. Based on interviews with stakeholders, we identified some limitations of creating a new funding source for high-priority climate resilience projects: Practical challenges. Several stakeholders identified practical challenges with a funding source specifically for high-priority climate resilience projects. 
For example, such a funding source in the United States does not exist and would have to be created, which would require congressional authorization. Furthermore, several stakeholders identified decisions that would have to be made about how to design such a funding source, including which agencies would be responsible for administering the fund. Two stakeholders identified additional challenges to success, such as designing effective programmatic rules and eliminating duplication with existing programs. For instance, if the funding source had overly restrictive or poorly designed rules, it might be challenging to use and provide only limited benefits relative to existing programs, according to one of these stakeholders. Discouraging mainstreaming in existing federal programs. Several stakeholders raised concerns that a new funding source for high-priority climate resilience projects could discourage mainstreaming climate change considerations into existing federal programs or lead to the elimination of other sources of funding for climate resilience projects. Several stakeholders explained that mainstreaming is a fundamental way the federal government will enhance resilience to climate risks. In particular, several stakeholders raised concerns that if federal agencies viewed a single funding source specifically for climate resilience projects as sufficient for addressing climate resilience, federal agencies might be less likely to consider climate change impacts when making routine agency decisions or place a lower value on climate resilience project attributes when making funding decisions. <3.8. Opportunities Exist to Increase the Climate Resilience Impact of Federal Funding Options> Opportunities exist to increase the climate resilience impact of options for focusing federal funding on high-priority climate resilience projects, based on our review of our past work, related reports, an international standard, and the Louisiana and Canadian examples, as well as interviews with stakeholders: Using both existing and new funding options. Several stakeholders told us that using both funding options (multiple existing federal programs with varied purposes and a new funding source for high-priority climate resilience projects) in a strategic, coordinated way could help increase the impact of federal investment. Several stakeholders told us that directing both funding options at high-priority projects could result in a more effective approach that makes it less likely that high-priority projects fall through the cracks and more likely that these projects will help agencies work toward a common strategic goal. Two stakeholders told us that in practice, multiple existing federal funding sources that are not specific to climate resilience could be coordinated to fund projects when their purposes and rules align and adequate funding is available. A funding source specifically for climate resilience could be used to fund proposed projects when no related program exists or when existing programs do not have sufficient funding available, according to these and other stakeholders. Helping ensure adequate and consistent funding. Several stakeholders we interviewed identified the need for adequate and consistent funding to implement high-priority climate resilience projects.
For example, according to one stakeholder we interviewed, inconsistent, inadequate funding makes it difficult to complete large- scale projects and can lead to additional costs if significant delays occur during which existing work deteriorates. In addition, according to some international officials we interviewed for a May 2016 report, long-term consistency in budgeting provides predictable, reliable resources for climate resilience projects. According to USGCRP s Fourth National Climate Assessment, adequate funding is a factor that contributes to the successful adoption and implementation of climate resilience by public-sector organizations. Furthermore, an industry standard identified the need to ensure that resources including financial, human, and technical resources needed for climate resilience actions are available. In addition to adequate and consistent funding, funding options should be designed to accommodate long-term projects since high-priority climate resilience projects can take multiple years to design and implement, according to two stakeholders we interviewed. Encouraging nonfederal investment. Several stakeholders we interviewed told us that the federal government could use a federal climate resilience investment effort to encourage nonfederal investment in high-priority climate resilience projects, thereby increasing the impact of federal investment. For example, several stakeholders identified the importance of a cost-share component so that funding recipients are invested in a project s success. Canada s DMAF encourages nonfederal investment by partially funding projects of national significance and requiring different levels of cost-share from funding recipients, ranging from 25 percent for Indigenous recipients to 75 percent for private-sector and other for-profit recipients. Several stakeholders also identified potential funding mechanisms for example, public-private partnerships and loan guarantees that could leverage federal dollars to encourage additional investment in climate resilience projects by nonfederal entities, including the private sector. According to the 2014 President s State, Local, and Tribal Leaders Task Force report, one way the federal government can drive more resilient community choices is by encouraging innovative approaches that leverage private capital. Encouraging complementary resilience activities. To increase the impact of federal investment, a federal resilience investment effort presents an opportunity to encourage complementary resilience activities by nonfederal actors such as states, localities, and private- sector partners, based on interviews with several stakeholders, the Canadian example, and reports we reviewed. Several stakeholders suggested establishing conditions that funding recipients must meet in exchange for receiving federal funding. Alternatively, according to the 2014 President s State, Local, and Tribal Leaders Task Force report and two stakeholders we interviewed, the federal government could use incentives (e.g., providing greater federal cost-share or giving additional preference in the project prioritization process) to encourage complementary resilience activities by nonfederal actors. Furthermore, our Disaster Resilience Framework states that incentives can make long-term, forward-looking risk reduction investments more viable and attractive among competing priorities. 
Specifically, incentives can lower the costs or increase the benefits of risk-reduction measures, which can help stimulate investment by state and local governments, individuals, and the private sector. The federal government could use a federal resilience investment effort to encourage several types of complementary resilience activities by nonfederal actors. For example, the federal government could encourage the use and enforcement of building codes that require stronger risk-reduction measures, according to two reports we reviewed and several stakeholders we interviewed. In the case of the DMAF, to be eligible for federal funding, all projects under the DMAF must meet or exceed building code requirements for their jurisdiction. In addition, several stakeholders suggested using a federal investment effort to encourage communities to limit or prohibit development in high-risk areas to minimize risks to people and assets exposed to future climate hazards. One example of this would be through zoning regulations. Another stakeholder suggested that communities receiving federal funding for resilience projects should be adequately insured against future climate risks so they have a potential source of funding for rebuilding in the event of a disaster. Allowing funds to be used at various stages of project development. Several stakeholders suggested that federal funds be allowed for use at multiple stages of project development such as project design, implementation, or monitoring to increase the impact of federal funds. For example, two stakeholders we interviewed told us that resilience projects can require significant amounts of design work to develop an implementable and effective project concept and that making funds available for project design could improve the quality of project proposals, thereby maximizing the impact of federal funds. Similarly, a CPRA official explained that many project proposals for Louisiana s Comprehensive Master Plan for a Sustainable Coast are in the concept stage when they are received so funds are needed to refine the concept and craft an implementable project design. In addition to providing federal funds for project design, one stakeholder suggested making federal funding available to measure project outcomes (e.g., how effectively projects increased resilience) to improve future decisions by both the federal government and others making resilience investments. <4. Conclusions> Individual federal agencies have provided ad hoc funding for projects that may convey some climate resilience benefits using existing federal programs. However, the federal government does not have a strategic approach for investing in climate resilience projects that targets federal resources toward projects that address the nation s most significant climate risks. USGCRP projects that disaster costs will likely increase as certain extreme weather events become more frequent and intense due to climate change. The rising number of natural disasters and increasing reliance on the federal government for assistance is a key source of federal fiscal exposure. Investment in climate resilience projects can help prepare the country for the effects of climate change. We found that to strategically identify and prioritize climate resilience projects for federal investment, the federal government could take six key steps, based on reports we reviewed, past GAO work, international standards, and stakeholders we interviewed. 
In addition, opportunities exist to increase the climate resilience impact of funding options, such as by encouraging the use of climate-resilient building codes. However, no federal agency, government-wide coordinating body, or other organizational arrangement has been established to periodically identify and prioritize climate resilience projects for federal investment. Our past work and other sources highlight the importance of a strategic and iterative risk-informed approach to climate change and the need to reduce the federal government s fiscal exposure. However, the federal government has made little measurable progress since 2017 to reduce its fiscal exposure to climate change. Although we have made 17 recommendations that address improving federal climate change strategic planning, as of August 2019, no action had been taken toward implementing 14 of those recommendations one dating back to 2003. A strategic and iterative risk-informed approach for identifying and prioritizing climate resilience projects for federal investment, with an appropriate organizational arrangement, could help target federal resources toward climate resilience projects that have the greatest expected net benefit and that address the nation s most significant climate risks. <5. Matter for Congressional Consideration> Congress should consider establishing a federal organizational arrangement to periodically identify and prioritize climate resilience projects for federal investment. Such an arrangement could be designed for success by considering the six key steps for prioritizing climate resilience investments and the opportunities to increase the climate resilience impact of federal funding options identified in our report. (Matter for Consideration 1) <6. Agency Comments> We provided a draft of this report to the U.S. Global Change Research Program, the Federal Emergency Management Agency, and the Mitigation Framework Leadership Group for review and comment. These entities provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Executive Director of the U.S. Global Change Research Program, the Acting Secretary of the Department of Homeland Security, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. Appendix I: Objectives, Scope, and Methodology In this report, we examine (1) the extent to which the federal government has a strategic approach for investing in climate resilience projects; (2) key steps that provide an opportunity for the federal government to strategically prioritize climate resilience projects for federal investment; and (3) strengths and limitations of options for focusing federal funding on high-priority climate resilience projects. To address all three audit objectives, we conducted semi-structured interviews with 35 stakeholders with relevant expertise, including federal officials, researchers, and consultants. 
We used a snowball approach to identify stakeholders with expertise on the topics addressed by our report. This involved identifying an initial list of stakeholders with expertise in climate resilience and hazard mitigation by reviewing related reports and considering stakeholder involvement in related present or past federal efforts, such as work conducted by the U.S. Global Change Research Program (USGCRP), the federal program responsible for coordinating climate change research and preparing the Fourth National Climate Assessment. We identified additional stakeholders with expertise in these and other relevant areas through interviews with this initial group of stakeholders and review of additional reports. We considered several factors when selecting stakeholders: the relevance of their expertise, the number of times they were recommended to us by other stakeholders as having relevant expertise, and their current or previous federal experience. We sought a balanced set of stakeholders with expertise in a variety of fields that could inform climate resilience decisions: climate resilience, decision sciences, hazard mitigation, economics and finance, insurance, engineering and project design, economic and community development, potentially related federal programs (e.g., Federal Emergency Management Agency hazard mitigation programs), and several affected resources (e.g., coasts, infrastructure, water resources, and ecosystems). We use the term “several” to represent three or more stakeholders or reports expressing a particular viewpoint. In other cases, we provide the exact number of stakeholders expressing a particular viewpoint. Because this is a nonprobability sample, our findings cannot be generalized to other stakeholders we did not interview. Rather, these interviews provided us with illustrative examples of (1) the extent to which the federal government has a strategic approach for investing in climate resilience projects, (2) key steps that provide an opportunity for the federal government to strategically prioritize climate resilience projects for federal investment, and (3) strengths and limitations of options for focusing federal funding on high-priority climate resilience projects. In addition, the specific areas of expertise varied among the stakeholders we interviewed, so not all of the stakeholders commented on all of the interview questions we asked.

To determine the extent to which the federal government has a strategic approach for investing in climate resilience projects, we reviewed past GAO work on federal efforts related to climate resilience and climate change funding, as well as reports from the Congressional Research Service, the Congressional Budget Office, the Council on Climate Preparedness and Resilience, USGCRP, and other sources. We also reviewed federal documents, including the National Mitigation Investment Strategy, a national strategy for mitigating natural hazards. We interviewed officials from USGCRP and FEMA, the federal agency responsible for leading the Mitigation Framework Leadership Group, the interagency group that developed the National Mitigation Investment Strategy under Presidential Policy Directive 8. We also interviewed several other stakeholders about the extent to which the federal government has a strategic approach for investing in climate resilience projects and about the nature and scope of the Mitigation Framework Leadership Group's activities.
We reviewed federal documents and websites to identify instances in which federal programs and funding sources designed for other purposes, such as disaster funding, have been used to invest in climate resilience projects.

To identify key steps that provide an opportunity for the federal government to strategically prioritize climate resilience projects for federal investment, we reviewed our prior work related to risk management, climate change, climate resilience, and hazard mitigation, including our Disaster Resilience Framework and enterprise risk management report. We also reviewed approximately 50 reports and other sources to identify steps that provide an opportunity for the federal government to strategically identify high-priority climate resilience projects, several of which contained examples of potential criteria the federal government could consider when prioritizing these projects. We identified these reports and other sources through our review of other reports and related news, discussions with stakeholders, and searches of databases such as Scopus and ProQuest. The reports we reviewed included climate resilience planning guidebooks that outline steps communities can follow to design a resilience plan to address climate risks. We also interviewed stakeholders with relevant expertise to gather information on key steps the federal government could take and criteria it could consider to strategically prioritize climate resilience projects for federal investment. During the course of this work, we identified domestic and international examples of governments that invest in climate resilience and related projects. We selected two of these examples for more in-depth review and presentation in the report: the state of Louisiana's coastal master planning effort and the country of Canada's Disaster Mitigation and Adaptation Fund. These examples represent distinct approaches for investing in high-priority projects that help communities adapt to emerging risks such as those associated with climate change. We selected these examples for further review because they focus on projects that are large in scale, are of national or statewide significance, and address multiple risks; represent well-defined, current processes for identifying and prioritizing projects; and had sufficient information available to understand their approach.

To examine the strengths and limitations of options for focusing federal funding on high-priority climate resilience projects, we identified relevant examples of the strengths and limitations of federal funding options in several of the reports we mentioned above. Where appropriate, we supplemented this review with additional reports that discussed specific financial mechanisms that the federal government could use to fund large-scale climate resilience projects. We also interviewed stakeholders to discuss the strengths and limitations of options the federal government could use to fund climate resilience projects. When available, we gathered their views on specific funding sources that the federal government could use to fund large-scale climate resilience projects and on additional steps that the federal government could take to enable more targeted federal resilience investment. We conducted this performance audit from January 2018 to October 2019 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: GAO Contact and Staff Acknowledgments
<7. GAO Contact>
<8. Staff Acknowledgments>
In addition to the individual named above, Joe Thompson (Assistant Director), Paige Gilbreath (Analyst in Charge), Taiyshawna Battle, and Celia Rosario Mendive made key contributions to this report. Also contributing to this report were Colleen M. Candrl, Alicia P. Cackley, Kendall Childers, Steven Cohen, Christopher Curry, Cindy Gilbert, Kathryn Godfrey, Holly Halifax, Carol Henn, Susan Irving, Richard Johnson, Gwendolyn Kirby, Caroline N. Prado, Joseph Maher, Gregory Marchand, Diana Maurer, Kirk Menard, Tim Persons, William Reinsberg, Oliver Richard, Danny Royer, Jeanette Soares, Kiki Theodoropoulos, Sarah Veale, Patrick Ward, Jarrod West, Kristy Williams, Eugene Wisnoski, and Melissa Wolf.

Why GAO Did This Study
Federal funding for disaster assistance since 2005 has totaled at least $450 billion, including a 2019 supplemental appropriation of $19.1 billion for recent disasters. In 2018 alone, 14 separate billion-dollar weather and climate disaster events occurred across the United States, with total costs of at least $91 billion, including the loss of public and private property, according to the National Oceanic and Atmospheric Administration. Disaster costs will likely increase as certain extreme weather events become more frequent and intense due to climate change, according to the U.S. Global Change Research Program, a global change research coordinating body that spans 13 federal agencies. In 2013, GAO included “Limiting the Federal Government's Fiscal Exposure by Better Managing Climate Change Risks” on its list of federal program areas at high risk of fraud, waste, abuse, mismanagement, or most in need of transformation.
The cost of recent weather disasters has illustrated the need to plan for climate change risks and invest in climate resilience. Investing in climate resilience can reduce the need for far more costly steps in the decades to come.
The Disaster Recovery Reform Act of 2018 provides one potential source of funding for climate resilience projects. In particular, it allows the President to set aside up to 6 percent of the estimated aggregate amount of grants from certain programs under a major disaster declaration to implement pre-disaster hazard mitigation activities. Officials estimate funds for the related program will average $300 million to $500 million annually.
GAO was asked to review the federal approach to prioritizing and funding climate resilience projects that address the nation's most significant climate risks. This report examines (1) the extent to which the federal government has a strategic approach for investing in climate resilience projects; (2) key steps that provide an opportunity to strategically prioritize projects for investment; and (3) the strengths and limitations of options for focusing federal funding on these projects.
GAO reviewed relevant reports and interviewed 35 stakeholders with relevant expertise, including federal officials, researchers, and consultants. In addition, during the course of this work, GAO identified domestic and international examples of governments that invest in climate resilience and related projects. GAO selected two of these examples for in-depth review and presentation in the report: the state of Louisiana's coastal master planning effort and Canada's Disaster Mitigation and Adaptation Fund.
What GAO Found
The federal government has invested in projects that may enhance climate resilience, but it does not have a strategic approach to guide its investments in high-priority climate resilience projects. Enhancing climate resilience means taking actions to reduce potential future losses by planning and preparing for potential climate hazards such as extreme rainfall, sea level rise, and drought. Some federal agencies have made efforts to manage climate change risk within existing programs and operations, and these efforts may convey climate resilience benefits. For example, the U.S. Army Corps of Engineers' civil works program constructs flood control projects, such as sea walls, that may enhance climate resilience. However, additional strategic federal investments may be needed to manage some of the nation's most significant climate risks because climate change cuts across agency missions and presents fiscal exposures larger than any one agency can manage. GAO's analysis shows the federal government does not strategically identify and prioritize projects to ensure they address the nation's most significant climate risks. Likewise, GAO's past work shows an absence of government-wide climate change strategic planning.
As of August 2019, no action had been taken to implement 14 of GAO's 17 recommendations to improve federal strategic planning for climate resilience. GAO's enterprise risk management framework calls for reviewing risks and selecting the most appropriate strategy to manage them. However, no federal agency, interagency collaborative effort, or other organizational arrangement has been established to implement a strategic approach to climate resilience investment that includes periodically identifying and prioritizing projects. Such an approach could supplement individual agency climate resilience efforts and help target federal resources toward high-priority projects.
Six key steps provide an opportunity for the federal government to strategically identify and prioritize climate resilience projects for investment, as GAO found based on its review of prior GAO work, relevant reports, and stakeholder interviews (see figure).
GAO identified one domestic and one international example to illustrate these key steps: Louisiana's Coastal Protection and Restoration Authority (CPRA) coastal master planning effort and Canada's Disaster Mitigation and Adaptation Fund (DMAF).
In the domestic example, in 2005 the Louisiana legislature consolidated coastal planning efforts previously carried out by multiple state entities into a single effort led by CPRA to address the lack of strategic coordination. CPRA periodically identifies high-priority coastal resilience projects designed to address two primary risks: flooding and coastal land loss. To identify potential projects, CPRA sought project proposals from citizens, nongovernmental organizations, and others. To prioritize projects, CPRA used quantitative modeling to estimate project outcomes under multiple future scenarios of varied climate and other conditions and coordinated with stakeholders to understand potential project impacts. In 2017, CPRA identified $50 billion in high-priority projects to be implemented as funds become available.
In the international example, in 2018, the Canadian government launched the DMAF, a financial assistance program to provide US$1.5 billion over 10 years for large-scale, nationally significant projects to manage natural hazard risks, including those triggered by climate change. Infrastructure Canada, the entity responsible for administering the DMAF, seeks project ideas from provinces and territories, municipal and regional governments, indigenous groups, and others. These entities apply directly to Infrastructure Canada for funding. According to Canadian officials, two committees of experts—one composed of experts from other federal departments and the other composed of nonfederal experts (e.g., urban planners and individuals with regional expertise)—provide feedback on potential projects. These projects are prioritized based on multiple criteria such as the extent to which they reduce the impacts of natural disasters.
On the basis of GAO's review of relevant reports and past GAO work, interviews with stakeholders, and illustrative examples, GAO identified two options—each with strengths and limitations—for focusing federal funding on high-priority climate resilience projects. The options are (1) coordinating funding provided through multiple existing programs with varied purposes and (2) creating a new federal funding source specifically for investment in climate resilience.
A strength of coordinating funding from existing sources is access to multiple funding sources for a project. For example, one stakeholder GAO interviewed—whose community used federal funding to implement large-scale resilience projects—said that having multiple programs is advantageous because when funding from one program is not available—such as when the project does not match that program's purpose or when there are insufficient funds—funds could be sought from another program. A limitation of that option, according to CPRA officials, is that coordinating funding from multiple sources could be administratively challenging and could require dedicated staff to identify programs, assess whether projects meet program funding criteria, apply for funds, and ensure program requirements are met. Alternatively, one strength of a new federal funding source is that it could encourage cross-sector projects designed to achieve benefits in multiple sectors. For example, according to one stakeholder, such a funding source could allow experts from multiple sectors—such as infrastructure, housing, transportation, and health—to collaborate on projects, leading to more creative, comprehensive approaches to enhance community resilience. However, such a new funding source would have to be created, which would require Congressional authorization.
In addition, GAO identified opportunities to increase the climate resilience impact of federal funding options. For example, a federal resilience investment effort presents an opportunity to encourage several types of complementary resilience activities by nonfederal actors such as states, localities, and private-sector partners. In this example, the federal government could require or provide incentives for communities to use and enforce climate-resilient building codes or limit development in high-risk areas through zoning regulations.
What GAO Recommends
Congress should consider establishing a federal organizational arrangement to periodically identify and prioritize climate resilience projects for federal investment. Such an arrangement could be designed using the six key steps for prioritizing climate resilience investments and the opportunities to increase the climate resilience impact of federal funding options that are identified in this report.
The Federal Emergency Management Agency and two federal coordinating bodies reviewed a draft of this report and provided technical comments, which GAO incorporated as appropriate. |